The Morning After: Drones that can charge on power lines

Battery life always limits what a drone can do and how far it can go. So why not let it slurp from nearby power lines? (Well, there are reasons.)

Researchers at the University of Southern Denmark attached a gripper system to a Tarot 650 Sport drone, which they customized with an electric quadcopter propulsion system and an autopilot module. An inductive charger pulls current from the power line, letting the drone recharge five times over two hours in tests. The benefit here is that power lines already exist (duh), but there is a real concern that a drone could damage a line and knock out electricity for thousands.
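
For the curious, the trick is the same physics as a split-core current transformer: the alternating current in the line wraps the conductor in a magnetic field, and a coil clamped around it harvests that field. As a textbook magnetostatics sketch (standard formulas, not figures from the paper), the field at distance $r$ from a wire carrying current $I$, and the voltage induced in an $N$-turn coil, are

$$B(r) = \frac{\mu_0 I}{2\pi r}, \qquad \varepsilon = -N\,\frac{d\Phi}{dt}.$$

Because the field falls off with distance, the drone has to grip the line and clamp its coil around the conductor rather than simply hover nearby.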

— Mat Smith

The biggest stories you might have missed

DJI’s RS4 gimbals make it easier to balance heavy cameras and accessories

Apple Vision Pro, two months later

Kobo’s new ereaders include its first with color displays

You can get these reports delivered daily direct to your inbox. Subscribe right here!

The owner of WordPress has bought Beeper, that brazen messaging app

It challenged Apple and lost almost immediately.

WordPress and Tumblr owner Automattic has bought Beeper, the maker of the Beeper Mini app, which challenged Apple late last year with iMessage tricks on Android phones. Although it lost its only USP mere days later, when Apple blocked the exploit, the incident gave the DOJ more ammunition in its antitrust suit against Apple. Bloomberg reported on Tuesday that Automattic paid $125 million. It’s a lot of money, especially when Automattic already owns a messaging app, Texts. No, I hadn’t heard of it either.

Continue reading.

Starlink terminals are reportedly being used by Russian forces in Ukraine

There’s a thriving black market for satellite-based internet providers.

According to a report by The Wall Street Journal, Russian forces in Ukraine are using Starlink satellite internet terminals to coordinate attacks in eastern Ukraine and Crimea as well as to control drones and other forms of military tech. The Starlink hardware is reaching Russian forces via a complex network of black-market sellers. After reports in February that Russian forces were using Starlink, US House Democrats demanded Musk act, noting Russian military use of the tech is “potentially in violation of US sanctions and export controls.” Starlink can disable individual terminals.

Continue reading.

Congress looks into blocking piracy sites in the US

The Motion Picture Association will work with politicians.

The Motion Picture Association chair and CEO Charles Rivkin has revealed a plan to make “sailing the digital seas” (streaming or downloading pirated content) harder. Rivkin said the association will work with Congress to establish and enforce site-blocking legislation in the United States. He added that almost 60 countries already use site-blocking as a tool against piracy.
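
For context on how those roughly 60 countries actually do it: site-blocking is usually implemented at the ISP, most often by having the DNS resolver refuse to answer for court-listed domains (blocking IP addresses is the other common lever). A toy sketch of the DNS flavor, with invented domains:

```python
# Toy DNS-level blocklist: the resolver declines to answer for listed
# domains, so the site becomes unreachable by name. Domains are invented.
BLOCKLIST = {"piracy-example.invalid"}

def resolve(domain, dns_table):
    """Return an address, or None (an NXDOMAIN-style refusal) if blocked."""
    if domain in BLOCKLIST:
        return None
    return dns_table.get(domain)

table = {"piracy-example.invalid": "203.0.113.7", "engadget.com": "198.51.100.4"}
print(resolve("piracy-example.invalid", table))  # None: blocked by policy
print(resolve("engadget.com", table))            # 198.51.100.4
```

The standing criticism of the approach is that it’s trivially bypassed by switching to a public resolver or using a VPN.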

Continue reading.

You can now lie down while using a Meta Quest 3 headset

Finally.

Shh, relax… And strap two screens to your face.

Relaaaaax.

Starlink terminals are reportedly being used by Russian forces in Ukraine

Starlink satellite internet terminals are being widely used by Russian forces in Ukraine, according to a report by The Wall Street Journal. The publication indicates that the terminals, which were developed by Elon Musk’s SpaceX, are being used to coordinate attacks in eastern Ukraine and Crimea. Additionally, Starlink terminals can be used on the battlefield to control drones and other forms of military tech.

The terminals are reaching Russian forces via a complex network of black-market sellers, despite the fact that Starlink devices are banned in the country. WSJ followed some of these sellers as they smuggled terminals into Russia and even ensured deliveries reached the front lines. Reporting also indicates that some of the terminals were originally purchased on eBay.

This black market for Starlink terminals allegedly stretches beyond occupied Ukraine and into Sudan. Many of these Sudanese dealers are reselling units to the Rapid Support Forces, a paramilitary group that’s been accused of committing atrocities like ethnically motivated killings, targeted abuse of human rights activists, sexual violence and the burning of entire communities. WSJ notes that hundreds of terminals have found their way to members of the Rapid Support Forces.

Back in February, Elon Musk addressed earlier reports that Starlink terminals were being used by Russian soldiers in the war against Ukraine. “To the best of our knowledge, no Starlinks have been sold directly or indirectly to Russia,” he wrote on X. The Kremlin also denied the reports, according to Reuters. Despite these proclamations, WSJ says that “thousands of the white pizza-box-sized devices” have landed with “some American adversaries and accused war criminals.”

After those February reports, House Democrats demanded that Musk take action, according to Business Insider, noting that Russian military use of the tech is “potentially in violation of US sanctions and export controls.” Starlink does have the ability to disable individual terminals, and each unit includes geofencing technology that is supposed to prevent use in unauthorized countries, though it’s unclear whether black-market sellers can get around these hurdles.
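
SpaceX hasn’t said how that enforcement is implemented. As a minimal sketch of the two controls described above, a per-terminal kill list plus a country allowlist, with every name invented:

```python
# Hypothetical sketch of per-terminal disabling plus geofencing.
# SpaceX's real enforcement logic is not public; all names are invented.
from dataclasses import dataclass

ALLOWED_COUNTRIES = {"US", "UA", "PL"}   # where service is authorized
DISABLED_TERMINALS = {"TERM-0042"}       # individually revoked units

@dataclass
class Terminal:
    terminal_id: str
    country_code: str  # inferred from the terminal's observed position

def may_serve(t: Terminal) -> bool:
    """Deny service to revoked terminals and to unauthorized countries."""
    if t.terminal_id in DISABLED_TERMINALS:
        return False
    return t.country_code in ALLOWED_COUNTRIES

print(may_serve(Terminal("TERM-0042", "US")))  # False: individually disabled
print(may_serve(Terminal("TERM-0099", "RU")))  # False: outside the geofence
print(may_serve(Terminal("TERM-0100", "UA")))  # True
```

Keeping both checks on the network side, rather than in the terminal itself, is what would make them hard to strip out of smuggled hardware.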

Musk’s handling of Starlink in the conflict has drawn criticism before. He took steps to limit Ukraine’s use of the technology on the grounds that the terminals were never intended for use in military conflicts. According to his biography, Musk also blocked Ukraine’s use of Starlink near Crimea early in the conflict, ending the country’s plans for an attack on Russia’s naval fleet. Mykhailo Podolyak, an advisor to Ukrainian President Volodymyr Zelensky, wrote on X that “civilians, children are being killed” as a result of Musk’s decision. He further dinged the billionaire by writing “this is the price of a cocktail of ignorance and a big ego.”

However, Musk fired back and said that Starlink was never active in the area near Crimea, so there was nothing to disable. He also said the policy in question was decided upon before Ukraine’s planned attack on the naval fleet. Ukraine did lose access to more than 1,300 Starlink terminals in the early days of the conflict due to a payment issue. SpaceX reportedly charged Ukraine $2,500 per month to keep each unit operational, which works out to $3.25 million per month across those 1,300 terminals. That pricing aligns with the company’s high-cost premium plan. It’s worth noting that SpaceX has donated more than 3,600 terminals to Ukraine.

SpaceX has yet to comment on the WSJ report regarding the black-market proliferation of Starlink terminals. We’ll update this post when it does.

Senators ask intelligence officials to declassify details about TikTok and ByteDance

As the Senate considers the bill that would force a sale or ban of TikTok, lawmakers have heard directly from intelligence officials about the alleged national security threat posed by the app. Now, two prominent senators are asking the office of the Director of National Intelligence to declassify and make public what they have shared.

“We are deeply troubled by the information and concerns raised by the intelligence community in recent classified briefings to Congress,” Democratic Senator Richard Blumenthal and Republican Senator Marsha Blackburn wrote. “It is critically important that the American people, especially TikTok users, understand the national security issues at stake.”

The exact nature of the intelligence community's concerns about the app has long been a source of debate. Lawmakers in the House received a similar briefing just ahead of their vote on the bill. But while the briefing seemed to bolster support for the measure, some members said they left unconvinced, with one lawmaker saying that “not a single thing that we heard … was unique to TikTok.”

According to Axios, some senators described their briefing as “shocking,” though the group isn’t exactly known for their particularly nuanced understanding of the tech industry. (Blumenthal, for example, once pressed Facebook executives on whether they would “commit to ending finsta.”) In its report, Axios says that one lawmaker “said they were told TikTok is able to spy on the microphone on users' devices, track keystrokes and determine what the users are doing on other apps.” That may sound alarming, but it’s also a description of the kinds of app permissions social media services have been requesting for more than a decade.

TikTok has long denied that its relationship with parent company ByteDance would enable Chinese government officials to interfere with its service or spy on Americans. And so far, there is no public evidence that TikTok has ever been used in this way. If US intelligence officials do have evidence that is more than hypothetical, it would be a major bombshell in the long-running debate surrounding the app.

The Pentagon used Project Maven-developed AI to identify air strike targets

The US military has ramped up its use of artificial intelligence tools since the October 7 Hamas attacks on Israel, according to a new report by Bloomberg. Schuyler Moore, US Central Command's chief technology officer, told the news organization that machine learning algorithms helped the Pentagon identify targets for more than 85 air strikes in the Middle East this month.

US bombers and fighter aircraft carried out those air strikes against seven facilities in Iraq and Syria on February 2, fully destroying or at least damaging rockets, missiles, drone storage facilities and militia operations centers. The Pentagon also used AI systems to find rocket launchers in Yemen and surface combatants in the Red Sea, which it then destroyed through multiple air strikes in the same month.

The machine learning algorithms used to narrow down targets were developed under Project Maven, Google's now-defunct partnership with the Pentagon. To be precise, the project entailed the use of Google's artificial intelligence technology by the US military to analyze drone footage and flag images for further human review. It caused an uproar among Google employees: Thousands petitioned the company to end its partnership with the Pentagon, and some even quit over its involvement altogether. A few months after that employee protest, Google decided not to renew the contract, which expired in 2019.

Moore told Bloomberg that US forces in the Middle East haven't stopped experimenting with the use of algorithms to identify potential targets using drone or satellite imagery even after Google ended its involvement. The military has been testing out their use over the past year in digital exercises, she said, but it started using targeting algorithms in actual operations after the October 7 Hamas attacks. She clarified, however, that human workers constantly checked and verified the AI systems' target recommendations. Human personnel were also the ones who proposed how to stage the attacks and which weapons to use. "There is never an algorithm that’s just running, coming to a conclusion and then pushing onto the next step," she said. "Every step that involves AI has a human checking in at the end."
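
Moore's description maps onto the classic human-in-the-loop pattern: the model only nominates candidates, and nothing proceeds without explicit human sign-off. A minimal sketch of that gate (hypothetical names, thresholds and data, not CENTCOM's actual pipeline):

```python
# Human-in-the-loop gate: the model proposes, a human disposes.
# Names, thresholds and data are invented for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    image_id: str
    score: float  # model confidence; a suggestion, not a decision

def nominate(detections, threshold=0.8):
    """Model step: surface candidates above a confidence threshold."""
    return [d for d in detections if d.score >= threshold]

def human_review(candidates, approve):
    """Human step: each candidate is individually approved or rejected."""
    return [c for c in candidates if approve(c)]

detections = [Candidate("img-001", 0.93), Candidate("img-002", 0.85),
              Candidate("img-003", 0.41)]
proposed = nominate(detections)  # img-003 never even reaches a reviewer
approved = human_review(proposed, approve=lambda c: c.image_id == "img-001")
print([c.image_id for c in approved])  # ['img-001']: only what a human signed off on
```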

OpenAI’s policy no longer explicitly bans the use of its technology for ‘military and warfare’

Until just a few days ago, OpenAI's usage policies page explicitly stated that the company prohibits the use of its technology for "military and warfare" purposes. That line has since been deleted. As first noticed by The Intercept, the company updated the page on January 10 "to be clearer and provide more service-specific guidance," as the changelog states. It still prohibits the use of its large language models (LLMs) for anything that can cause harm, and it warns people against using its services to "develop or use weapons." However, the company has removed language pertaining to "military and warfare."

While we've yet to see its real-life implications, this change in wording comes just as military agencies around the world are showing an interest in using AI. "Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, a managing director of the AI Now Institute, told the publication. 

The explicit mention of "military and warfare" in the list of prohibited uses indicated that OpenAI couldn't work with government agencies like the Department of Defense, which typically offers lucrative deals to contractors. At the moment, the company doesn't have a product that could directly kill or cause physical harm to anybody. But as The Intercept said, its technology could be used for tasks like writing code and processing procurement orders for things that could be used to kill people. 

When asked about the change in its policy wording, OpenAI spokesperson Niko Felix told the publication that the company "aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs." Felix explained that "a principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts," adding that OpenAI "specifically cited weapons and injury to others as clear examples." However, the spokesperson reportedly declined to clarify whether prohibiting the use of its technology to "harm" others included all types of military use outside of weapons development. 

In a statement to Engadget, an OpenAI spokesperson admitted that the company is already working with the US Department of Defense. "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property," the spokesperson said. "There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions."

Update, January 14 2024, 10:22AM ET: This story has been updated to include a statement from OpenAI.

Researchers posed as foreign actors, and data brokers sold them information on military servicemembers anyway

Third parties selling our personal data is annoying. But for certain sensitive populations like military service members, the selling of that information could quickly become a national security threat. Researchers at Duke University released a study on Monday tracking what measures data brokers have in place to prevent unidentified or potentially malign actors from buying personal data on members of the military. As it turns out, the answer is often few to none — even when the purchaser is actively posing as a foreign agent.

A 2021 Duke study by the same lead researcher revealed that data brokers advertised that they had access to — and were more than happy to sell — information on US military personnel. In the more recent study, researchers used wiped computers, VPNs, burner phones bought with cash and other means of identity obfuscation to go undercover. They scraped the websites of data brokers to see which were likely to have available data on servicemembers, then attempted to make purchases while posing as two entities: datamarketresearch.org and dataanalytics.asia. With little or no vetting, several of the brokers transferred the requested data not only to the presumptively Chicago-based datamarketresearch, but also to the server of the .asia domain, which was located in Singapore. The records cost between 12 and 32 cents apiece.
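
To put those prices in perspective, here's the back-of-the-envelope math (the 10,000-record purchase size is hypothetical, not a figure from the study):

```python
# Bulk cost at the reported 12-32 cents per record.
records = 10_000                 # hypothetical purchase size
low, high = 0.12, 0.32           # reported per-record price range, USD
print(f"${records * low:,.0f} to ${records * high:,.0f}")
# -> $1,200 to $3,200 for profiles on 10,000 servicemembers
```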

The sensitive information included health records and financial information. Location data was also available, although the team at Duke decided not to purchase it; it’s unclear whether that decision was financial or ethical. “Access to this data could be used by foreign and malicious actors to target active-duty military personnel, veterans, and their families and acquaintances for profiling, blackmail, targeting with information campaigns, and more,” the report cautions. At an individual level, the risks also include identity theft and fraud.

This gaping hole in our national security apparatus is due in large part to the absence of comprehensive federal regulations governing either individual data privacy or the business practices of data brokers. Senators Elizabeth Warren, Bill Cassidy and Marco Rubio introduced the Protecting Military Service Members' Data Act in 2022 to empower the Federal Trade Commission to prevent data brokers from selling military personnel information to adversarial nations. They reintroduced the bill in March 2023 after it stalled out. Despite bipartisan support, it still hasn’t made it past the introduction phase.
