OpenAI’s policy no longer explicitly bans the use of its technology for ‘military and warfare’

Until just a few days ago, OpenAI's usage policies page explicitly stated that the company prohibited the use of its technology for "military and warfare" purposes. That line has since been deleted. As first noticed by The Intercept, the company updated the page on January 10 "to be clearer and provide more service-specific guidance," as the changelog states. It still prohibits the use of its large language models (LLMs) for anything that can cause harm, and it warns people against using its services to "develop or use weapons." However, the company has removed language pertaining to "military and warfare."

While we've yet to see its real-life implications, this change in wording comes just as military agencies around the world are showing an interest in using AI. "Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, a managing director of the AI Now Institute, told the publication. 

The explicit mention of "military and warfare" in the list of prohibited uses indicated that OpenAI couldn't work with government agencies like the Department of Defense, which typically offers lucrative deals to contractors. At the moment, the company doesn't have a product that could directly kill or cause physical harm to anybody. But as The Intercept said, its technology could be used for tasks like writing code and processing procurement orders for things that could be used to kill people. 

When asked about the change in its policy wording, OpenAI spokesperson Niko Felix told the publication that the company "aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs." Felix explained that "a principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts," adding that OpenAI "specifically cited weapons and injury to others as clear examples." However, the spokesperson reportedly declined to clarify whether prohibiting the use of its technology to "harm" others included all types of military use outside of weapons development. 

In a statement to Engadget, an OpenAI spokesperson admitted that the company is already working with the US Department of Defense. "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property," the spokesperson said. "There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions."

Update, January 14, 2024, 10:22AM ET: This story has been updated to include a statement from OpenAI.

This article originally appeared on Engadget at https://www.engadget.com/openais-policy-no-longer-explicitly-bans-the-use-of-its-technology-for-military-and-warfare-123018659.html?src=rss

New Department of Labor rule could reclassify countless gig workers as employees

The US Department of Labor (DOL) published a final rule in the Federal Register on Wednesday that would make it more difficult to classify workers as independent contractors. If the rule survives court challenges unscathed, it will replace a business-friendly Trump-era regulation that did the opposite. It’s scheduled to go into effect on March 11.

The new rule, first proposed in 2022, could have profound implications for companies like Uber and DoorDash that rely heavily on gig workers. It would mandate that workers who are “economically dependent” on a company be considered employees.

The rule restores a pre-Trump precedent of using six factors to determine workers’ classification. These include their opportunity for profit or loss, the financial stake and nature of resources the worker has invested in the work, the work relationship’s permanence, the employer’s degree of control over the person’s work, how essential the person’s work is to the employer’s business and the worker’s skill and initiative.
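
For readers who want to see the shape of that test, below is a minimal, purely illustrative sketch in Python of the six factors as a checklist. The field names, the example inputs and the simple tally are all assumptions made for illustration; the DOL weighs these factors holistically rather than by count, and nothing here is legal guidance.

```python
from dataclasses import dataclass

@dataclass
class WorkerFacts:
    """Answers to the six economic-reality factors (illustrative only)."""
    opportunity_for_profit_or_loss: bool   # worker bears entrepreneurial risk
    invests_own_resources: bool            # financial stake in the work
    relationship_is_temporary: bool        # engagement is not permanent
    controls_own_work: bool                # employer exerts little control
    work_is_peripheral: bool               # not essential to the business
    skill_and_initiative: bool             # independent skill and initiative

def contractor_leaning_count(facts: WorkerFacts) -> int:
    """Tally factors pointing toward independent-contractor status.

    NOTE: a raw count is a simplification for illustration; the actual
    rule weighs the totality of the circumstances, with no fixed formula.
    """
    return sum([
        facts.opportunity_for_profit_or_loss,
        facts.invests_own_resources,
        facts.relationship_is_temporary,
        facts.controls_own_work,
        facts.work_is_peripheral,
        facts.skill_and_initiative,
    ])

# Hypothetical example: a driver economically dependent on one platform
driver = WorkerFacts(False, True, False, False, False, False)
print(f"{contractor_leaning_count(driver)} of 6 factors lean contractor")
```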

In its decision to publish the new guidance, the DOL cites a “longstanding precedent” in the courts predating the Trump administration’s hard right turn. “A century of labor protections for working people is premised on the employer-employee relationship,” Acting Labor Secretary Julie Su said in a press call with Bloomberg.

“Misclassifying employees as independent contractors is a serious issue that deprives workers of basic rights and protections,” Su wrote in the announcement post. “This rule will help protect workers, especially those facing the greatest risk of exploitation, by making sure they are classified properly and that they receive the wages they’ve earned.”

If the rule takes effect, it’s expected to increase employer costs. The US Chamber of Commerce, a non-government lobby for business interests, unsurprisingly opposes it. “It is likely to threaten the flexibility of individuals to work when and how they want and could have significant negative impacts on our economy,” Marc Freedman, VP of the US Chamber of Commerce, said in a statement to Reuters.

DoorDash sounds optimistic that the rule wouldn’t apply to its workforce. “We are confident that Dashers are properly classified as independent contractors under the FLSA, and we do not anticipate this rule causing changes to our business,” the company wrote in a statement. “We will continue to engage with the Department of Labor, Congress, and other stakeholders to find solutions that ensure Dashers maintain their flexibility while gaining access to new benefits and protections.”

Groups with similar views are expected to mount legal challenges to the rule before it goes into effect. A previous attempt by the Biden Administration to void the Trump-era rules met such a fate when a federal judge blocked the DOL’s reversal.

Although the most prominent theoretical applications of the rule would be with gig economy apps like DoorDash, Lyft and Uber, it could stretch to sectors including healthcare, trucking and construction. “The department is seeing misclassifications in places it hasn’t seen it before,” Wage and Hour Division Administrator Jessica Looman said to Bloomberg on Monday. “Health care, construction, janitorial, and even restaurant workers who are often living paycheck to paycheck are some of the most vulnerable workers.”

This article originally appeared on Engadget at https://www.engadget.com/new-department-of-labor-rule-could-reclassify-countless-gig-workers-as-employees-130836919.html?src=rss

Apple reportedly faces pressure in India after sending out warnings of state-sponsored hacking

Indian authorities allied with Prime Minister Narendra Modi have questioned Apple on the accuracy of its internal threat algorithms and are now investigating the security of its devices, according to The Washington Post. Officials apparently targeted the company after it warned journalists and opposition politicians back in October that state-sponsored hackers may have infiltrated their devices. While Apple is publicly under scrutiny over its security measures, the Post says government officials were more upfront behind closed doors about what they wanted.

They reportedly called up the company's representatives in India to pressure Apple into finding a way to soften the political impact of its hacking warnings. The officials also called in an Apple security expert to conjure alternative explanations for the warnings that they could tell people — most likely ones that don't point to the government as the possible culprit.

The journalists and politicians who posted about Apple's warnings on social media had one thing in common: They were all critical of Modi's government. Amnesty International examined the phone of one particular journalist, Anand Mangnale, who was investigating long-time Modi ally Gautam Adani, and found that an attacker had planted the Pegasus spyware on his Apple device. While Apple didn't explicitly say that the Indian government was to blame for the attacks, Pegasus, developed by the Israeli company NSO Group, is mostly sold to governments and government agencies.

The Post's report said India's ruling political party has never confirmed or denied using Pegasus to spy on journalists and political opponents, but this is far from the first time the spyware has been found on the devices of the government's critics. In 2021, the investigation by several publications that brought the Pegasus Project to light found the spyware on the phones of people with a history of opposing and criticizing Modi's government.

This article originally appeared on Engadget at https://www.engadget.com/apple-reportedly-faces-pressure-in-india-after-sending-out-warnings-of-state-sponsored-hacking-073036597.html?src=rss

UK Supreme Court rules AI can’t be a patent inventor, ‘must be a natural person’

AI may or may not take people's jobs in years to come, but in the meantime, there's one thing it cannot obtain: patents. Dr. Stephen Thaler has spent years trying to get patents for two inventions created by his AI "creativity machine" DABUS. Now, the United Kingdom's Supreme Court has rejected his appeal to approve those patents with DABUS listed as the inventor, Reuters reports.

The court's rationale stems from a provision in UK patent law stating that "an inventor must be a natural person." The ruling noted that the appeal was not concerned with whether this should change in the future. "The judgment establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines," Thaler's lawyers said in a statement.

Thaler first attempted to register the patents — for a food container and a flashing light — in 2018, as owner of the machine that invented them. However, the UK's Intellectual Property Office said he must list an actual human being on the application, and when he refused, it withdrew his application. Thaler fought the decision in the High Court and then the Court of Appeal, with Lady Justice Elisabeth Laing stating, "Only a person can have rights. A machine cannot."

Thaler, an American, also submitted the two products to the United States Patent and Trademark Office, which rejected his application. He also previously sued the US Copyright Office (USCO) for not awarding him the copyright for a piece of art DABUS created. That case reached the US District Court for the District of Columbia, with Judge Beryl Howell's ruling explaining, "Human authorship is a bedrock requirement of copyright." Thaler has argued that this provision is unconstitutional, but the US Supreme Court declined to hear his case, ending any further chances to argue his stance. While the UK and US have rejected Thaler's petitions, he has succeeded in countries such as Australia and South Africa.

This article originally appeared on Engadget at https://www.engadget.com/uk-supreme-court-rules-ai-cant-be-a-patent-inventor-must-be-a-natural-person-131207359.html?src=rss

European Commission agrees to new rules that will protect gig workers’ rights

Gig workers in the EU will soon get new benefits and protections, making it easier for them to receive employment status. Right now, over 500 digital labor platforms are actively operating in the EU, employing roughly 28 million platform workers. The new rules follow agreements made between the European Parliament and the EU Member States, after policies were first proposed by the European Commission in 2021.

The new rules highlight employment status as a key issue for gig workers, since an employed individual can claim the labor and social rights that come with an official worker title. These can include a legal minimum wage, the option to engage in collective bargaining, health protections at work, and options for paid leave and sick days. With worker status recognized in the EU, gig workers can also qualify for unemployment benefits.

Given that most gig workers are employed by digital apps, like Uber or Deliveroo, the new directive will require “human oversight of the automated systems” to make sure labor rights and proper working conditions are guaranteed. The workers also have the right to contest any automated decisions by digital employers — such as a termination.

The new rules will also require employers to inform and consult workers when “algorithmic decisions” affect them. Employers will be required to report where their gig workers fulfill labor-related tasks to ensure the traceability of employees, especially in cross-border situations within the EU.

Before the new gig worker protections can formally roll out, the agreement needs final approval from the European Parliament and the Council. Member states will then have two years to implement the new protections into law. Similar protections for gig workers in the UK were introduced in 2021. Meanwhile, in the US, select cities have rolled out minimum wage rulings and benefits — despite Uber and Lyft’s pushback against such requirements.

This article originally appeared on Engadget at https://www.engadget.com/european-commission-agrees-to-new-rules-that-will-protect-gig-workers-rights-175155671.html?src=rss

Police are using pharmacies to secretly access medical information about members of the public

A Senate Finance Committee inquiry revealed on Tuesday that police departments can get access to private medical information from pharmacies, no warrant needed. While HIPAA may protect some access to personally identifiable health data, it doesn't stop cops, according to a letter from Senator Ron Wyden, Representative Pramila Jayapal and Representative Sara Jacobs to the Department of Health and Human Services. None of the major US pharmacies are doing anything about it either, the members of Congress say. 

"All of the pharmacies surveyed stated that they do not require a warrant prior to sharing pharmacy records with law enforcement agents, unless there is a state law that dictates otherwise," the letter said. "Those pharmacies will turn medical records over in response to a mere subpoena, which often do not have to be reviewed or signed by a judge prior to being issued."

The committee reached out to Amazon, Cigna, CVS Health, The Kroger Company, Optum Rx, Rite Aid Corporation, Walgreens Boots Alliance and Walmart about their practices for sharing medical data with police. While Amazon, Cigna, Optum, Walmart and Walgreens said they have law enforcement requests reviewed by legal professionals before complying, CVS Health, The Kroger Company and Rite Aid Corporation said they ask in-store staff to process such requests immediately.

Engadget asked the pharmacies mentioned in the letter to comment on the claims. CVS said its pharmacy staff are trained to handle these inquiries and that it's following all applicable laws around the issue. Walgreens said it has a process in place to assess law enforcement requests in compliance with those laws, too, and Amazon said that although law enforcement requests are rare, it does notify patients and comply with court orders when applicable. The others either haven't responded or declined to comment.

The pharmacies largely attributed their willingness to comply with police requests to the current lack of legislative protections for patient data. Most of them told the committee that current HIPAA law and other policies let them disclose medical records in response to certain legal requests. That's why the Senate Finance Committee is targeting HHS to strengthen these protections, especially since the 2022 Dobbs decision let states criminalize certain reproductive health decisions.

Under current HIPAA law, patients have the right to know who is accessing their health information. But individuals have to request the disclosure records themselves; health care professionals aren't required to share them proactively. "Consequently, few people ever request such information, even though many would obviously be concerned to learn about disclosures of their private medical records to law enforcement agencies," the letter states. It also urges pharmacies to change their policies to require a warrant, and to publish transparency reports about how data is shared.

This article originally appeared on Engadget at https://www.engadget.com/police-are-using-pharmacies-to-secretly-access-medical-information-about-members-of-the-public-182009044.html?src=rss

The EU has reached a historic regulatory agreement over AI development

Following a marathon 72-hour debate, European Union legislators on Friday reached a historic deal on the bloc's expansive AI Act, the broadest-ranging and most far-reaching safety bill of its kind to date, reports The Washington Post. Details of the deal itself were not immediately available.

"This legislation will represent a standard, a model, for many other jurisdictions out there," Dragoș Tudorache, a Romanian lawmaker co-leading the AI Act negotiation, told The Washington Post, "which means that we have to have an extra duty of care when we draft it because it is going to be an influence for many others."

The proposed regulations would dictate the ways in which future machine learning models could be developed and distributed within the trade bloc, impacting their use in applications ranging from education to employment to healthcare. AI development would be split into four categories depending on how much societal risk each potentially poses: minimal, limited, high and banned.

Banned uses would include anything that circumvents the user's will, targets protected social groups or provides real-time biometric tracking (like facial recognition). High-risk uses include anything "intended to be used as a safety component of a product,” or which is to be used in defined applications like critical infrastructure, education, legal/judicial matters and employee hiring. Chatbots like ChatGPT, Bard and Bing would fall under the "limited risk" category.
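
For those who think in code, here is a minimal sketch of that four-tier scheme. The tier names come from the reporting above, but the example mappings and the one-line status summaries are assumptions for illustration only — the Act's final text was not yet available.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four reported risk tiers."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    BANNED = "banned"

# Hypothetical mappings drawn from the examples reported above.
EXAMPLE_USES = {
    "spam filter": RiskTier.MINIMAL,
    "general-purpose chatbot": RiskTier.LIMITED,      # e.g., ChatGPT, Bard
    "employee-hiring screener": RiskTier.HIGH,
    "real-time biometric tracking": RiskTier.BANNED,  # e.g., facial recognition
}

def status(tier: RiskTier) -> str:
    """One-line, illustrative summary of what each tier implies."""
    return {
        RiskTier.MINIMAL: "permitted, no extra obligations",
        RiskTier.LIMITED: "permitted, transparency duties",
        RiskTier.HIGH: "permitted, strict conformity requirements",
        RiskTier.BANNED: "prohibited in the EU",
    }[tier]

for use, tier in EXAMPLE_USES.items():
    print(f"{use}: {tier.value} risk -> {status(tier)}")
```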

“The European Commission once again has stepped out in a bold fashion to address emerging technology, just like they had done with data privacy through the GDPR,” Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley, told Engadget in 2021. “The proposed regulation is quite interesting in that it is attacking the problem from a risk-based approach,” similar to what's been suggested in Canada’s proposed AI regulatory framework.

Ongoing negotiations over the proposed rules had been disrupted in recent weeks by France, Germany and Italy. The three were stonewalling talks over the rules guiding how EU member nations could develop foundational models, generalized AIs from which more specialized applications can be fine-tuned. OpenAI's GPT-4 is one such foundational model, as ChatGPT, GPTs and other third-party applications are all built on its base functionality. The trio of countries worried that stringent EU regulations on generative AI models could hamper member nations' efforts to competitively develop them.

The European Commission (EC) had previously addressed the growing challenges of managing emerging AI technologies through a variety of efforts, releasing both the first European Strategy on AI and Coordinated Plan on AI in 2018, followed by the Guidelines for Trustworthy AI in 2019. The following year, the Commission released a White Paper on AI and a Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.

"Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being," the European Commission wrote in its draft AI regulations. "Rules for artificial intelligence available in the Union market or otherwise affecting Union citizens should thus put people at the centre (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights."

"At the same time, such rules for artificial intelligence should be balanced, proportionate and not unnecessarily constrain or hinder technological development," it continued. "This is of particular importance because, although artificial intelligence is already present in many aspects of people’s daily lives, it is not possible to anticipate all possible uses or applications thereof that may happen in the future."

More recently, the EC has begun collaborating with industry members on a voluntary basis to craft internal rules that would allow companies and regulators to operate under the same agreed-upon ground rules. "[Google CEO Sundar Pichai] and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline," EC industry chief Thierry Breton said in a May statement. The EC has entered into similar discussions with US-based corporations as well.

Developing...

This article originally appeared on Engadget at https://www.engadget.com/the-eu-has-reached-a-historic-regulatory-agreement-over-ai-development-232157689.html?src=rss

TikTok ban in Montana blocked by US judge over free speech rights

Montana's unprecedented statewide ban of the Chinese short-video app TikTok was supposed to take effect on January 1, 2024, but as reported by Reuters, US District Judge Donald Molloy issued a preliminary injunction one month ahead of that date to block it. This means that, for now, ByteDance and app stores are allowed to continue serving TikTok to users within Montana without facing fines of $10,000 per day from the ban's start date.

The judge said the ban "oversteps state power and infringes on the constitutional rights of users" — echoing the legal challenge filed by five TikTok creators the day after the bill was signed back in May, as well as another lawsuit filed by the platform's owner, ByteDance, later the same month. It was also questionable whether Google and Apple could have effectively enforced such a statewide ban through their app stores.

The relevant bill was originally drafted based on claims that the Chinese app shares US users' personal data with the Chinese government — claims ByteDance has denied since the presidency of Donald Trump. "TikTok US user data is stored in the US, with strict controls on employee access," the company said back in August 2020, and again via a new "transparency" push earlier this year that referenced "Project Texas," its plan for safeguarding US user data with help from Oracle.

To date, no other US state has passed a bill to bar TikTok. The outcome of Montana's case may hold the key to the app's fate across the rest of the country.

This article originally appeared on Engadget at https://www.engadget.com/tiktok-ban-in-montana-blocked-by-us-judge-over-free-speech-rights-011846138.html?src=rss

Bipartisan Senate bill would kill the TSA’s ‘Big Brother’ airport facial recognition

US Senators John Kennedy (R-LA) and Jeff Merkley (D-OR) introduced a bipartisan bill Wednesday to end involuntary facial recognition screening at airports. The Traveler Privacy Protection Act would block the Transportation Security Administration (TSA) from continuing or expanding its facial recognition tech program. The agency would also need explicit congressional permission to renew the program, and it would have to dispose of all biometric data within three months.

Senator Merkley described the TSA’s biometric collection practices as the first steps toward an Orwellian nightmare. “The TSA program is a precursor to a full-blown national surveillance state,” Merkley wrote in a news release. “Nothing could be more damaging to our national values of privacy and freedom. No government should be trusted with this power.” Other Senators supporting the bill include Edward J. Markey (D-MA), Roger Marshall (R-KS), Bernie Sanders (I-VT) and Elizabeth Warren (D-MA).

The TSA began testing facial recognition at Los Angeles International Airport (LAX) in 2018. The agency’s pitch to travelers framed it as an exciting new high-tech feature, promising a “biometrically-enabled curb-to-gate passenger experience.” The TSA said this summer it planned to expand the program to over 430 US airports within the next few years.

The program at least technically allows travelers to opt out, but that process isn't always transparent in practice. Merkley posted a video to X in September demonstrating how agents guided travelers to the facial scanner without mentioning that it's optional. No signs near the booths said it was optional or explicitly mentioned the gathering of facial data, either. The booths were arranged so that flyers would have difficulty entering their driver's license or ID (required) without stepping in front of the facial scanner.

Advocacy groups supporting the bill include the ACLU, Electronic Privacy Information Center and Public Citizen. “The privacy risks and discriminatory impact of facial recognition are real, and the government’s use of our faces as IDs poses a serious threat to our democracy,” wrote Jeramie Scott, Senior Counsel and Director of EPIC’s Project on Surveillance Oversight, in Merkley’s press release. “The TSA should not be allowed to unilaterally subject millions of travelers to this dangerous technology.”

“Every day, TSA scans thousands of Americans’ faces without their permission and without making it clear that travelers can opt out of the invasive screening,” Sen. Kennedy wrote in a separate news release. “The Traveler Privacy Protection Act would protect every American from Big Brother’s intrusion by ending the facial recognition program.”

This article originally appeared on Engadget at https://www.engadget.com/bipartisan-senate-bill-would-kill-the-tsas-big-brother-airport-facial-recognition-191010937.html?src=rss