Binance founder Changpeng Zhao sentenced to four months in prison

A federal judge has sentenced Binance founder Changpeng Zhao (often known as “CZ”) to four months in prison, as first reported by The New York Times. Prosecutors had recommended three years. Zhao pleaded guilty in November to violating the Bank Secrecy Act by failing to set up an anti-money-laundering program.

The DOJ accused Zhao of allowing criminal activity to flourish on the crypto exchange. “Binance turned a blind eye to its legal obligations in the pursuit of profit. Its willful failures allowed money to flow to terrorists, cybercriminals, and child abusers through its platform,” Treasury Secretary Janet Yellen said in November.

The government accused Binance of refusing to comply with American sanctions and failing to report suspicious transactions related to drugs and child sexual abuse materials. Prosecutors said in court that Zhao had told Binance employees it was “better to ask for forgiveness than permission” while bragging that if Binance had obeyed the law, it wouldn’t be “as big as we are today.”

Under the plea deal’s terms, Binance agreed to forfeit $2.5 billion and pay a $1.8 billion fine. Zhao personally paid $50 million as part of the settlement.

Although the charges differed, Zhao’s sentence is dramatically shorter than the 25 years fellow crypto figurehead Sam Bankman-Fried received in March. SBF, as he’s often known, was convicted on seven counts of fraud and conspiracy for his role at the helm of the crypto platform FTX.

Zhao played an integral role in Bankman-Fried’s downfall — and the crypto industry’s broader decline over the past 18 months. The Binance founder tweeted in November 2022 that his company would liquidate its holdings in FTX’s native token, FTT, citing “recent revelations that have came [sic] to light,” along with “ethical concerns” and “regulatory risks.” The posts crushed not only FTX but the crypto world at large. (They likely helped attract the government’s attention as well.) As FTX’s liquidity dried up during its rapid collapse, Zhao briefly agreed to buy the company but quickly backed out.

Prosecutors said Zhao’s crime carried a standard federal sentence of 12 to 18 months but argued for a three-year term, describing his conduct as being “on an unprecedented scale.” But Judge Richard A. Jones saw it differently, sentencing him to a measly one-ninth of the government’s suggested term.

“This wasn’t a mistake — it wasn’t a regulatory oops,” Kevin Mosley, a DOJ lawyer, reportedly said in court on Tuesday. “Breaking U.S. law was not incidental to his plan to make as much money as possible. Violating the law was integral to that endeavor.”

The US Supreme Court rejects Elon Musk’s appeal in ‘funding secured’ tweet ruling

On Monday, the US Supreme Court declined to hear Elon Musk’s appeal over his 2018 SEC settlement regarding the infamous “funding secured” tweet. Ars Technica reports that the conservative-majority court took a break from weighing whether US Presidents should be above the law to pass on Musk’s attempt to throw out the agreement, which required him to pay a fine, step down as chairman of Tesla’s board and have his tweets pre-screened by a lawyer.

The justices denied Musk’s petition without comment. Their refusal to take up the billionaire’s appeal leaves intact an appeals court ruling from a year ago that smacked down the Tesla founder’s claims of victimhood.

The saga began in 2018 when Musk tweeted, “Am considering taking Tesla private at $420. Funding secured.” He also posted, “Investor support is confirmed. Only reason why this is not certain is that it’s contingent on a shareholder vote.” Tesla’s stock rose by more than six percent.

There was only one tiny problem: The funding wasn’t secured, and the SEC takes false statements that affect investors very seriously. The SEC said, “Musk had not even discussed, much less confirmed, key deal terms, including price, with any potential funding source” and that he “knew that he had not satisfied numerous additional contingencies.” The government agency claimed the post caused “significant confusion and disruption in the market for Tesla’s stock.”

The SEC settlement hit his wallet hard, requiring Musk and Tesla to each pay $20 million in penalties. He also had to step down from his board chairman role at the automaker and have a Tesla attorney screen any investor-related tweets before posting. Of course, Musk later bought Twitter and changed its name to X. But at least that’s going splendidly!

His appeal said the settlement forced him to “waive his First Amendment rights to speak on matters ranging far beyond the charged violations.” Musk, who currently has an estimated net worth of $185 billion, claimed he was a victim of “economic duress” when agreeing to the settlement, which he described as a tactic to “muzzle and harass” him and his company.

The 2nd Circuit Court of Appeals, whose ruling now stands as the final word on the matter, shot down Musk’s arguments. “Parties entering into consent decrees may voluntarily waive their First Amendment and other rights,” the court wrote. It saw “no evidence to support Musk’s contention that the SEC has used the consent decree to conduct bad-faith, harassing investigations of his protected speech.”

OpenAI hit with another privacy complaint over ChatGPT’s love of making stuff up

OpenAI has been hit with a privacy complaint in Austria by an advocacy group called NOYB, which stands for None Of Your Business. The complaint alleges that the company’s ChatGPT bot repeatedly provided incorrect information about a real individual (who for privacy reasons is not named in the complaint), as reported by Reuters. This may breach EU privacy rules.

The chatbot allegedly spat out incorrect birthdate information for the individual, instead of just saying it didn’t know the answer to the query. Like politicians, AI chatbots like to confidently make stuff up and hope we don’t notice. This phenomenon is called a hallucination. However, it’s one thing when these bots make up ingredients for a recipe and another thing entirely when they invent stuff about real people.

The complaint also indicates that OpenAI refused to help delete the false information, responding that it was technically impossible to make that kind of change. The company did offer to filter or block the data on certain prompts. OpenAI’s privacy policy says that if users notice the AI chatbot has generated “factually inaccurate information” about them, they can submit a “correction request,” but the company says it “may not be able to correct the inaccuracy in every instance,” as reported by TechCrunch.

This is bigger than just one complaint, as the chatbot’s tendency toward making stuff up could run afoul of the region’s General Data Protection Regulation (GDPR), which governs how personal data can be used and processed. EU residents have rights regarding personal information, including a right to have false data corrected. Failure to comply with these regulations can accrue serious financial penalties, up to four percent of global annual turnover in some cases. Regulators can also order changes to how information is processed.

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals,” Maartje de Graaf, NOYB data protection lawyer, said in a statement. “If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The complaint also brought up concerns regarding transparency on the part of OpenAI, suggesting that the company doesn’t offer information regarding where the data it generates on individuals comes from or if this data is stored indefinitely. This is of particular importance when considering data pertaining to private individuals.

Again, this is a complaint by an advocacy group and EU regulators have yet to comment one way or the other. However, OpenAI has acknowledged in the past that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers.” NOYB has approached the Austrian Data Protection Authority and asked the organization to investigate the issue.

The company is facing a similar complaint in Poland, in which the local data protection authority began investigating ChatGPT after a researcher was unable to get OpenAI’s help with correcting false personal information. That complaint accuses OpenAI of several breaches of the EU’s GDPR, with regard to transparency, data access rights and privacy.

There’s also Italy. The Italian data protection authority investigated ChatGPT and OpenAI and concluded that it believes the company has violated the GDPR in various ways, including through ChatGPT’s tendency to make up fake stuff about people. The chatbot was actually banned in Italy before OpenAI made certain changes to the software, like adding new warnings for users and an option to opt out of having chats used to train its algorithms. Despite the ban being lifted, the Italian investigation into ChatGPT continues.

OpenAI hasn’t responded to this latest complaint, but did respond to the regulatory salvo issued by Italy’s DPA. “We want our AI to learn about the world, not about private individuals,” the company wrote. “We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people.”

Google asks court to reject the DOJ’s lawsuit that accuses it of monopolizing ad tech

Google filed a motion on Friday in a Virginia federal court asking for the Department of Justice’s antitrust lawsuit against it to be thrown out. The DOJ sued Google in January 2023, accusing the company of monopolizing digital advertising technologies through “anticompetitive and exclusionary conduct.” Per Bloomberg, Google is now seeking summary judgment to avoid the case going to trial in September as planned.

Attorney General Merrick B. Garland said at the time the lawsuit was first announced that Google “has used anticompetitive, exclusionary, and unlawful conduct to eliminate or severely diminish any threat to its dominance over digital advertising technologies.” The lawsuit alleges that Google controls digital advertising tools to such an extent that it “pockets on average more than 30 percent of the advertising dollars that flow through its digital advertising technology products,” according to a press release from the agency last year.

Google now argues that the DOJ hasn’t shown that the company controls at least 70 percent of the market, a threshold some previous cases have used to qualify as a monopoly, and that the agency “made up markets specifically for this case,” according to Bloomberg, excluding major competitors like social media platforms. The company also claims the DOJ’s case goes “beyond the boundaries of antitrust law,” Reuters reports.

Grindr sued for allegedly sharing users’ HIV status and other info with ad companies

Grindr has been sued for allegedly sharing personal information with advertising companies without users' consent. A lawsuit filed in London claims that the data included HIV statuses and test dates, ethnicity and sexual orientation, Bloomberg reports.

According to the class action-style suit, the alleged data sharing involved adtech companies Localytics and Apptimize. Grindr is said to have supplied the companies with user info before April 2018 and then between May 2018 and April 2020. Engadget has asked Grindr for comment.

In April 2018, Grindr admitted it had shared HIV data with Apptimize and Localytics following an investigation by BuzzFeed News and Norwegian non-profit SINTEF. It said it would stop the practice.

This isn't the only time Grindr has been accused of sharing users' personal information. A 2022 report from The Wall Street Journal indicated that precise location data on Grindr users was up for sale for at least three years. In addition, Norway's data protection agency fined Grindr $6 million in 2021 for violating the European Union's General Data Protection Regulation. The agency said Grindr had unlawfully shared "personal data with third parties for marketing purposes."

Google fired 28 workers who protested Israeli government cloud contract

Google has fired 28 employees involved in protests against the company's "Project Nimbus" cloud contract with the Israeli government, according to an internal memo seen by The Verge. That follows the arrest and suspension of nine employees on April 16 and a previous firing related to the same project last month. 

Some of the fired workers were forcibly removed after occupying the office of Google Cloud CEO Thomas Kurian. Google head of global security Chris Rackow said the company "will not tolerate" such incidents and warned that it could take further action.

"If you’re one of the few who are tempted to think we’re going to overlook conduct that violates our policies, think again," he told employees in a letter. "The company takes this extremely seriously, and we will continue to apply our longstanding policies to take action against disruptive behavior — up to and including termination."

"Behavior like this has no place in our workplace and we will not tolerate it," the memo continued. "It clearly violates multiple policies that all employees must adhere to — including our Code of Conduct and Policy on Harassment, Discrimination, Retaliation, Standards of Conduct, and Workplace Concerns."

However, workers in the "No Tech for Apartheid" group organizing the protests called the dismissals "a flagrant act of retaliation." The group said it was "insulting" for Google to suggest that the protests largely involved people not working at the company, adding that the push to drop Project Nimbus is supported by "thousands" of their colleagues.

"In the three years that we have been organizing against Project Nimbus, we have yet to hear from a single executive about our concerns,” it wrote in a Medium post. "Google workers have the right to peacefully protest about terms and conditions of our labor. These firings were clearly retaliatory.”
