An AI-generated ad left thousands of Dubliners waiting for a Halloween parade that never came

Thousands of people took to the streets in Dublin to attend a Halloween parade that never came, according to reporting by The Independent. Why did they do such a thing? It was all due to an AI-generated ad that promoted the fake event.

The My Spirit Halloween website advertised the completely fabricated Macnas Halloween Parade, which was supposed to take place from 7PM to 9PM on the streets of Dublin. News of the parade quickly spread online, and the listing even surfaced in Google’s news results.

So, yeah, thousands of people put on their Halloween costumes and stood on the street for a couple of hours, waiting for an event that would never happen. They even left room in the street for the parade to pass by. That’s thoughtful, but also a bit sad.

The situation forced Ireland’s police force to issue a message urging would-be parade-goers to “disperse safely.” A spokesperson for the organization said that “contrary to information being circulated online, no Halloween parade is scheduled to take place in Dublin city center this evening or tonight.”

Local city councilor Janice Boylan expressed disappointment over the whole situation. “Everyone is trying to have a fun and safe Halloween. Having a parade to go to sounded really good,” she said. “I know an awful lot of people turned up. It’s a terrible pity.”

The My Spirit Halloween listing has been taken down, but the question remains as to why it appeared in the first place. It’s worth noting that this is a different entity from the popular seasonal retailer Spirit Halloween.

The My Spirit Halloween website appears to be based in Pakistan and posts all kinds of AI-generated content like the ad that caused all of this trouble, according to Yahoo News. This particular post happened to get picked up by TikTok and Google, leading to the rapid spread of some literal fake news.

It’s pretty wild, right? An AI-generated post likely created in Pakistan caused thousands of actual people to take to the streets halfway across the globe. What is that curse again? Oh yeah. “May you live in interesting times.”

This article originally appeared on Engadget at https://www.engadget.com/ai/an-ai-generated-ad-left-thousands-of-dubliners-waiting-for-a-halloween-parade-that-never-came-162550781.html?src=rss

UK man gets 18 years in prison for using AI to generate CSAM

A UK man who used AI to create child sexual abuse material (CSAM) has been sentenced to 18 years in prison, according to The Guardian. Hugh Nelson, 27, created the images by using photographs of real children, which were then manipulated by AI. Nelson was convicted of 16 child sexual abuse offenses back in August, after a lengthy police investigation. This was the first prosecution of its kind in the UK.

Nelson used modeling software called Daz 3D to manufacture the loathsome images. The program has a suite of AI tools, which he used to transform regular photos of children into CSAM. Greater Manchester Police said that he sold these images online and was even commissioned in several cases to create specific CSAM from photographs of real children. Police say that Nelson made around $6,500 by selling the images online.

He was caught while trying to sell images to an undercover officer in a chatroom. “I’ve done beatings, smotherings, hangings, drownings, beheadings, necro, beast, the list goes on,” Nelson told the officer to entice a sale, according to a transcript of the conversation provided by the prosecution.

It’s worth noting that Daz 3D doesn’t create deepfakes, in which one face is swapped onto another body. Nelson created actual 3D renders by feeding the photos to the AI algorithm.

At sentencing, the judge called the images “harrowing and sickening” and addressed Nelson specifically, saying “there seems to be no limit to the depths of depravity exhibited in the images that you were prepared to create and exhibit to others.” He also said that it was “impossible to know” if children had been abused as a result of the images. Police searches of Nelson’s devices did find a series of text messages in which he encouraged people to sexually abuse children under 13. These suspects and potential victims are allegedly located throughout the world, including the US.

The United States is, of course, not immune from this horrifying trend. A soldier was arrested back in August for allegedly using AI to generate CSAM. A Wisconsin man faces 70 years in prison for allegedly creating over 13,000 AI-generated images depicting CSAM. The world’s leading AI companies have signed a pledge to help stop this type of software from being used to generate child sexual abuse material.

This article originally appeared on Engadget at https://www.engadget.com/ai/uk-man-gets-18-years-in-prison-for-using-ai-to-generate-csam-154037476.html?src=rss

A Scottish children’s hospital now has a gamer-in-residence to play games with kids

A children's hospital in Scotland now has a gamer-in-residence in what's said to be a first in the UK and Ireland. Steven Mair, the first person to take on the full-time role, will play games with kids at the Royal Hospital for Children in Glasgow.

As with other gaming-related charitable efforts at children's hospitals, the aim is to help patients relax and minimize feelings of boredom and isolation, while offering them a sense of escapism. Studies have indicated that playing games can help reduce the procedural pain and anxiety of pediatric patients, as well as their caregivers' anxiety.

Mair is also organizing gaming events at the facility, fundraising for new gaming equipment and managing gaming volunteers for the Glasgow Children’s Hospital Charity. The charity established the gamer-in-residence position with the help of partners Devolver Digital and Neonhive after raising over £100,000 ($129,000) last year through efforts such as a Scottish Games Sale on Steam for a campaign called Games for the Weans ("weans" is a Scottish word for "kids"). Meanwhile, a $12,000 donation from Child's Play earlier this year will help fund the replacement of older Xbox 360 and PlayStation 3 systems with hospital-adapted consoles.

“Children in Jace’s ward can have long stays and intense treatment plans. This can include physical pain and a lot of new emotions. Often, parents can feel helpless at times. For my son Jace in particular, he is an experienced gamer. His blood disorder prevented him from going outside or starting school," Catherine Reid, the mother of seven-year-old Jace, said in a statement that I could barely get all the way through without welling up.

“When the gamer-in-residence came round to play Mario on the Nintendo, he immediately lit up and smiled. It was an instant energy boost for him mentally and physically. In reality, I think often what kids want is some quality time and gaming with new friends.”

This is a fantastic idea. Hospital stays can be tough for anyone, but especially so for kids and their families. You can help support the gamer-in-residence program and other charitable efforts that help young hospital patients through gaming by donating to the likes of the Glasgow Children’s Hospital Charity and Child's Play.

Meanwhile, Extra Life's Game Day, an event during which gamers and communities raise funds for children's hospitals, takes place on November 2. You can sign up to take part or make a donation over at the Extra Life website.

This article originally appeared on Engadget at https://www.engadget.com/gaming/a-scottish-childrens-hospital-now-has-a-gamer-in-residence-to-play-games-with-kids-182303354.html?src=rss

The FBI arrested an Alabama man for allegedly helping hack the SEC’s X account

A 25-year-old Alabama man has been arrested by the FBI for his alleged role in the takeover of the Securities and Exchange Commission's X account earlier this year. The hack resulted in a rogue tweet that falsely claimed bitcoin ETFs had been approved by the regulator, which temporarily juiced bitcoin prices.

Now, the FBI has identified Eric Council Jr. as one of the people allegedly behind the exploit. Council was charged with conspiracy to commit aggravated identity theft and access device fraud, according to the Justice Department. While the SEC had previously confirmed that its X account was compromised via a SIM swap attack, the indictment offers new details about how it was allegedly carried out.

According to the indictment, Council worked with co-conspirators, coordinating with them over SMS and encrypted messaging apps. These unnamed individuals allegedly sent him the personal information of someone, identified only as “C.L.,” who had access to the SEC X account. Council then printed a fake ID using that information and used it to buy a new SIM card in the victim’s name, as well as a new iPhone, according to the DoJ. He then coordinated with the other individuals so they could access the SEC’s X account, change its settings and send the rogue tweet, the indictment says.

The tweet from @SECGov, which came one day ahead of the SEC’s actual approval of 11 spot bitcoin ETFs, caused bitcoin prices to temporarily spike by more than $1,000. It also raised questions about why the high-profile account wasn’t secured with multi-factor authentication at the time of the attack. “Today’s arrest demonstrates our commitment to holding bad actors accountable for undermining the integrity of the financial markets,” SEC Inspector General Jeffrey said in a statement.

The indictment further notes that Council allegedly performed some seemingly incriminating searches on his personal computer. Among his searches were: “SECGOV hack,” “telegram sim swap,” “how can I know for sure if I am being investigated by the FBI,” “What are the signs that you are under investigation by law enforcement or the FBI even if you have not been contacted by them,” “what are some signs that the FBI is after you,” “Verizon store list,” “federal identity theft statute” and “how long does it take to delete telegram account.”

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/the-fbi-arrested-an-alabama-man-for-allegedly-helping-hack-the-secs-x-account-193508179.html?src=rss

Two Sudanese brothers accused of launching a dangerous series of DDoS attacks

Newly unsealed grand jury documents revealed that two Sudanese nationals allegedly attempted to launch thousands of distributed denial of service (DDoS) attacks on systems across the world. The documents allege that these attacks aimed to cause serious financial and technical harm to government entities and companies, and even physical harm in some cases.

The US Department of Justice (DoJ) unsealed charges against Ahmed Salah Yousif Omer and Alaa Salah Yusuuf Omer, both of whom were indicted by a federal grand jury. The two are allegedly connected to more than 35,000 DDoS attacks against hundreds of organizations, websites and networks, part of a “hacktivism” scheme run through the cybercrime group Anonymous Sudan and a for-profit cyberattack service.

Even though Anonymous Sudan claimed to be an activist group, the pair also held some companies’ and other entities’ systems for ransom, charging rates as high as $1,700 per month.

Both face indictments for their roles in the coordinated cyberattacks, including one count each of conspiracy to damage protected computers. Ahmed also faces three additional counts of damaging protected computers and could receive a statutory maximum sentence of life in federal prison, according to court records filed last June in the US District Court for the Central District of California.

The brothers’ activities date back to early 2023. The two used a distributed cloud attack tool (DCAT) referred to as “Skynet Botnet” in order to “conduct destructive DDoS attacks and publicly claim credit for them,” according to a DoJ statement. Ahmed posted a message on Anonymous Sudan’s Telegram channel that read, “The United States must be prepared, it will be a very big attack, like what we did in Israel, we will do in the United States ‘soon.’”

One of the indictments listed 145 “overt acts” against organizations and entities in the US, the European Union, Israel, Sudan and the United Arab Emirates (UAE). The Skynet Botnet attacks attempted to disrupt services and networks at airports, software platforms and companies including Cloudflare, X, PayPal and Microsoft; the Microsoft attack caused outages for Outlook and OneDrive in June of last year. The attacks also targeted state and federal government agencies and websites, including the Federal Bureau of Investigation (FBI), the Pentagon and the DoJ, as well as hospitals. One major attack on Cedars-Sinai Hospital in Los Angeles slowed health care services as patients were diverted to other hospitals, and it led to the hacking charges against Ahmed that carry a potential life sentence.

“3 hours+ and still holding,” Ahmed posted on Telegram in February, “they're trying desperately to fix it but to no avail Bomb our hospitals in Gaza, we shut down yours too, eye for eye...”

FBI special agents gathered evidence of the pair’s illegal activities, including logs showing that they sold access to the Skynet Botnet to more than 100 customers to carry out attacks. Victims that worked with investigators included Cloudflare, CrowdStrike, DigitalOcean, Google and PayPal, among others.

Several Amazon Web Services (AWS) clients were among Anonymous Sudan’s victims in the hacking-for-hire scheme, according to court records and an AWS statement. AWS security teams worked with FBI cybercrime investigators to trace the attacks back to “an array of cloud-based servers,” many of which were based in the US. The discovery helped the FBI determine that the Skynet Botnet attacks were coming from a DCAT rather than a traditional botnet, with the tool relaying DDoS traffic to its victims through cloud-based servers and open proxy resolvers.

Perhaps the group’s most brazen and dangerous attack took place in April 2023 and targeted Israel’s rocket alert system, Red Alert. The mobile app provides real-time updates on missile attacks and security threats. The DDoS attacks attempted to disrupt some of Red Alert’s internet domains. Ahmed claimed responsibility for the Red Alert attacks on Telegram, along with similar DDoS strikes on Israeli utilities and the Jerusalem Post news website.

“This group’s attacks were callous and brazen — the defendants went so far as to attack hospitals providing emergency and urgent care to patients,” US Attorney Martin Estrada said in a released statement. “My office is committed to safeguarding our nation’s infrastructure and the people who use it, and we will hold cyber criminals accountable for the grave harm they cause.”

Update, October 16, 7:25PM ET: This article was modified after publishing to make clear that AWS clients, rather than AWS itself, were the target of Anonymous Sudan.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/two-sudanese-brothers-accused-of-launching-a-dangerous-series-of-ddos-attacks-215638291.html?src=rss

NLRB accuses Apple of illegally restricting employee Slack and social media use

The National Labor Relations Board has accused Apple of infringing on its employees’ rights to advocate for better working conditions. In a complaint spotted by Reuters, the agency alleges Apple illegally fired an employee who had used Slack to advocate for workplace changes at the company. Separately, the NLRB accuses Apple of forcing another worker to delete a social media post.

The case stems from a 2021 complaint filed by #AppleToo co-organizer Janneke Parrish. In October of that year, Apple fired Parrish for allegedly sharing confidential information, a claim she denies. Per the complaint, Parrish used Slack and public social media posts to advocate for permanent remote work.

She also shared open letters critical of the tech giant, distributed a pay equity survey, and recounted instances of sexual and racial discrimination at Apple. According to the labor board, Apple’s policies bar employees from creating Slack channels without first obtaining permission from a manager. Instead, workers must direct their workplace concerns to either management or a “People Support” group the company maintains. An example of the type of concerns some employees used Slack to voice can be seen in a 2021 tweet from former Apple employee Ashley Gjøvik.

“We look forward to holding Apple accountable at trial for implementing facially unlawful rules and terminating employees for engaging in the core protected activity of calling out gender discrimination and other civil rights violations that permeated the workplace,” Parrish’s lawyer, Laurie Burgess, told Reuters.

Apple disputes Parrish's claims. "We are and have always been deeply committed to creating and maintaining a positive and inclusive workplace. We take all concerns seriously and we thoroughly investigate whenever a concern is raised and, out of respect for the privacy of any individuals involved, we do not discuss specific employee matters," an Apple spokesperson told Engadget. "We strongly disagree with these claims and will continue to share the facts at the hearing."

Provided Apple does not settle with the agency, an initial hearing with an administrative judge is scheduled for February. The NLRB is looking to force the company to change its policy and reimburse Parrish for the financial hardships she suffered due to her firing. Last week, the NLRB accused Apple of forcing employees to sign illegal and overly broad confidentiality, non-disclosure and non-compete agreements.

Update 7:09PM ET: Added comment from Apple. 

This article originally appeared on Engadget at https://www.engadget.com/big-tech/nlrb-accuses-apple-of-illegally-restricting-employee-slack-and-social-media-use-200059723.html?src=rss

US labor board accuses Apple of violating employees’ rights

Apple has been in hot water with the National Labor Relations Board (NLRB) since 2022, when the company was accused of union-busting. It agreed to review its labor practices last January, but soon after, the NLRB determined that Apple had violated workers’ rights. Today, the NLRB has struck again, accusing Apple of anti-union practices, including denying employees the right to discuss wages and forcing them to sign illegal nondisclosure, noncompete and confidentiality agreements.

Truth be told, this is pretty much the same song and dance covered since 2022. These complaints originate from former Apple employees Cher Scarlett and Ashley Gjøvik, who claimed, respectively, that Apple prohibited wage discussion and that CEO Tim Cook aimed to punish leakers. Gjøvik also alleged that the company prevented staff from talking to reporters.

Apple provided a statement to Reuters, which first reported on this complaint. The company claims it always honors employees’ rights to discuss wages, hours and working conditions. Should Apple not settle the case, an administrative judge will hear it in January.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/us-labor-board-accuses-apple-of-violating-employees-rights-164643503.html?src=rss

The unsealed New Mexico Snapchat lawsuit alleges the company ignored child safety

On September 5, New Mexico Attorney General Raúl Torrez filed a lawsuit against Snap. Torrez claimed that Snapchat has become a platform rife with sexual exploitation, child grooming and other dangerous behaviors. That legal complaint was heavily redacted, but today, Torrez announced in a press release that he has filed an unsealed complaint, which goes into detail on how Snap allegedly knowingly created an environment that exposed children to sexual predators.

The unredacted allegations include claims that Snap employees encountered 10,000 sextortion cases each month, yet the company never warned users because it did not want to “strike fear” among them. The statement also mentions that Snap employees regularly ignored user reports related to grooming and sextortion: one account with 75 separate reports remains active, as Snap refused to touch any of the content, citing “disproportionate admin costs.”

Snapchat’s disappearing messages have long been a draw of the platform, but the suit alleges that they lull users into a false sense of security, making it easier for predators to solicit explicit images and then extort victims for money by threatening to send those images to their friends and family.

The unredacted complaint also notes that Snapchat’s “Quick Add” feature was suggesting adult strangers to minors, and that Snap Map lets adults find minors’ accounts. The complaint points to one case in which a New Mexico man, Alejandro Marquez, used Quick Add to lure and rape an 11-year-old girl.

The complaint also alleges that Snap's upper management routinely ignored former trust and safety employees who pushed for additional and improved safety mechanisms. CEO Evan Spiegel “prioritized design” over safety and even refused to preserve abusive images for review and for law enforcement to use as proof. The company also didn’t keep its child sex abuse images database updated, even rolling back changes and deleting evidence of matches.

Even worse, predators using Snapchat have taken to creating a “Sextortion handbook” to teach others how to target users at schools. Given the complaint’s claims that 90 percent of all reports are ignored and that 30 percent of victims never received any assistance from Snap, predators could essentially roam freely.

That’s not the only issue New Mexico is concerned with. The complaint also accuses Snap of tolerating drug and gun sales. Drug dealers freely used the platform to advertise their wares without repercussions while also gaining “a huge amount of subscribers.” Teens have even died after using drugs they bought after seeing them advertised on Snapchat.

As harmful as these dangers are, Snapchat makes it difficult for parents to monitor their children’s use of the app; only 0.33 percent of teens have joined the Family Center. Snapchat also doesn’t truly verify users’ ages, allowing fake birthdays to pass inspection, which contradicts Snap’s claims that it doesn’t let children under 13 years old use the app.

Based on these accusations, it would be easy to conclude that Snapchat is a dangerous platform for underage users. The National Center on Sexual Exploitation’s Director of Corporate and Strategic Initiatives, Lina Nealon, said: “In my conversations with law enforcement, child safety experts, lawyers, survivors, and youth, I ask them what the most dangerous app is, and without fail, Snap is in the top two.”

In a statement Snap sent to Engadget last month when the lawsuit was filed, the company claimed to be diligently removing bad actors and working with law enforcement. Today, Snap provided the following statement in regards to the unsealed complaint:

"We designed Snapchat as a place to communicate with a close circle of friends, with built-in safety guardrails, and have made deliberate design choices to make it difficult for strangers to discover minors on our service. We continue to evolve our safety mechanisms and policies, from leveraging advanced technology to detect and block certain activity, to prohibiting friending from suspicious accounts, to working alongside law enforcement and government agencies, among so much more.

We care deeply about our work here and it pains us when bad actors abuse our service. We know that no one person, agency, or company can advance this work alone, which is why we are working collaboratively across the industry, government, and law enforcement to exchange information and concept stronger defenses."

This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-unsealed-new-mexico-snapchat-lawsuit-alleges-the-company-ignored-child-safety-154235977.html?src=rss