Virgin Orbit’s days of slinging satellites into space aboard aircraft-launched rockets came to an end on Thursday. After six years in business, Virgin’s satellite launch subsidiary announced via SEC filing that it does not have the funding to continue operations and will be shuttering for “the foreseeable future,” per CNBC. Nearly 90 percent of Virgin Orbit’s employees — 675 people in total — will be laid off immediately.
Virgin Orbit was founded in 2017 for the purpose of developing and commercializing LauncherOne, a satellite launch system fitted under a modified 747 airliner dubbed Cosmic Girl. The system was designed to put 500 pounds of cubesats into low Earth orbit by firing them in a rocket from said airliner flying at an altitude of 30,000 to 50,000 feet. Despite a string of early successes — both in terms of development milestones and expanding service contracts with the UK military — LauncherOne’s first official test in May of 2020 failed to deliver its simulated payload into orbit.
According to telemetry, LauncherOne has reached orbit! Everyone on the team who is not in mission control right now is going absolutely bonkers. Even the folks on comms are trying really hard not to sound too excited.
In all, Virgin Orbit made six total flights between 2020 and 2023, only four of them successful. The most recent attempt was dubbed Start Me Up and was supposed to mark the first commercial space launch from UK soil. Despite the rocket successfully separating from its parent aircraft, an upper stage “anomaly” prevented the rocket’s payload from entering orbit. It was later determined that a $100 fuel filter had failed, causing the fault.
As TechCrunch points out, Virgin Group founder Sir Richard Branson “threw upwards of $55 million to the sinking space company” in recent months, but Start Me Up’s embarrassing failure turned out to be the final straw. On March 16th, Virgin Orbit announced an “operational pause” and worker furlough for its roughly 750 employees as company leadership scrambled to find new funding sources. The company extended the furlough two weeks later and called it quits on Thursday.
“Unfortunately, we’ve not been able to secure the funding to provide a clear path for this company,” Virgin CEO Dan Hart said in an all-hands call obtained by CNBC. “We have no choice but to implement immediate, dramatic and extremely painful changes.”
Impacted employees will receive severance packages, according to Hart, including a cash payment, continued benefits and a “direct pipeline” to Virgin Galactic’s hiring department. Virgin Orbit’s two top executives will also receive “golden parachute” severances, which were approved by the company’s board, conveniently, back in mid-March right when the furloughs first took effect.
This article originally appeared on Engadget at https://www.engadget.com/virgin-orbit-officially-shutters-its-space-launch-operations-231755999.html?src=rss
With extreme weather events regularly flooding our coastal cities and burning out our rural communities, Google in its magnanimity has developed a new set of online tools that civil servants and community organizers alike can use in their efforts to stave off climate change-induced catastrophe.
Google already pushes extreme weather alerts to users in affected locations, providing helpful, easy-to-understand information about the event through the Search page — whether it's a winter storm warning, a flood advisory, a tornado warning, or what have you. The company has now added extreme heat alerts to that list. Googling details on the event will return everything from the predicted start and end dates of the heatwave to medical issues to be aware of during it and how to mitigate their impacts. The company is partnering with the Global Heat Health Information Network (GHHIN) to ensure that the information provided is both accurate and applicable.
It's a lot easier to keep the citizenry comfortable in hot weather if the cities themselves aren't sweltering, but our love affair with urban concrete has not been amenable to that goal. That's why Google has developed Tree Canopy, a feature within the company's Environmental Insights Explorer app, which "combines AI and aerial imagery so cities can understand their current tree coverage and better plan urban forestry initiatives," per Wednesday's release.
Tree Canopy is already in use in more than a dozen cities but, with Wednesday's announcement, the program will be drastically expanding to nearly 350 cities around the world, including Atlanta, Sydney, Lisbon and Paris. Google also offers a similarly designed AI to help plan the installation of "cool roofs," which reflect heat from the sun rather than absorb it like today's tar paper roofs do.
This article originally appeared on Engadget at https://www.engadget.com/google-unveils-ai-powered-planning-tools-to-help-beat-climate-changes-extreme-heat-103039212.html?src=rss
Humanity took another step towards its Ghost in the Shell future on Tuesday with Microsoft's unveiling of the new Security Copilot AI at its inaugural Microsoft Secure event. The automated enterprise-grade security system is powered by OpenAI's GPT-4, runs on the Azure infrastructure and promises admins the ability "to move at the speed and scale of AI."
Security Copilot is similar to the large language model (LLM) that drives the Bing Copilot feature, but with a training geared heavily towards network security rather than general conversational knowledge and web search optimization. "This security-specific model in turn incorporates a growing set of security-specific skills and is informed by Microsoft’s unique global threat intelligence and more than 65 trillion daily signals," Vasu Jakkal, Corporate Vice President of Microsoft Security, Compliance, Identity, and Management, wrote Tuesday.
“Just since the pandemic, we’ve seen an incredible proliferation [in corporate hacking incidents],” Jakkal told Bloomberg. For example, “it takes one hour and 12 minutes on average for an attacker to get full access to your inbox once a user has clicked on a phishing link. It used to be months or weeks for someone to get access.”
Security Copilot should serve as a force multiplier for overworked and under-supported network admins, a field in which Microsoft estimates there are more than 3 million open positions. "Our cyber-trained model adds a learning system to create and tune new skills," Jakkal explained. "Security Copilot then can help catch what other approaches might miss and augment an analyst’s work. In a typical incident, this boost translates into gains in the quality of detection, speed of response and ability to strengthen security posture."
Jakkal anticipates these new capabilities enabling Copilot-assisted admins to respond within minutes to emerging security threats, rather than days or weeks after the exploit is discovered. Being a brand new, untested AI system, Security Copilot is not meant to operate fully autonomously; a human admin needs to remain in the loop. “This is going to be a learning system,” she said. “It’s also a paradigm shift: Now humans become the verifiers, and AI is giving us the data.”
To safeguard the sensitive trade secrets and internal business documents Security Copilot is designed to protect, Microsoft has also committed to never using its customers' data to train future Copilot iterations. Users will also be able to dictate their privacy settings and decide how much of their data (or the insights gleaned from it) will be shared. The company has not revealed if, or when, such security features will become available for individual users as well.
This article originally appeared on Engadget at https://www.engadget.com/microsofts-new-security-copilot-will-help-network-admins-respond-to-threats-in-minutes-not-days-174252645.html?src=rss
In yet another embarrassing development for new Twitter boss Elon Musk, court filings published Friday reveal that portions of the social media site's source code — the base programming that makes Twitter possible — have been leaked online, the New York Times reports.
Per court filings, Twitter claimed copyright infringement in an effort to have the offending code taken down from GitHub, the collaborative programming platform where it had been posted. While the code was removed the same day, details as to how long the code had been left up were not made available, nor were the leak's scope and depth. As part of the takedown request, reminiscent of Raytheon's famous (and failed) attempt at court-sanctioned doxxing, Twitter also asked the US District Court for the Northern District of California to order GitHub to reveal both the identity of the user who posted the code and those who accessed and downloaded it.
The executives who spoke with the NYT are primarily concerned that revelations gleaned from the stolen code could empower future hacking efforts, either by revealing new exploits or by allowing bad actors to access Twitter user data. With the site's page functionality growing increasingly temperamental, and with scammers and white nationalists resurgent since Musk's takeover, the threat of outright hacking could prove the final straw for advertisers and users alike.
This article originally appeared on Engadget at https://www.engadget.com/portions-of-twitters-source-code-have-reportedly-leaked-online-234405620.html?src=rss
The internet has connected nearly everybody on the planet to a global network of information and influence, enabling humanity's best and brightest minds unparalleled collaborative capabilities. At least, that was the idea. More often than not these days, it serves as a popular medium for scamming your more terminally-online relatives out of large sums of money. Just ask Brett Johnson, a reformed scam artist who, at his rube-bilking pinnacle, was so good at separating fools from their cash that he founded an entire online learning forum to train a new generation of digital scam artists.
Johnson's cautionary tale is one of many in the new book, Fool Me Once: Scams, Stories, and Secrets from the Trillion-Dollar Fraud Industry, from Harvard Business Review Press. In it, Dr. Kelly Richmond Pope, Professor of Forensic Accounting at DePaul University, chronicles some of the 20th and 21st centuries' most heinous financial misdeeds — from Bernie Madoff's Ponzi scheme to Enron and VW, and all the Nigerian princes in between — exploring how the grifts worked and why they often left their marks none the wiser.
I was doing my morning reading before class, and a story about a reformed cybercriminal caught my attention. I always wanted to learn more about cybercrime, but I’d never interacted with a convicted cyber offender. Here was my chance.
I did a quick Google search and found his personal website. I reached out, explained my interest in his story, and waited. By evening, I had an email from gollum@anglerphish.com. I was immediately suspicious, but it was a legit address of Brett Johnson, the man from the article.
After a few email exchanges, we got on a call. He was super friendly and had the voice of a radio DJ. I invited him to come speak to my class at DePaul.
“I teach on Monday nights for the next eight weeks, so whatever works for you will work for me,” I said.
“How about I hop in my car and come visit your class this coming Monday?” he said.
I was a little shocked—Birmingham, Alabama was a long drive— but I immediately took him up on his offer.
Brett was born and raised in Hazard, Kentucky, “one of these areas like the Florida Panhandle and parts of Louisiana, where if you’re not fortunate enough to have a job, you may be involved in some sort of scam, hustle, fraud, whatever you want to call it,” he said.
Maybe there was something in the water because his entire family engaged in fraud. Insurance fraud, document forgery, drug trafficking, mining illegal coal. You name it, Brett’s family did it.
Young Brett was a natural liar. As he grew up, he participated in the family scams.
Eventually, he branched out on his own. His first scam: in 1994, he faked his own car accident. Second scam: eBay fraud.
He reached his peak in the mid-’90s, during the Beanie Baby heyday. The Royal Blue Peanut, essentially a cobalt stuffed elephant toy, sold for as much as $1,700. Only five hundred of the dolls were manufactured, making it one of the most valuable Beanie Babies.
Brett was trying to earn some extra money. A Beanie Baby scam seemed easy and quick.
He advertised on eBay that he was selling Royal Blue Peanut for $1,500. Except he was actually selling a gray Beanie Baby that he dipped in blue dye to look like Royal Blue Peanut for $1,500.
He accepted a bid and instructed the winner to send a US postal money order. “It protects us both,” he said via email. “As soon as I get that and it clears, I’ll send you your elephant.”
The bidder sent Brett the money order; Brett cashed it and sent her his version of the blue Beanie Baby. The phone rang almost immediately.
“This is not what I ordered!” yelled a voice on the other line.
Brett’s response was swift. “Lady, you ordered a blue elephant. I sent you a blue-ish elephant.”
Brett gave her the runaround for a few weeks until she finally disappeared.
This experience taught Brett two very important lessons about cybercrime:
Delay the victim as long as possible.
Victims rarely report the crime and eventually go away.
Brett continued to perfect his skills and graduated to selling pirated software. From pirated software, he moved to installing mod chips (small electronic devices used to disable artificial restrictions on computers or entertainment devices) in gaming systems so owners could play the pirated games. Then he began installing mod chips in cable boxes that would turn on all the pay-per-view channels on clients’ TVs for free. Then it was programming satellite DSS cards (which allow access to TV channels).
He was getting requests for his cable boxes from customers all over the United States and Canada. He was on a roll. Finally, it occurred to him: Why even fulfill the cable box order? Just take the money and run. He knew that no customer would complain about losing money in an illegal transaction. He stole even more money with this updated version of his cable box scam but soon worried that he’d get flagged for money laundering. He decided he needed a fake driver’s license so he could open up a bank account and launder the money through cash taken out of the ATM.
He found a person online who sold fake licenses. He sent a picture, $200, and waited. He waited and waited. Then reality punched him in the face: He’d been scammed. The nerve.
No one hates being deceived more than someone who deceives for a living. Brett was so frustrated he started ShadowCrew.com, an online forum where people could learn the ins and outs of cybercrime. Forbes called it “a one-stop marketplace for identity theft.” The ShadowCrew operated from August 2002 through November 2004, attracting as many as four thousand criminals or aspiring criminals. It’s considered the forerunner of today’s cybercrime forums and marketplaces; Brett is known as the Godfather of Cybercrime.
“Before ShadowCrew, the only avenue you had to commit online crime was a rolling chat board,” he told my students. “It’s called a IRC chat session and stands for Internet Relay Chat.” The problem with these rolling chat screens was that you had no idea if you were talking to a cop or a crook. Either was possible.
ShadowCrew gave criminals a trust mechanism. It was a large communication channel where people in different time zones could reference conversations. “By looking at someone’s screen name, you could tell if you could trust that person, if you could network with that person, or if you could learn from that person,” he said. The screen name on the dark web became the criminal’s brand name. They keep this brand name throughout their entire criminal tenure and it helps establish trust with others, so the screen name matters.
When Brett was in class, he showed my students how information ended up on the dark web. “You can find social security numbers, home addresses, driver’s license numbers, credit card numbers on the dark web for $3,” he explained. All the information is there, practically begging to be taken.
In 2004, authorities arrested twenty-eight men in six countries, claiming they had swapped 1.7 million stolen card numbers and caused $4.3 million in losses. But Brett escaped. He was placed on the Secret Service’s Most Wanted list. After four months on the run, he was arrested.
Brett has been in and out of prison five times and spent 7.5 years in federal prison. Today he considers himself a reformed white-collar offender.
This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-fool-me-once-kelly-richmond-pope-harvard-business-review-press-143031129.html?src=rss
OpenAI was forced to take its wildly popular ChatGPT bot offline for emergency maintenance on Monday after a user was able to exploit a bug in the system to recall the titles of other users' chat histories. On Friday, the company announced its initial findings from the incident.
In Monday's incident, users posted screenshots on Reddit showing that their ChatGPT sidebars featured previous chat histories from other users. Only the title of each conversation, not the text itself, was visible. OpenAI, in response, took the bot offline for nearly 10 hours to investigate. The results of that investigation revealed a deeper security issue: the chat history bug may have also potentially revealed personal data from 1.2 percent of ChatGPT Plus subscribers (a $20/month enhanced access package).
"In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time," the OpenAI team wrote Friday. The issue has since been patched; OpenAI identified the faulty library as redis-py, the open-source Redis client library.
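The general failure mode behind bugs of this class — a request canceled after it is sent but before its reply is read, leaving a stale response on a shared connection that the next borrower then receives — can be sketched in a few lines. This is a toy illustration only, not OpenAI's or redis-py's actual code; `FakeConnection` and `fetch` are invented names:

```python
import asyncio

class FakeConnection:
    """A single shared connection whose responses arrive in FIFO order."""
    def __init__(self):
        self._responses = asyncio.Queue()

    async def send(self, request):
        # The pretend server immediately queues a reply tagged with its request.
        await self._responses.put(f"response-for:{request}")

    async def recv(self):
        return await self._responses.get()

async def fetch(conn, request, cancel_before_recv=False):
    """Send a request; optionally bail out before reading the reply,
    as a canceled request might, leaving its response queued."""
    await conn.send(request)
    if cancel_before_recv:
        return None
    return await conn.recv()

async def main():
    conn = FakeConnection()
    # User A's request is interrupted after sending but before reading:
    # its response is left sitting on the shared connection.
    await fetch(conn, "user-A-history", cancel_before_recv=True)
    # User B then borrows the same connection and reads the first
    # queued reply -- which belongs to user A.
    got = await fetch(conn, "user-B-history")
    print(got)  # prints "response-for:user-A-history"
    return got

asyncio.run(main())
```

A common remedy is to close, rather than reuse, any connection whose request was interrupted before its response was fully read, so a stale reply can never be handed to the next caller.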
The company has downplayed the likelihood of such a breach occurring, arguing that a user's data would only have been exposed if either of the following conditions had been met:
- Open a subscription confirmation email sent on Monday, March 20, between 1 a.m. and 10 a.m. Pacific time. Due to the bug, some subscription confirmation emails generated during that window were sent to the wrong users. These emails contained the last four digits of another user’s credit card number, but full credit card numbers did not appear. It’s possible that a small number of subscription confirmation emails might have been incorrectly addressed prior to March 20, although we have not confirmed any instances of this.
- In ChatGPT, click on “My account,” then “Manage my subscription” between 1 a.m. and 10 a.m. Pacific time on Monday, March 20. During this window, another active ChatGPT Plus user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date might have been visible. It’s possible that this also could have occurred prior to March 20, although we have not confirmed any instances of this.
The company has taken additional steps to prevent this from happening again, including adding redundant checks to library calls; it has also "programmatically examined our logs to make sure that all messages are only available to the correct user" and "improved logging to identify when this is happening and fully confirm it has stopped." OpenAI says it has also reached out to alert affected users of the issue.