SpaceX wants to put Starlink internet on rural school buses

Starlink satellite internet access has already spread to boats and RVs, and now it might accompany your child on the way home from class. SpaceX told the FCC in a filing that it's piloting Starlink aboard school buses in the rural US. The project would keep students connected during lengthy rides (over an hour in the pilot), ensuring they can complete internet-related homework in a timely fashion even if broadband is slow or non-existent at home.

The spaceflight company simultaneously backed FCC chair Jessica Rosenworcel's May proposal to bring WiFi to school buses, and said it supported the regulator's efforts to fund school and library internet access through the E-Rate program. To no one's surprise, SpaceX felt it had the best solution thanks to rapid satellite deployment, portable dishes and fast service for the "most remote" areas.

We've asked the FCC and SpaceX for comment, and will let you know if they respond. The pitch comes just two months after the FCC cleared the use of Starlink in vehicles, noting that it would serve the "public interest" to keep people online while on the move. The concept isn't new — Google outfitted school buses with WiFi in 2018 following tests, for example.

There's no guarantee the FCC will embrace SpaceX and fund bus-based Starlink service. The Commission rejected SpaceX's request for $885.5 million in help through the Rural Digital Opportunity Fund, and the firm responded by blasting the rejection as "grossly unfair" and allegedly unsupported by evidence. Satellite internet service theoretically offers more consistent rural coverage than cellular data, though, and Starlink competitors like Amazon's Project Kuiper have yet to deploy in earnest.

LastPass was hacked, but it says no user data was compromised

In August, LastPass admitted that an "unauthorized party" gained entry into its system. Any news about a password manager getting hacked can be alarming, but the company is now reassuring users that their logins and other information weren't compromised in the incident.

In his latest update about the incident, LastPass CEO Karim Toubba said that the company's investigation with cybersecurity firm Mandiant has revealed that the bad actor had internal access to its systems for four days. They were able to steal some of the password manager's source code and technical information, but their access was limited to the service's development environment that isn't connected to customers' data and encrypted vaults. Further, Toubba pointed out that LastPass has no access to users' master passwords, which are needed to decrypt their vaults.

The CEO said there's no evidence that this incident "involved any access to customer data or encrypted password vaults." They also found no evidence of unauthorized access beyond those four days and of any traces that the hacker injected the systems with malicious code. Toubba explained that the bad actor was able to infiltrate the service's systems by compromising a developer's endpoint. The hacker then impersonated the developer "once the developer had successfully authenticated using multi-factor authentication." 

Back in 2015, LastPass suffered a security breach that compromised users' email addresses, authentication hashes, password reminders and other information. A similar breach would be more devastating today, now that the service supposedly has over 33 million registered customers. While LastPass isn't asking users to do anything to keep their data safe this time, it's always good practice not to reuse passwords and to switch on multi-factor authentication.

Microsoft Teams has been storing authentication tokens in plaintext

Microsoft Teams stores authentication tokens in unencrypted plaintext, allowing attackers to potentially control communications within an organization, according to the security firm Vectra. The flaw affects the desktop app for Windows, Mac and Linux built using Microsoft's Electron framework. Microsoft is aware of the issue but said it has no plans for a fix anytime soon, since an exploit would also require network access.

According to Vectra, a hacker with local or remote system access could steal the credentials for any Teams user currently online, then impersonate them even when they're offline. They could also pretend to be the user through apps associated with Teams, like Skype or Outlook, while bypassing the multifactor authentication (MFA) usually required. 
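The MFA bypass follows from how bearer tokens work: a token is proof of an authentication that already happened, so a server that validates only the token cannot tell whether the legitimate user or a thief is presenting it. A minimal sketch (all names and token values here are fabricated for illustration, not Teams internals):

```python
from typing import Optional

# Tokens the server has issued, mapped to the accounts they act as.
# This one was minted after "alice" completed her MFA login.
VALID_TOKENS = {"eyJhbGciOi.fake.token": "alice@example.com"}

def authenticate(headers: dict) -> Optional[str]:
    """Return the account a request acts as, based solely on the token.

    Nothing here can distinguish the original client from an attacker
    replaying a stolen token -- MFA happened before the token was issued.
    """
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        return VALID_TOKENS.get(auth[len("Bearer "):])
    return None

# Alice's own client and an attacker replaying the stolen token
# are indistinguishable to the server:
stolen = "eyJhbGciOi.fake.token"
assert authenticate({"Authorization": f"Bearer {stolen}"}) == "alice@example.com"
assert authenticate({}) is None  # no token, no access
```

This is why Vectra's finding matters even though the tokens themselves are the product of a legitimate MFA login: possession of the token is the whole credential.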

"This enables attackers to modify SharePoint files, Outlook mail and calendars, and Teams chat files," Vectra security architect Connor Peoples wrote. "Even more damaging, attackers can tamper with legitimate communications within an organization by selectively destroying, exfiltrating, or engaging in targeted phishing attacks."

Vectra created a proof-of-concept exploit that allowed its researchers to send a message to the account of the credential holder via an access token. "Assuming full control of critical seats — like a company’s Head of Engineering, CEO, or CFO — attackers can convince users to perform tasks damaging to the organization."

The problem is mainly limited to the desktop app, because the Electron framework (which essentially packages a web app as a desktop application) has "no additional security controls to protect cookie data," unlike modern web browsers. As such, Vectra recommends not using the desktop app until a patch is created, and using the web application instead.
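Because the tokens sit on disk unencrypted, finding them takes nothing more than a file read by any process running as the user. Teams tokens are JWTs, which have a recognizable three-segment base64url shape, so a crude scan is enough to spot them. A sketch under stated assumptions — the storage filename and the token below are fabricated stand-ins, not actual Teams paths:

```python
import re
import tempfile
from pathlib import Path

# JWTs are three base64url segments joined by dots and typically start
# with "eyJ" (the base64 encoding of '{"'). This pattern is deliberately
# crude; it exists only to show how little effort discovery takes.
JWT_PATTERN = re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

def find_tokens(path: Path) -> list:
    """Return JWT-looking strings from any file this process can read."""
    return JWT_PATTERN.findall(path.read_text(errors="ignore"))

# Simulate an app's local storage file containing a fake token.
with tempfile.TemporaryDirectory() as d:
    store = Path(d) / "Cookies"  # stand-in name for an app data file
    store.write_text(
        "authtoken=eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhbGljZSJ9.c2ln; other=1"
    )
    tokens = find_tokens(store)

# The planted fake token is recovered with one regex pass.
assert tokens == ["eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhbGljZSJ9.c2ln"]
```

Browsers mitigate exactly this by encrypting cookie stores at rest with OS-level keys; the complaint against the Electron build is that no equivalent protection is applied.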

When informed by cybersecurity news site Dark Reading of the vulnerability, Microsoft said it "does not meet our bar for immediate servicing as it requires an attacker to first gain access to a target network," adding that it would consider addressing it in a future product release. 

However, threat hunter John Bambenek told Dark Reading it could provide a secondary means for "lateral movement" in the event of a network breach. He also noted that Microsoft is moving toward Progressive Web Apps that "would mitigate many of the concerns currently brought by Electron."

Hitting the Books: How can privacy survive in a world that never forgets?

As I write this, Amazon is announcing its purchase of iRobot, adding its room-mapping robotic vacuum technology to the company's existing home surveillance suite, the Ring doorbell and prototype aerial drone. This is in addition to Amazon already knowing what you order online, what websites you visit, what foods you eat and, soon, every last scrap of personal medical data you possess. But hey, free two-day shipping, amirite?  

The trend of our gadgets and infrastructure constantly, often invasively, monitoring their users shows little sign of slowing — not when there's so much money to be made. Of course it hasn't been all bad for humanity, what with AI's help in advancing medical, communications and logistics tech in recent years. In his new book, Machines Behaving Badly: The Morality of AI, Dr. Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, explores the dual potential of artificial intelligence and machine learning systems and, in the excerpt below, how to claw back a bit of your privacy from an industry built for omniscience.

Excerpted from Machines Behaving Badly: The Morality of AI by Toby Walsh. Published by La Trobe University Press. Copyright © 2022 by Toby Walsh. All rights reserved.


Privacy in an AI World

The Second Law of Thermodynamics states that the total entropy of a system – the amount of disorder – only ever increases. In other words, the amount of order only ever decreases. Privacy is similar to entropy. Privacy is only ever decreasing. Privacy is not something you can take back. I cannot take back from you the knowledge that I sing Abba songs badly in the shower. Just as you can’t take back from me the fact that I found out about how you vote.

There are different forms of privacy. There’s our digital online privacy, all the information about our lives in cyberspace. You might think our digital privacy is already lost. We have given too much of it to companies like Facebook and Google. Then there’s our analogue offline privacy, all the information about our lives in the physical world. Is there hope that we’ll keep hold of our analogue privacy?

The problem is that we are connecting ourselves, our homes and our workplaces to lots of internet-enabled devices: smartwatches, smart light bulbs, toasters, fridges, weighing scales, running machines, doorbells and front door locks. And all these devices are interconnected, carefully recording everything we do. Our location. Our heartbeat. Our blood pressure. Our weight. The smile or frown on our face. Our food intake. Our visits to the toilet. Our workouts.

These devices will monitor us 24/7, and companies like Google and Amazon will collate all this information. Why do you think Google bought both Nest and Fitbit recently? And why do you think Amazon acquired two smart home companies, Ring and Blink Home, and built their own smartwatch? They’re in an arms race to know us better.

The benefits to the companies are obvious. The more they know about us, the more they can target us with adverts and products. There’s one of Amazon’s famous ‘flywheels’ in this. Many of the products they will sell us will collect more data on us. And that data will help target us to make more purchases.

The benefits to us are also obvious. All this health data can help make us live healthier. And our longer lives will be easier, as lights switch on when we enter a room, and thermostats move automatically to our preferred temperature. The better these companies know us, the better their recommendations will be. They’ll recommend only movies we want to watch, songs we want to listen to and products we want to buy.

But there are also many potential pitfalls. What if your health insurance premiums increase every time you miss a gym class? Or your fridge orders too much comfort food? Or your employer sacks you because your smartwatch reveals you took too many toilet breaks?

With our digital selves, we can pretend to be someone that we are not. We can lie about our preferences. We can connect anonymously with VPNs and fake email accounts. But it is much harder to lie about your analogue self. We have little control over how fast our heart beats or how widely the pupils of our eyes dilate.

We’ve already seen political parties manipulate how we vote based on our digital footprint. What more could they do if they really understood how we respond physically to their messages? Imagine a political party that could access everyone’s heartbeat and blood pressure. Even George Orwell didn’t go that far.

Worse still, we are giving this analogue data to private companies that are not very good at sharing their profits with us. When you send your saliva off to 23andMe for genetic testing, you are giving them access to the core of who you are, your DNA. If 23andMe happens to use your DNA to develop a cure for a rare genetic disease that you possess, you will probably have to pay for that cure. The 23andMe terms and conditions make this very clear:

You understand that by providing any sample, having your Genetic Information processed, accessing your Genetic Information, or providing Self-Reported Information, you acquire no rights in any research or commercial products that may be developed by 23andMe or its collaborating partners. You specifically understand that you will not receive compensation for any research or commercial products that include or result from your Genetic Information or Self-Reported Information.

A Private Future

How, then, might we put safeguards in place to preserve our privacy in an AI-enabled world? I have a couple of simple fixes. Some are regulatory and could be implemented today. Others are technological and are something for the future, when we have AI that is smarter and more capable of defending our privacy.

The technology companies all have long terms of service and privacy policies. If you have lots of spare time, you can read them. Researchers at Carnegie Mellon University calculated that the average internet user would have to spend 76 work days each year just to read all the things that they have agreed to online. But what then? If you don’t like what you read, what choices do you have?

All you can do today, it seems, is log off and not use their service. You can’t demand greater privacy than the technology companies are willing to provide. If you don’t like Gmail reading your emails, you can’t use Gmail. Worse than that, you’d better not email anyone with a Gmail account, as Google will read any emails that go through the Gmail system.

So here’s a simple alternative. All digital services must provide four changeable levels of privacy.

Level 1: They keep no information about you beyond your username, email and password.

Level 2: They keep information on you to provide you with a better service, but they do not share this information with anyone.

Level 3: They keep information on you that they may share with sister companies.

Level 4: They consider the information that they collect on you as public.

And you can change the level of privacy with one click from the settings page. And any changes are retrospective, so if you select Level 1 privacy, the company must delete all information they currently have on you, beyond your username, email and password. In addition, there’s a requirement that all data beyond Level 1 privacy is deleted after three years unless you opt in explicitly for it to be kept. Think of this as a digital right to be forgotten.
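The scheme above can be modelled in a few lines. This is a minimal sketch of the proposal, not any existing service's API; the field names and record structure are assumptions made for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import IntEnum

class PrivacyLevel(IntEnum):
    ACCOUNT_ONLY = 1  # username, email and password only
    PRIVATE = 2       # kept to improve the service, never shared
    SHARED = 3        # may be shared with sister companies
    PUBLIC = 4        # treated as public

# The only data Level 1 permits a service to retain.
ACCOUNT_FIELDS = {"username", "email", "password_hash"}

@dataclass
class UserRecord:
    level: PrivacyLevel
    data: dict  # field name -> (value, date_collected)

    def set_level(self, new_level: PrivacyLevel) -> None:
        """One-click level change, applied retrospectively: dropping to
        Level 1 deletes everything beyond the basic account fields."""
        self.level = new_level
        if new_level == PrivacyLevel.ACCOUNT_ONLY:
            self.data = {k: v for k, v in self.data.items()
                         if k in ACCOUNT_FIELDS}

    def expire(self, now: datetime, opted_in: bool = False) -> None:
        """Digital right to be forgotten: non-account data older than
        three years is deleted unless the user explicitly opts in."""
        if opted_in:
            return
        cutoff = now - timedelta(days=3 * 365)
        self.data = {k: (v, t) for k, (v, t) in self.data.items()
                     if k in ACCOUNT_FIELDS or t >= cutoff}

user = UserRecord(PrivacyLevel.SHARED, {
    "username": ("toby", datetime(2018, 1, 1)),
    "email": ("t@example.com", datetime(2018, 1, 1)),
    "heart_rate_log": ("...", datetime(2019, 6, 1)),
})
user.expire(datetime(2023, 1, 1))          # stale heart-rate data is purged
assert "heart_rate_log" not in user.data
user.set_level(PrivacyLevel.ACCOUNT_ONLY)  # retrospective downgrade
assert set(user.data) == {"username", "email"}
```

The key design point is that both operations delete data the service already holds, rather than merely gating future collection — which is what distinguishes this proposal from today's opt-out settings.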

I grew up in the 1970s and 1980s. My many youthful transgressions have, thankfully, been lost in the mists of time. They will not haunt me when I apply for a new job or run for political office. I fear, however, for young people today, whose every post on social media is archived and waiting to be printed off by some prospective employer or political opponent. This is one reason why we need a digital right to be forgotten.

More friction may help. Ironically, the internet was invented to remove frictions – in particular, to make it easier to share data and communicate more quickly and effortlessly. I’m starting to think, however, that this lack of friction is the cause of many problems. Our physical highways have speed and other restrictions. Perhaps the internet highway needs a few more limitations too?

One such problem is described in a famous cartoon: ‘On the internet, no one knows you’re a dog.’ If we introduced instead a friction by insisting on identity checks, then certain issues around anonymity and trust might go away. Similarly, resharing restrictions on social media might help prevent the distribution of fake news. And profanity filters might help prevent posting content that inflames.

On the other side, other parts of the internet might benefit from fewer frictions. Why is it that Facebook can get away with behaving badly with our data? One of the problems here is there’s no real alternative. If you’ve had enough of Facebook’s bad behaviour and log off – as I did some years back – then it is you who will suffer most. You can’t take all your data, your social network, your posts, your photos to some rival social media service. There is no real competition. Facebook is a walled garden, holding onto your data and setting the rules. We need to open that data up and thereby permit true competition.

For far too long the tech industry has been given too many freedoms. Monopolies are starting to form. Bad behaviours are becoming the norm. Many internet businesses are poorly aligned with the public good.

Any new digital regulation is probably best implemented at the level of nation-states or close-knit trading blocs. In the current climate of nationalism, bodies such as the United Nations and the World Trade Organization are unlikely to reach useful consensus. The common values shared by members of such large transnational bodies are too weak to offer much protection to the consumer.

The European Union has led the way in regulating the tech sector. The General Data Protection Regulation (GDPR), and the upcoming Digital Services Act (DSA) and Digital Markets Act (DMA), are good examples of Europe’s leadership in this space. A few nation-states have also started to pick up their game. The United Kingdom introduced a Google tax in 2015 to try to make tech companies pay a fair share of tax. And shortly after the terrible shootings in Christchurch, New Zealand, in 2019, the Australian government introduced legislation to fine companies up to 10 per cent of their annual revenue if they fail to take down abhorrent violent material quickly enough. Unsurprisingly, fining tech companies a significant fraction of their global annual revenue appears to get their attention.

It is easy to dismiss laws in Australia as somewhat irrelevant to multinational companies like Google. If they’re too irritating, they can just pull out of the Australian market. Google’s accountants will hardly notice the blip in their worldwide revenue. But national laws often set precedents that get applied elsewhere. Australia followed up with its own Google tax just six months after the United Kingdom. California introduced its own version of the GDPR, the California Consumer Privacy Act (CCPA), just a month after the regulation came into effect in Europe. Such knock-on effects are probably the real reason that Google has argued so vocally against Australia’s new Media Bargaining Code. They greatly fear the precedent it will set.

That leaves me with a technological fix. At some point in the future, all our devices will contain AI agents helping to connect us that can also protect our privacy. AI will move from the centre to the edge, away from the cloud and onto our devices. These AI agents will monitor the data entering and leaving our devices. They will do their best to ensure that data about us that we don’t want shared isn’t.

We are perhaps at the technological low point today. To do anything interesting, we need to send data up into the cloud, to tap into the vast computational resources that can be found there. Siri, for instance, doesn’t run on your iPhone but on Apple’s vast servers. And once your data leaves your possession, you might as well consider it public. But we can look forward to a future where AI is small enough and smart enough to run on your device itself, and your data never has to be sent anywhere.

This is the sort of AI-enabled future where technology and regulation will not simply help preserve our privacy, but even enhance it. Technical fixes can only take us so far, though. It is abundantly clear that we also need more regulation.

Inaccurate maps are delaying the Bipartisan Infrastructure Law’s broadband funding

Nearly nine months after Congress passed President Biden’s $1 trillion infrastructure bill, the federal government has yet to allocate any of the $42.5 billion in funding the legislation set aside for expanding broadband service in underserved communities, according to The Wall Street Journal. Under the law, the Commerce Department can’t release that money until the Federal Communications Commission (FCC) publishes new coverage maps that more accurately show homes and businesses that don’t have access to high-speed internet.

Inaccurate coverage data has long derailed efforts by the federal government to address the rural broadband divide. The previous system the FCC used to map internet availability relied on Form 477 filings from service providers. Those documents have been known for their errors and exaggerations. In 2020, Congress began requiring the FCC to collect more robust coverage data as part of the Broadband DATA Act. However, it wasn’t until early 2021 that lawmakers funded the mandate and in August of that same year that the Commission published its first updated map.

Following a contractor dispute, the FCC will publish its latest maps sometime in mid-November. Once they're available, both consumers and companies will have a chance to challenge the agency’s data. As a result of that extra step, funding from the broadband plan likely won’t begin making its way to ISPs until the end of 2023, according to one analyst The Journal interviewed.

“We understand the urgency of getting broadband out there to everyone quickly,” Alan Davidson, the head of the Commerce Department unit responsible for allocating the funding, told the Journal. “We also know that we get one shot at this and we want to make sure we do it right.”

The man who built his own ISP to avoid huge fees is expanding his service

Given a choice between settling for pathetically slow internet speeds from AT&T or paying Comcast $50,000 to expand to his rural home, Michigan resident Jared Mauch chose option "C": starting up his own fiber internet service provider. Now, he's expanding his service from about 70 customers to nearly 600 thanks to funding aimed at expanding access to broadband internet, Ars Technica has reported. 

Last year, the US government's Coronavirus State and Local Fiscal Recovery Funds allocated $71 million to Michigan's Washtenaw county for infrastructure projects, with a part of that dedicated to broadband expansion. Mauch subsequently won a bid to wire up households "known to be unserved or underserved based on [an] existing survey," according to the RFP.

"They had this gap-filling RFP, and in my own wild stupidity or brilliance, I'm not sure which yet, I bid on the whole project [in my area] and managed to win through that competitive bidding process," he told Ars.

He'll now need to expand from 14 to about 52 miles of fiber to complete the project, including at least a couple of homes that require a half mile of fiber for a single house. That'll cost $30,000 for each of those homes, but his installation fees are typically $199.

Customers can choose from 100Mbps up/down internet speeds for $55 per month, or 1Gbps with unlimited data for $79 a month. The contract requires completion by 2026, but he aims to be done by around the end of 2023. He's already hooked up some of the required addresses, issuing a press release after the first was connected in June, with a local commissioner calling it "a transformational moment for our community." 

Running an ISP isn't even Mauch's day job, as he normally works as an Akamai network architect. Still, his service has become a must in the region, and he even provides fiber backhaul for a major mobile carrier. "I'm definitely a lot more well-known by all my neighbors... I'm saved in people's cell phones as 'fiber cable guy,'" he said. Check out the full story at Ars Technica.