Google begins the rollout of Play Store safety listings

Starting today, you'll see a new section within Play Store listings that shows how apps collect, store and share data. Google first announced the feature in May 2021 and gave us a glimpse of what it would look like in July. In the data safety section, you won't only see what kind of data an app collects, but also whether the app needs that data to function and whether data collection is optional. It will also show why a specific piece of information is collected and whether the developer shares your data with third parties.

The developer can also add information on what security measures they practice, such as whether they encrypt data in transit and whether you can ask them to delete your information. In addition, the section will show whether an app has validated its security practices against a global standard. And, for parents and guardians of young kids, it can also show whether an app is suitable for children.


Google says it's rolling out the feature gradually, so if you don't see the section immediately, it should start showing up for you in the coming weeks. Take note that the tech giant is giving developers until July 20th to have a data safety section in place, so some apps might not have one yet even if you're already seeing the feature on other listings.

Google wants devices to know when you’re paying attention

Google has been working on a "new interaction language" for years, and today it's sharing a peek at what it's developed so far. The company is showcasing a set of movements it's defined in its new interaction language in the first episode of a new series called In the lab with Google ATAP. That acronym stands for Advanced Technology and Projects, and it's Google's more-experimental division that the company calls its "hardware invention studio."

The idea behind this "interaction language" is that the machines around us could be more intuitive and perceptive of our desire to interact with them by better understanding our nonverbal cues. "The devices that surround us... should feel like a best friend," senior interaction designer at ATAP Lauren Bedal told Engadget. "They should have social grace."

Specifically (so far, anyway), ATAP is analyzing our movements (as opposed to vocal tones or facial expressions) to see if we're ready to engage, so devices know when to remain in the background instead of bombarding us with information. The team used the company's Soli radar sensor to detect the proximity, direction and pathways of people around it. Then, it parsed that data to determine if someone is glancing at, passing, approaching or turning towards the sensor. 

Google formalized this set of four movements, calling them Approach, Glance, Turn and Pass. These actions can be used as triggers for commands or reactions on things like smart displays or other types of ambient computers. If this sounds familiar, it's because some of these gestures already work on existing Soli-enabled devices. The Pixel 4, for example, had a feature called Motion Sense that would snooze alarms when you waved at it, or wake the phone if it detected your hand coming towards it. Google's Nest Hub Max used its camera to see when you'd raised your open palm, and would pause your media playback in response.
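
To make the idea concrete, here's a minimal sketch of how the four movements might be mapped to device reactions. All of the names here (the Movement enum, the AmbientDisplay class and its methods) are hypothetical illustrations, not Google's actual API.

```python
from enum import Enum, auto

class Movement(Enum):
    # The four movements Google formalized in its interaction language
    APPROACH = auto()
    GLANCE = auto()
    TURN = auto()
    PASS = auto()

class AmbientDisplay:
    """Hypothetical smart display that reacts to detected movements."""

    def on_movement(self, movement: Movement) -> None:
        if movement is Movement.APPROACH:
            self.show_glanceable_info()   # e.g. upcoming appointments
        elif movement is Movement.GLANCE:
            self.show_snippet()           # quick peek at a notification
        elif movement is Movement.TURN:
            self.advance_step()           # next step of a recipe, resume a video
        elif movement is Movement.PASS:
            self.stay_in_background()     # user isn't ready to engage

    def show_glanceable_info(self): print("Showing calendar and reminders")
    def show_snippet(self): print("Showing a short notification snippet")
    def advance_step(self): print("Advancing to the next step")
    def stay_in_background(self): print("Staying quiet")

# Example: a radar pipeline (not shown) classifies a movement and hands it off
AmbientDisplay().on_movement(Movement.GLANCE)
```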

Approach feels similar to existing implementations. It allows devices to tell when you (or a body part) are getting closer, so they can bring up information you might be near enough to see. Like the Pixel 4, the Nest Hub uses a similar approach: when it knows you're close by, it pulls up your upcoming appointments or reminders. It'll also show touch commands on a countdown screen if you're near, and switch to a larger, easy-to-read font when you're further away.
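
As a rough illustration of that distance-dependent behavior, here's a hedged sketch of proximity-based UI scaling. The thresholds and function names are made up for illustration and aren't drawn from Google's implementation.

```python
def layout_for_distance(distance_m: float) -> dict:
    """Pick a UI layout based on how far away the viewer is (hypothetical values)."""
    if distance_m < 1.0:
        # Close enough to touch: show interactive controls
        return {"font_size": 18, "show_touch_controls": True}
    elif distance_m < 3.0:
        # In the room but out of reach: show glanceable info
        return {"font_size": 32, "show_touch_controls": False}
    else:
        # Far away: only large, easy-to-read text
        return {"font_size": 56, "show_touch_controls": False}

print(layout_for_distance(0.5))   # {'font_size': 18, 'show_touch_controls': True}
print(layout_for_distance(4.2))   # {'font_size': 56, 'show_touch_controls': False}
```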

While Glance may seem like it overlaps with Approach, Bedal explained that it can be for understanding where a person's attention is when they're using multiple devices. "Say you're on a phone call with someone and you happen to glance at another device in the house," she said. "Since we know you may have your attention on another device, we can offer a suggestion to maybe transfer your conversation to a video call." Glance can also be used to quickly display a snippet of information.

Animation: an example of the Glance action, in which a man looks at a display to his right and its screen reacts in response. (Image: Google)

What's less familiar are Turn and Pass. "With turning towards and away, we can allow devices to help automate repetitive or mundane tasks," Bedal said. It can be used to determine when you're ready for the next step in a multi-stage process, like following an onscreen recipe, or something repetitive, like starting and stopping a video. Pass, meanwhile, tells the device you're not ready to engage.

It's clear that Approach, Pass, Turn and Glance build on what Google has implemented in bits and pieces in its products over the years. But the ATAP team also played with combining some of these actions, like passing and glancing or approaching and glancing, which is something we've yet to see much of in the real world.

For all this to work well, Google's sensors and algorithms need to be incredibly adept not only at recognizing when you're making a specific action, but also when you're not. Inaccurate gesture recognition can turn an experience that's meant to be helpful into one that's incredibly frustrating. 

ATAP's head of design Leonardo Giusti said, "That's the biggest challenge we have with these signals." He noted that plugged-in devices have more power available to run complex algorithms than mobile devices do. Part of the effort to make the system more accurate is collecting more data to train machine learning algorithms on, including not just the correct actions but also similar-looking incorrect ones, so the models learn what not to accept.
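
As a rough sketch of what training on "similar but incorrect" examples might look like, the snippet below labels near-miss movements as an explicit background class so a classifier learns to reject them. The class names and feature vectors are illustrative assumptions, not ATAP's actual pipeline.

```python
# Hypothetical training examples: each is (radar_feature_vector, label).
# The "none" label covers near-miss movements the model should learn to reject,
# e.g. walking close without stopping, or looking past the device rather than at it.
training_examples = [
    ([0.9, 0.2, 0.1], "approach"),   # clear approach toward the sensor
    ([0.8, 0.3, 0.2], "approach"),
    ([0.1, 0.9, 0.1], "glance"),     # brief head turn toward the display
    ([0.2, 0.1, 0.9], "turn"),
    ([0.7, 0.2, 0.2], "none"),       # hard negative: walked close but never stopped
    ([0.1, 0.8, 0.2], "none"),       # hard negative: looked past the device
]

features = [x for x, _ in training_examples]
labels = [y for _, y in training_examples]

# Any multi-class classifier could be trained on this; scikit-learn shown for brevity.
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(max_iter=1000).fit(features, labels)
print(model.predict([[0.85, 0.25, 0.15]]))  # likely "approach"
```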

Animation: one of the movements in Google's new interaction language. (Image: Google)

"The other approach to mitigate this risk is through UX design," Giusti said. He explained that the system can offer a suggestion rather than trigger a completely automated response, to allow users to confirm the right input rather than act on a potentially inaccurate gesture. 

Still, it's not as if Google devices are going to be misinterpreting these four movements of ours in the immediate future. Bedal pointed out, "What we're working on is purely research. We're not focusing on product integration." And to be clear, Google is sharing this look at the interaction language as part of a video series it's publishing. Later episodes of In the lab with ATAP will cover other topics beyond this new language, and Giusti said it's meant to "give people an inside look into some of the research that we are exploring."

But it's easy to see how this new language can eventually find its way into the many things Google makes. The company's been talking about its vision for a world of "ambient computing" for years, where it envisions various sensors and devices embedded into the many surfaces around us, ready to anticipate and respond to our every need. For a world like that to not feel intrusive or invasive, there are many issues to sort out (protecting user privacy chief among them). Having machines that know when to stay away and when to help is part of that challenge.

Bedal, who's also a professional choreographer, said, "We believe that these movements are really hinting to a future way of interacting with computers that feels invisible by leveraging the natural ways that we move."

She added, "By doing so, we can do less and computers can... operate in the background, only helping us in the right moments." 

Facebook is using first-person videos to train future AIs

One of the obvious goals of almost every computer vision project is to enable a machine to see, and perceive, the world as a human does. Today, Facebook started talking about Ego4D, its own effort in this space, for which it has created a vast new data set to train future models. In a statement, the company said it had recruited 13 universities across nine countries, which together collected 2,200 hours of footage from 700 participants. The footage was shot from the wearer's perspective, making it suitable for training these future AI models. Kristen Grauman, Facebook’s lead research scientist, says this is the largest collection of data explicitly created for this purpose.

The footage centered on a number of common experiences in human life, including social interaction, hand and object manipulation, and predicting what’s going to happen. As far as the social network is concerned, it's a big step toward better computing experiences, which until now have always relied on data sourced from a bystander’s perspective. Facebook said the data sets will be released in November “for researchers who sign Ego4D’s data use agreement.” And next year, researchers from beyond this community will be challenged to better train machines to understand what exactly humans are doing in their lives.

Naturally, there is the angle that Facebook, which now has a camera glasses partnership with Ray-Ban, is looking to improve its own capabilities in the future. You probably already know the perils this kind of surveillance could entail, and why anyone might feel a little leery about the announcement.

Top Reasons to Study Computer Science

Computer science remains one of the most popular fields of study in the world, but don't let that put you off: the field is developing rapidly, and demand for these jobs will always remain high. A computer science degree is a great way to earn your qualification while learning in a real-world environment, gaining skills that will make you a strong candidate once you have your certification. It's an ideal subject to study, and there are many reasons why, so here are a few to help you decide if computer science…

Amazon funds STEM programs in Seattle schools

Perhaps with an eye on the next generation of engineers that might be interested in working on its delivery robots or in coding, Amazon is funding computer science and robotics programs at up to 30 public schools in its Seattle home base. From this f...

Learn The Basics of Computer Science for Just $19

Becoming a computer scientist requires a degree and years of study… right? Think again. You can get up to speed on some of the most important tools used by software developers with The Complete Computer Science Bundle.

Want to learn object-oriented programming? This bundle will walk you through the basics of Java, one of the most versatile and widely used programming languages in existence. You’ll also become a pro at SQL, so you can easily manage the contents of a variety of databases, including MySQL, SQL Server, and more. As for C++, you’ll be speaking this language in no time, thanks to the 75 real-world problems you’ll work through in that course. There are also courses in C, Python, and working with Raspberry Pi and IoT devices.

Become a computer science wizard with The Complete Computer Science Bundle for only $19 in the Technabob Shop. That’s a heck of a lot cheaper than heading to a university and taking 8 computer science courses.

Master Eight Computer Science Skills for Only $25

A computer science degree can open many doors for your career in the tech industry. But, let’s face it, degrees are expensive and time-consuming. What if you could learn all the knowledge you’d get from a computer science degree, without having to shell out thousands of dollars in tuition and end up knee-deep in debt? That’s why The Complete Computer Science Bundle was created.

The courses will walk you through the basics of computer science, starting with the foundational programming language, C. Aspiring developers can easily master scores of languages upon learning C. Next, you’ll dive into Java, one of the most important tools used in web development. You’ll also learn how to make sense of large amounts of complicated data by discovering SQL. You’ll delve into MySQL and SQL Server to understand how to effectively manage and query large datasets and databases. By the end of the 8 courses and 78+ hours of instruction included in this bundle, you’ll feel comfortable using a variety of programming languages to build apps, games, and websites.

Kick off your Black Friday shopping by increasing your smarts. This week, the special price of The Complete Computer Science Bundle is only $25 in the Technabob Shop.

Learn New Development Technologies with the Ultimate Computer Science Bundle

As we rely more and more on technology in our daily lives, the career opportunities for experts in computer science are also taking off. Grow your knowledge and expand your career choices with The Ultimate Computer Science Career Bundle.

To build a career as a software developer, you need to become well-versed in many different technologies. This bundle will get you up to speed with modern web development frameworks like AngularJS and ReactJS, as well as how to test your work with Selenium, Sikuli, and JUnit. You’ll also learn how to use Hadoop, Spark, Storm, and QlikView, so you can work with big data. TensorFlow and the Google Cloud Platform will also be at your disposal, with one of the courses dedicated to helping you deliver machine learning algorithms over the cloud. Plus, you’ll also build your tech interview skills to help you secure the perfect job.

Don’t wait to learn the skills that will help your career take off. The Ultimate Computer Science Bundle is only $39(USD) in the Technabob Shop.

Kick off Your Computer Science Career for Only $39 with This Training Bundle

Want to start a career in computer science? Trying to build your development skills? Learn everything you need with The Complete Computer Science Bundle. It’ll get you going for just $39 (USD).

As a foundation, you’ll learn the programming language C, which will serve as an excellent base for the more advanced languages you’ll learn later on, including C++ and Python. You’ll also conquer Java, another highly versatile object-oriented programming tool that powers everything from online games to back office systems. Expert instruction will also make you a pro at database management systems like MySQL.

Launch the coding career you’ve always wanted with The Complete Computer Science Bundle. It’s yours for just $39 in the Technabob Shop.