GTA 6, The Game Awards and the great indie debate | This week’s gaming news

After a slow month in the world of video game marketing, things are starting to pick up. The past week has given us a first look at the new Fallout TV show, a few release dates and a trailer for a little game called Grand Theft Auto VI — and the Game Awards are still to come. What good timing for us to launch a weekly video game show to dig into the news.


This week’s stories


The Game Awards

The Game Awards will go live on Thursday, December 7, at 7:30PM ET. Expect a few hours of game announcements, new trailers, awkward interviews and musical performances, including one by the fictional band from Alan Wake 2.


Fallout, but on TV!

Amazon dropped the first trailer for its live-action Fallout series — and, man, it sure does look like Fallout. The show is set in Los Angeles 200 years after the nuclear apocalypse, and it stars Yellowjackets actor Ella Purnell, plus Walton Goggins, Aaron Moten and Kyle MacLachlan. It’s heading to Prime Video on April 12, 2024.


GTA VI is coming in 2025

The biggest news item this week, ahead of The Game Awards, was the first official trailer for Grand Theft Auto VI. As of this writing, it has already reached 105 million views on YouTube — a pace usually reserved for only the finest K-pop videos. GTA VI is set in Vice City, it’s coming out in 2025 and I'm sure we’ll hear a lot more about it before then.


What is an indie game?

The meat of this week’s episode focuses on the longstanding debate about what “indie” actually means. One of the titles nominated for Best Independent Game at the Game Awards, Dave the Diver, was commissioned and bankrolled by Nexon, one of the largest video game studios in South Korea. It’s not indie, and its inclusion in this category highlights how little consensus there still is around the definition.

This is kinda my area of expertise — it’s my 13th year as a video game journalist and indie games have always been a core feature of my reporting. I’ve spent a lot of time thinking about what I mean when I say “indie,” so I sat down and formalized this thought process. There are three questions that can help define a game in an indie gray area: Is the team on a mainstream studio’s payroll? Is the game or team owned by a platform holder? Do the artists have creative control? I dig into these questions this week, and discuss how having a publisher isn’t related to the indie label at all.

But when all else fails in the indie debate, there’s one ultimate question to ask: Can this team exist without my support? This is why the distinction matters: The indie label helps to identify the artists that would not exist without game sales, crowdfunding or word-of-mouth support from players. It exists to determine the teams that are truly living and dying on game sales, and it helps players decide where to spend their money. If Dave the Diver didn’t sell well, its team would likely have the chance to try again. If, say, Pizza Tower didn’t sell well, its studio could have folded.

I think this is an important conversation, so give that story a read and let us know in the comments if you think my questions help or just make things more confusing. It’s probably a little bit of both.

Now playing

I’ve been thoroughly enjoying The Cosmic Wheel Sisterhood on Steam Deck — it’s the latest game from Deconstructeam, the indie studio that made The Red Strings Club and Gods Will Be Watching. The Cosmic Wheel Sisterhood is a game about building tarot decks, manipulating elections, betraying a coven of witches and seducing everyone; it’s sexy and well-written, and I highly recommend it. Another game I’m looking forward to is A Highland Song from indie studio Inkle; it just came out this week and I’m excited to dive in.

Let us know in the comments what you’re playing! Also, we still don’t know what to call this weekly video game news show, so leave us some name suggestions, too. Thanks!

This article originally appeared on Engadget at https://www.engadget.com/gta-6-the-game-awards-and-the-great-indie-debate--this-weeks-gaming-news-153051306.html?src=rss

Mercedes CLE 53 4MATIC+ Coupe unveiled


Mercedes-Benz has added a new performance model to its CLE coupe range with the launch of the Mercedes CLE 53 4MATIC+ Coupe. The car comes with a 3.0-litre six-cylinder engine that features an exhaust gas turbocharger and an additional electric compressor, producing 449 horsepower. The progressively forward-sloping front section (“Shark Nose”), in […]

The post Mercedes CLE 53 4MATIC+ Coupe unveiled appeared first on Geeky Gadgets.

Google’s answer to GPT-4 is Gemini: ‘the most capable model we’ve ever built’

OpenAI's spot atop the generative AI heap may be coming to an end as Google officially introduced its most capable large language model to date on Wednesday, dubbed Gemini 1.0. It's the first of “a new generation of AI models, inspired by the way people understand and interact with the world,” CEO Sundar Pichai wrote in a Google blog post.

“Ever since programming AI for computer games as a teenager, and throughout my years as a neuroscience researcher trying to understand the workings of the brain, I’ve always believed that if we could build smarter machines, we could harness them to benefit humanity in incredible ways,” Pichai continued.

The result of extensive collaboration between Google’s DeepMind and Research divisions, Gemini has all the bells and whistles cutting-edge genAIs have to offer. "Its capabilities are state-of-the-art in nearly every domain," Pichai declared. 

The system has been developed from the ground up as an integrated multimodal AI. Many foundational models can essentially be thought of as groups of smaller models stacked in a trench coat, with each individual model trained to perform its specific function as part of the larger whole. That’s all well and good for shallow functions like describing images, but not so much for complex reasoning tasks.

Google, conversely, pre-trained and fine-tuned Gemini “from the start on different modalities,” allowing it to “seamlessly understand and reason about all kinds of inputs from the ground up, far better than existing multimodal models,” Pichai said. Being able to take in all these forms of data at once should help Gemini provide better responses on more challenging subjects, like physics.

Gemini can code as well. It’s reportedly proficient in popular programming languages including Python, Java, C++ and Go. Google has even leveraged a specialized version of Gemini to create AlphaCode 2, a successor to last year's competition-winning generative AI. According to the company, AlphaCode 2 solved twice as many challenge questions as its predecessor did, which would put its performance above an estimated 85 percent of the previous competition’s participants.

While Google did not immediately share the number of parameters that Gemini can utilize, the company did tout the model’s operational flexibility and ability to work in form factors from large data centers to local mobile devices. To accomplish this transformational feat, Gemini is being made available in three sizes: Nano, Pro and Ultra. 

Nano, unsurprisingly, is the smallest of the trio and designed primarily for on-device tasks. Pro is the next step up, a more versatile offering than Nano, and will soon be getting integrated into many of Google’s existing products, including Bard.

Starting Wednesday, Bard will begin using an especially tuned version of Pro that Google promises will offer “more advanced reasoning, planning, understanding and more.” The improved Bard chatbot will be available in the same 170 countries and territories that regular Bard currently is, and the company reportedly plans to expand the new version's availability as we move through 2024. Next year, with the arrival of Gemini Ultra, Google will also introduce Bard Advanced, an even beefier AI with added features.

Pro’s capabilities will also be accessible via API calls through Google AI Studio or Google Cloud Vertex AI. Search (specifically SGE), Ads, Chrome and Duet AI will also see Gemini functionality integrated into their features in the coming months.
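To make the API route concrete, here’s a minimal sketch of what a single-turn call to Gemini Pro over the REST interface might look like. The endpoint path and the `contents`/`parts` payload shape follow Google’s publicly documented Gemini API at launch; treat the exact field names as assumptions, and note that a real request also needs an API key (passed via the `x-goog-api-key` header or a `key` query parameter) before POSTing the payload:

```python
import json

# Sketch only: the generateContent endpoint for the gemini-pro model,
# per Google's public Gemini API docs at launch (verify before use).
API_URL = ("https://generativelanguage.googleapis.com/"
           "v1beta/models/gemini-pro:generateContent")

def build_request(prompt: str) -> str:
    """Return the JSON body for a single-turn text prompt."""
    body = {
        "contents": [
            {"parts": [{"text": prompt}]}
        ]
    }
    return json.dumps(body)

payload = build_request("Explain multimodal pre-training in one sentence.")
# An API key would be attached when POSTing `payload` to API_URL;
# the response nests the generated text under candidates/content/parts.
```

The same request shape applies whether you go through Google AI Studio’s key-based access or Vertex AI, which wraps it in Google Cloud’s standard authentication.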

Gemini Ultra won’t be available until at least 2024, as it reportedly requires additional red-team testing before being cleared for release to “select customers, developers, partners and safety and responsibility experts” for testing and feedback. But when it does arrive, Ultra promises to be an incredibly powerful tool for further AI development.

This article originally appeared on Engadget at https://www.engadget.com/googles-answer-to-gpt-4-is-gemini-the-most-capable-model-weve-ever-built-150039571.html?src=rss

Google announces new AI processing chips and a cloud ‘hypercomputer’

Undoubtedly, 2023 has been the year of generative AI, and Google is marking its end with even more AI developments. The company has announced the creation of its most powerful TPU (formally known as Tensor Processing Unit) yet, Cloud TPU v5p, and an AI Hypercomputer from Google Cloud. "The growth in [generative] AI models — with a tenfold increase in parameters annually over the past five years — brings heightened requirements for training, tuning, and inference," Amin Vahdat, Google's Engineering Fellow and Vice President for the Machine Learning, Systems, and Cloud AI team, said in a release.

The Cloud TPU v5p is an AI accelerator for training and serving models. Google designed Cloud TPUs to work with models that are large, have long training periods, consist mostly of matrix computations and have no custom operations inside the main training loop, the kind typically built in frameworks such as TensorFlow or JAX. Each TPU v5p pod brings together 8,960 chips when using Google's highest-bandwidth inter-chip interconnect.

The Cloud TPU v5p follows previous iterations like the v5e and v4. According to Google, the TPU v5p delivers two times greater FLOPS and is four times more scalable in terms of FLOPS per pod than the TPU v4. It can also train large language models 2.8 times faster and embed dense models 1.9 times faster than the TPU v4.

Then there's the new AI Hypercomputer, an integrated system combining open software, performance-optimized hardware, machine learning frameworks and flexible consumption models. The idea is that this amalgamation will improve productivity and efficiency over handling each piece separately. The AI Hypercomputer's performance-optimized hardware utilizes Google's Jupiter data center network technology.

In a change of pace, Google is providing developers with open software and "extensive support" for machine learning frameworks such as JAX, PyTorch and TensorFlow. The announcement comes on the heels of Meta and IBM's launch of the AI Alliance, which prioritizes open sourcing (and which Google is notably not part of). The AI Hypercomputer also introduces two flexible consumption models, Flex Start Mode and Calendar Mode.

Google shared the news alongside the introduction of Gemini, a new AI model that the company calls its "largest and most capable," and its rollout to Bard and the Pixel 8 Pro. It will come in three sizes: Gemini Pro, Gemini Ultra and Gemini Nano. 

This article originally appeared on Engadget at https://www.engadget.com/google-announces-new-ai-processing-chips-and-a-cloud-hypercomputer-150031454.html?src=rss

Google’s Gemini AI is coming to Android

Google is bringing Gemini, the new large language model it just introduced, to Android, beginning with the Pixel 8 Pro. The company’s flagship smartphone will run Gemini Nano, a version of the model built specifically to run locally on smaller devices, Google announced in a blog post. The Pixel 8 Pro is powered by the Google Tensor G3 chip designed to speed up AI performance.

This lets the Pixel 8 Pro add several smarts to existing features. The phone’s Recorder app, for instance, has a Summarize feature that currently needs a network connection to give you a summary of recorded conversations, interviews, and presentations. But thanks to Gemini Nano, the phone will now be able to provide a summary without needing a connection at all.

Gemini smarts will also power Gboard’s Smart Reply feature. Gboard will suggest high-quality responses to messages and be aware of context in conversations. The feature is currently available as a developer preview and needs to be enabled in settings. However, it only works with WhatsApp currently and will come to more apps next year.

“Gemini Nano running on Pixel 8 Pro offers several advantages by design, helping prevent sensitive data from leaving the phone, as well as offering the ability to use features without a network connection,” wrote Brian Rakowski, Google Pixel’s vice president of product management.

As part of today’s AI push, Google is upgrading Bard, the company’s ChatGPT rival, with Gemini as well, so you should see significant improvements when using the Pixel’s Assistant with Bard experience. Google is also rolling out a handful of AI-powered productivity and customization updates on other Pixel devices, including the Pixel Tablet and the Pixel Watch, although it isn’t immediately clear what they are.


Gemini Nano is the smallest version of Google's large language model, while Gemini Pro is a larger model that will power not just Bard but other Google services like Search, Ads and Chrome, among others. Gemini Ultra, Google's beefiest model, will arrive in 2024 and will be used to further AI development.

Although today’s updates are focused on the Pixel 8 Pro, Google spoke today about AI Core, an Android 14 service that allows developers to access AI features like Nano. Google says AI Core is designed to run on “new ML hardware like the latest Google Tensor TPU and NPUs in flagship Qualcomm Technologies, Samsung S.LSI and MediaTek silicon.” The company adds that “additional devices and silicon partners will be announced in the coming months.”

This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-ai-is-coming-to-android-150025984.html?src=rss

AI joins you in the DJ booth with Algoriddim’s djay Pro 5

Algoriddim’s djay Pro software has always had close ties to Apple and often been at the forefront of new DJ tech, especially on Mac, iOS or iPadOS. Today marks the launch of djay Pro version 5 and it includes a variety of novel features, many of which leverage the company’s AI and a new partnership with the interactive team at AudioShake.

There are several buzzy trademarked names to remember this time around including next-generation Neural Mix, Crossfader Fusion and Fluid Beatgrid. These are the major points of interest in djay Pro 5, with only a passing mention of improved stem separation on mobile, UI refreshes for the library and a new simplified Starter Mode that may cater to new users on the platform. The updates include some intriguing AI-automated features that put the system in control of more complex maneuvers. Best of all, existing users get it all for free as part of their subscription.

AudioShake and Algoriddim have been working on their audio separation tech (like many other companies) and are calling this refreshed version Next-generation Neural Mix. We’re told to expect crisp, clear separation of elements like vocals, harmonies and drums. The tools have also been optimized for mobile devices, as long as they run a supported OS.

Fluid Beatgrid is perhaps one of the easiest features to understand and seems to be an underlying part of the crossfader updates. Anyone who’s used beatgrids knows they’re rarely perfect on first analysis and often take a bit of work to lock in, especially on tracks that need it. Songs with live instrumentation that tend to shift tempo naturally, EDM with varying tempo shifts during breakdowns and even just older dance tracks that tend to meander slightly throughout playback have been pain points. Fluid Beatgrid is supposed to use AI to accommodate those shifts and find the right points to mark.

Crossfader Fusion is where stems, automation and those beatgrids all come into play. There are now a variety of settings for the crossfader beyond the usual curves. One of the highlighted modes is the Neural Mix (Harmonic Sustain) setting. This utilizes stem separation and automated level adjustments as you go from one track to the next.

For those who enjoy cutting and scratching, there are crossfade settings that use automated curves and spatial effects so, for example, the outgoing track’s vocals can drop out automatically as you cut into the next track. The incoming track’s vocals can be highlighted for scratching and, as your mix completes the transition, things are blended together further with AI.

There's even an example provided that shows how you can mix across vastly different BPMs, where the incoming song matches up with a slower outgoing track, but its original tempo is slowly integrated during the transition leaving you with the new faster tempo. 

Existing users should be alerted to the update, but newcomers can find djay Pro version 5 starting today on the App Store. While there will continue to be a free version, the optional Pro subscription costs $7 per month or $50 per year and gives you access to all the features across Mac, iPhone and iPad. The app supports devices running macOS 10.15 or later and iOS 15 / iPadOS 15 or later.

And as a side note, we’re told that djay Pro for Windows was leveled up in September and will get Fluid Beatgrid in an update for that platform as soon as next week. Newer features like Crossfader Fusion are expected in the near future.

This article originally appeared on Engadget at https://www.engadget.com/ai-joins-you-in-the-dj-booth-with-algoriddims-djay-pro-5-150007224.html?src=rss

Micron 3500 NVMe Client SSD designed for gaming content creation


Micron Technology, Inc. has unveiled its latest advancement in storage solutions, the Micron 3500 NVMe SSD. This new solid-state drive is designed to meet the rigorous demands of high-performance computing, offering substantial improvements for a range of applications, from being an SSD for games or business operations to scientific research, and from immersive gaming to […]

The post Micron 3500 NVMe Client SSD designed for gaming content creation appeared first on Geeky Gadgets.

Shielding Your Android Phone from Security Threats


This guide is designed to show you how you can protect your Android Phone from security threats. In today’s technology-driven world, smartphones have become an indispensable part of our lives. We rely on them for everything from communication and entertainment to managing our finances and storing sensitive information. However, this increased reliance also makes our […]

The post Shielding Your Android Phone from Security Threats appeared first on Geeky Gadgets.

EarFun Free Pro 3 Snapdragon Sound ANC wireless earbuds


The Free Pro 3 ANC wireless earbuds are among the first to integrate the Snapdragon Sound Certification with Hi-Res audio capabilities. At the heart of the Free Pro 3 lies the Qualcomm QCC3072 SoC, which ensures stable connectivity and superior audio performance via Bluetooth 5.3. This chipset is a powerhouse, incorporating the latest in audio […]

The post EarFun Free Pro 3 Snapdragon Sound ANC wireless earbuds appeared first on Geeky Gadgets.

Device concept lets you monitor and lessen personal carbon footprint

If you’re conscious about how we’ve been treating Mother Earth over the past few years, decades and centuries, measuring carbon emissions is something you’ve probably looked into. There are a lot of tips out there on how to keep track of your own carbon footprint and slowly lessen it. Doing so may sometimes require a huge lifestyle change, and a visible tool can help us follow through and see how we can help our environment recover.

Designer: YeEun Kim

The Toad House is a device that looks like a cross between an air purifier and a smart speaker but is actually something you can use to monitor how much carbon you’re emitting when you’re at home and make the necessary adjustments. It is inspired by a Korean children’s song about building a new house from an old one, which serves as a metaphor for how we can repurpose wasted energy.

The product description is a bit vague on how the device actually measures your carbon emissions, but it says the interface at the top of the house is where you can check how much you’re already using. This is presumably connected to an app on your smartphone where you set targets and see the values for the various appliances and gadgets in your house. It also says the wasted power from your devices can be stored and then used for wireless charging later on.

This is still a concept for now, but if it eventually becomes a product, it would be interesting to see whether a gadget like this can really affect how you use energy. Eventually, there could also be studies on whether a visual reminder of how much you’re using and leaving in your environment indeed lessens carbon emissions. What’s probably needed now, though, is more education on how people can measure their carbon footprints, at least in their personal use.

The post Device concept lets you monitor and lessen personal carbon footprint first appeared on Yanko Design.