Being able to detect AI content is extremely important in certain situations, such as education or when checking factual content. Recently, a new version of Originality AI was released, version 3, providing a major upgrade to its AI detection process. But how good is it at actually detecting AI content and discerning the differences […]
Google has launched the Gemini AI model across its Workspace applications, including Gmail, Docs, Sheets, and Slides. This rollout is aimed at both personal and business users, with different access plans tailored to each group. Gemini Advanced is a paid subscription that provides access to the ultra version of the Gemini model for individual users, […]
Apple will be unveiling its new Mac lineup next month. This will include new 2024 MacBooks, a new MacBook Air, and possibly a new Mac Mini and more. Now we have a great video from ZONEofTech that gives us more details on what to expect from the 2024 range of Macs. The focus will […]
Google's Gemini chatbot, which was formerly called Bard, has the capability to whip up AI-generated illustrations based on a user's text description. You can ask it to create pictures of happy couples, for instance, or people in period clothing walking modern streets. As the BBC notes, however, some users are criticizing Google for depicting specific white figures or historically white groups of people as racially diverse individuals. Now, Google has issued a statement, saying that it's aware Gemini "is offering inaccuracies in some historical image generation depictions" and that it's going to fix things immediately.
According to the Daily Dot, a former Google employee kicked off the complaints when he tweeted images of women of color with a caption that read: "It's embarrassingly hard to get Google Gemini to acknowledge that white people exist." To get those results, he asked Gemini to generate pictures of American, British and Australian women. Other users, mostly those known for being right-wing figures, chimed in with their own results, showing AI-generated images that depict America's founding fathers and the Catholic Church's popes as people of color.
In our tests, asking Gemini to create illustrations of the founding fathers resulted in images of white men with a single person of color or woman in them. When we asked the chatbot to generate images of the pope throughout the ages, we got photos depicting Black women and Native Americans as the leader of the Catholic Church. Asking Gemini to generate images of American women gave us photos with a white, an East Asian, a Native American and a South Asian woman. The Verge says the chatbot also depicted Nazis as people of color, but we couldn't get Gemini to generate Nazi images. "I am unable to fulfill your request due to the harmful symbolism and impact associated with the Nazi Party," the chatbot responded.
Gemini's behavior could be a result of overcorrection, since AI-powered chatbots and robots trained in recent years have tended to exhibit racist and sexist behavior. In one experiment from 2022, for instance, a robot repeatedly chose a Black man when asked which among the faces it scanned was a criminal. In a statement posted on X, Gemini Product Lead Jack Krawczyk said Google designed its "image generation capabilities to reflect [its] global user base, and [it takes] representation and bias seriously." He said Gemini will continue to generate racially diverse illustrations for open-ended prompts, such as images of people walking their dog. However, he admitted that "[h]istorical contexts have more nuance to them and [his team] will further tune to accommodate that."
This article originally appeared on Engadget at https://www.engadget.com/google-promises-to-fix-geminis-image-generation-following-complaints-that-its-woke-073445160.html?src=rss
Apple’s latest beta update to its operating system, iOS 17.4 beta 4, has been making waves. This iteration brings a suite of enhancements aimed at refining the user experience. For aficionados and casual users alike, understanding these updates can provide a glimpse into the future of mobile technology. Let’s delve into the nitty-gritty of what […]
Intel's relatively new Foundry division — known as Intel Foundry Services until earlier today — has just landed a notable order from a big name. According to Bloomberg and The Wall Street Journal, Microsoft CEO Satya Nadella announced that his company will be tapping into Intel's latest 18A (1.8nm) fabrication process for an upcoming in-house chip design. But given Intel's process roadmap, this means we likely won't be seeing Microsoft's new chip until 2025.
While neither company disclosed the nature of said silicon, Microsoft did unveil its custom-made Azure Maia AI Accelerator and Azure Cobalt 100 CPU server chips last November, with an expected rollout some time "early" this year to bolster its own AI services. The Cobalt 100 is based on Arm architecture, and it just so happens that Intel has been optimizing its 18A process for Arm designs since April last year (it even became an Arm investor later), so there's a good chance that this collaboration may lead to the next-gen Cobalt CPU.
In addition to the usual efficiency improvements as node size decreases, Intel 18A also offers "the industry's first backside power solution" which, according to IEEE's Spectrum, separates the power interconnect layer from the data interconnect layer at the top, and moves the former to beneath the silicon substrate — as implied by the name. This apparently allows for improved voltage regulation and lower resistance, which in turn enable faster logic and lower power consumption, especially when applied to 3D stacking.
In Intel's Q4 earnings call, CEO Pat Gelsinger confirmed that "18A is expected to achieve manufacturing readiness in second half '24." Given that Intel's very own 18A-based processors — "Clearwater Forest" for servers and "Panther Lake" for clients — won't arrive until 2025, chances are it'll be a similar time frame for Microsoft's next chip.
At Intel's event earlier today, the exec shared an extended Intel Foundry process technology roadmap, which features a new 14A (1.4nm) node enabled by ASML's "High-NA EUV" (high-numerical aperture extreme ultraviolet) lithography system. According to AnandTech, this 14A leap may help Intel play catchup after its late EUV adoption for its Intel 4 (7nm) node, though risk production won't take place until the end of 2026.
Intel Foundry is the brainchild of Gelsinger, who launched the department right after he assumed the CEO role in February 2021, as part of his ambitious plan to put Intel up against the likes of TSMC and Samsung in the contract chip-making market. Before Microsoft, Intel Foundry's client list already included MediaTek, Qualcomm and Amazon. The company still aims to become "the second largest external foundry by 2030" in terms of manufacturing revenue, which it believes is achievable as early as this year.
This article originally appeared on Engadget at https://www.engadget.com/microsofts-upcoming-custom-chip-will-be-made-by-intel-063323035.html?src=rss
Google has launched a new suite of artificial intelligence models named Gemma, which includes the advanced Gemma 2B and Gemma 7B. These models are designed to provide developers and researchers with robust tools that prioritize safety and reliability in AI applications. The release of Gemma marks a significant step in the field of AI, offering […]
Samsung said Wednesday that the Galaxy S24’s AI features will arrive on last year’s phones (including foldables) and tablets in late March. In January, Engadget’s Sam Rutherford reported that the AI suite would soon be available on the Galaxy S23 series, Z Fold 5, Z Flip 5 and Tab S9. Today’s announcement makes that device list official while adding the more specific arrival window of late March 2024.
That group of 2023 devices will receive a software update next month with the AI features from the S24 series. Those include communication-based AI tricks like Chat Assist (adjusts message tone and translates messages), Live Translate (real-time voice and text translations) and Interpreter (split-screen translation for in-person conversations).
They’ll also get the productivity-based AI features Circle to Search (search for anything on your screen by drawing a ring around it), Note Assist (formatting, summaries and translations of notes), Browsing Assist (summaries of news articles) and Transcript Assist (transcribe and summarize meeting recordings).
Finally, image-based AI features coming to those devices include Generative Edit (reframe shots, move subjects around or delete and replace them), Edit Suggestion (recommended image tweaks), and Instant Slow-Mo (generate extra frames to transform a standard video into a slow-motion one).
Photo by Sam Rutherford / Engadget
The full list of devices receiving the update starting in March includes the Galaxy S23, Galaxy S23+, Galaxy S23 Ultra, Galaxy S23 FE, Galaxy Z Fold 5, Galaxy Z Flip 5 and Galaxy Tab S9. But Samsung says you can expect more devices to join them later. “This is only the beginning of Galaxy AI, as we plan to bring the experience to over 100 million Galaxy users within 2024 and continue to innovate ways to harness the unlimited possibilities of mobile AI,” Samsung President TM Roh wrote in a press release.
We were mostly impressed with the AI features in our Galaxy S24 Ultra review. “While harnessing AI might not be a super exciting development now that everyone and their grandmother is trying to shoehorn it into everything, it does make the S24 Ultra a more powerful and well-rounded handset,” Engadget’s Sam Rutherford wrote in January.
Although he noticed a few hiccups in the AI tools at launch, he found most of them to be a genuinely helpful complement to the phone’s high-end hardware. “Samsung finally has an answer to the sophisticated features that were previously only available from the Pixel family,” he wrote. “Sure, the S24’s tools aren’t quite as polished as Google’s offerings, but they get you 80 to 90 percent of the way there.”
This article originally appeared on Engadget at https://www.engadget.com/your-older-s23-phone-will-get-samsungs-galaxy-ai-suite-in-late-march-030016691.html?src=rss
Last year, a global survey crowned KeyShot as the “Best Rendering Software,” with 88% of designers overwhelmingly picking it for its incredibly photorealistic rendering capabilities. Now, with KeyShot’s newly unveiled Physics Simulation and Camera Keyframe features, the software is growing even more powerful, bringing real-world physics and camera effects to make your renders pop even more.
I put KeyShot’s Physics Simulation feature to the ultimate test by rendering a dramatic domino chain reaction scene. Setting up the simulation took hardly any time, with incredibly easy controls that took mere minutes to get the hang of. The results were jaw-dropping, if I do say so myself. In this article, I’ll show you how I pulled off one of my most exciting KeyShot rendering experiences yet. I’ll walk you through how I set up the domino scene, what parameters I entered into the Physics Simulation window, and how you can recreate this scene, too. I’ll also share tips and tricks to help you create convincingly real simulations of objects falling, bouncing, and colliding with each other, taking your KeyShot rendering experience to a level like never before.
The entire scene was modeled in Rhino 7, starting with a single domino, creating a spiral curve, and then arraying multiple dominoes along that curve. The dominoes were spaced roughly 2 centimeters apart, ensuring the chain reaction would run smoothly from start to finish. The scene contains a whopping 1,182 dominoes in total, which was a little ambitious considering I was going to render the simulation on a 2022 gaming laptop.
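For context on the array step, here is a small, purely illustrative Python sketch (not the Rhino workflow itself, which used a curve array; the spiral constants and helper names are assumptions for demonstration) that computes domino placements along an Archimedean spiral at roughly 2 cm arc-length spacing.

```python
import math

SPACING = 0.02      # ~2 cm between dominoes
A, B = 0.10, 0.012  # assumed spiral r = A + B * theta (radii in metres)

def domino_placements(count=1182):
    """Yield (x, y, heading) for each domino, stepping ~SPACING along the spiral."""
    theta = 0.0
    for _ in range(count):
        r = A + B * theta
        x, y = r * math.cos(theta), r * math.sin(theta)
        heading = theta + math.pi / 2   # roughly face along the curve tangent
        yield x, y, heading
        # Arc length per radian of an Archimedean spiral is sqrt(r^2 + B^2),
        # so advance theta by the angle that covers ~SPACING of arc.
        theta += SPACING / math.sqrt(r * r + B * B)

if __name__ == "__main__":
    for i, (x, y, h) in enumerate(domino_placements(5)):
        print(f"domino {i}: x={x:.3f} y={y:.3f} heading={math.degrees(h):.1f}°")
```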
Tilt the first domino to help kickstart the physics cycle
To use the simulation feature, import your scene into the latest version of KeyShot (2023-24) (get a free trial here), set the scale, add the materials, and pick the right environment. Before you use the physics feature, however, you need to prime your scene; in this case, that meant tilting the first domino forward so gravity would kick in during the simulation. The Physics Simulation feature can be found in the ‘Tools’ menu at the top. Clicking it opens a separate window with a preview viewport, a set of settings, and an animation timeline at the bottom.
The Physics Simulation feature can be found in the Tools window
To begin with, pick the parts you want to apply physics to (these are the parts that will be influenced by gravity, so don’t pick stuff that remains stationary, like ground objects). The parts you don’t select will still influence your physics because moving objects will still collide with them. Once you’ve chosen what parts you want to move (aka the dominoes), select the ‘Shaded’ option so you can see them clearly in the viewport.
The settings on the left are rather basic but extremely powerful. You start by setting the maximum simulation time (short animations require short simulations; since mine was a long chain reaction, I chose 200 seconds), followed by Keyframes Per Second, which essentially tells KeyShot how detailed or choppy to make your animation (think FPS, but for the simulation). I prefer 25 keyframes per second since I render my animations at 25fps (just to keep the simulation light), but you can bump it up to 60 keyframes per second for a smoother, more detailed simulation. You can then raise your animation FPS to render high frame-rate videos that can be slowed down for dramatic slow motion. Simulation Quality dictates how accurately KeyShot factors the physics in; it defaults to 0.1, but if your simulation looks off, bump it up to a higher value.
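As a quick, illustrative arithmetic sketch of how those settings interact (plain Python, not anything KeyShot exposes; the numbers simply mirror the ones above):

```python
# How the duration and Keyframes Per Second settings translate into solver work,
# and how rendering at a higher frame rate than playback yields slow motion.

sim_duration_s = 200           # maximum simulation time used for the domino run
keyframes_per_second = 25      # matched to a 25 fps render to keep things light

total_keyframes = sim_duration_s * keyframes_per_second
print(f"keyframes to solve: {total_keyframes}")            # 5000

# Rendering a clip at 60 fps and playing it back at 25 fps stretches time:
render_fps, playback_fps = 60, 25
print(f"slow-motion factor: {render_fps / playback_fps:.1f}x")   # 2.4x
```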
The Physics Simulation Window
The remaining settings pertain to gravity and material properties. Gravity is set to Earth’s default of 9.81 m/s²; increasing it makes objects fall faster, while decreasing it makes them float around longer before descending. I set mine to 11 m/s² just to make sure the dominoes fall confidently. Friction determines the amount of drag between two colliding objects: a higher friction causes more surface interference, like dropping a cube onto a ramp made of rubber, while a lower friction allows smooth sliding, like the same cube on a polished metal ramp. To ensure the dominoes don’t stick to each other as if they were made of rubber, I reduced my friction setting to 0.4. Finally, a Bounciness setting determines how two objects rebound when they collide: the lower the value, the less bounce-back; the higher the value, the greater the rebound. Since I didn’t want my dominoes bouncing off each other, I set it to a low 0.01. Once you’re done, hit the Begin Simulation button and watch the magic unfold.
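To make gravity, friction, and bounciness a little more concrete, here is a minimal toy sketch in plain Python. It is not KeyShot’s solver or API; the contact model is a deliberately simplified assumption, but the values mirror the settings above and show how each one shapes a single falling object’s motion at 25 keyframes per second.

```python
# A toy point-mass "simulation": gravity pulls the object down, bounciness
# (restitution) controls the rebound at ground contact, and friction bleeds
# off sliding speed. Assumed model for illustration only.

GRAVITY = 11.0        # m/s^2, slightly above Earth's 9.81, as used above
FRICTION = 0.4        # fraction of horizontal speed lost per ground contact
BOUNCINESS = 0.01     # fraction of vertical speed kept after impact
KEYFRAMES_PER_SECOND = 25
DT = 1.0 / KEYFRAMES_PER_SECOND

def simulate(drop_height=0.05, vx=0.2, duration=2.0):
    """Drop a point mass and print one line per keyframe."""
    y, vy = drop_height, 0.0
    x = 0.0
    for i in range(int(duration * KEYFRAMES_PER_SECOND)):
        vy -= GRAVITY * DT          # gravity accelerates the object downward
        x += vx * DT
        y += vy * DT
        if y <= 0.0:                # ground contact
            y = 0.0
            vy = -vy * BOUNCINESS   # almost no rebound at 0.01
            vx *= (1.0 - FRICTION)  # friction slows the slide on each contact
        print(f"t={i * DT:5.2f}s  x={x:6.3f}m  y={y:6.3f}m")

if __name__ == "__main__":
    simulate()
```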
If you aren’t happy with your simulation, you can stop it midway and troubleshoot. Usually, tinkering with the settings helps achieve the right result, but here’s something I learned, too: in the simulation, bigger objects fall more slowly than smaller ones, so playing around with the size and scale of your model can really affect the outcome. If, however, you’re happy with your simulation (you can run through it in the video timeline below), just hit the blue ‘OK’ button, and you’ve successfully created your first physics simulation!
The simulation then becomes a part of KeyShot’s Animation timeline, and you can then play around with camera angles and movements to capture your entire scene just the way you visualized it. I created multiple clips of my incredibly long domino chain reaction (in small manageable chunks because my laptop crashed at least 8 times during this) and stitched them together in a video editing app.
Comparing KeyShot and Blender’s Physics Control Panels
The Physics Simulation feature in KeyShot 2023-24 is incredibly impressive. For starters, it’s a LOT easier to use than other software like Blender, which can feel a little daunting with the hundreds of settings it asks you to choose from. Figuring out physics simulation in KeyShot takes just a few minutes (although the actual simulation can take a while if you’re running something complex), making an already powerful rendering tool feel even more limitless!
That being said, there’s some room for growth. Previous experiments with the simulation tool saw some strange results – falling objects sometimes ended up choosing their own direction, making the simulation feel odd (I made a watch fall down and the entire thing disassembled and scattered in mid-air instead of falling together and breaking apart on impact). Secondly, sometimes objects can go through each other instead of colliding, so make sure you tinker with quality settings to get the perfect result. Thirdly, you can’t choose different bounciness values for different objects in the same simulation just yet, although I’m sure KeyShot is working on it. Finally, it would be absolutely amazing if there were a ‘slow-motion’ feature. The current way to do this is to bump up the keyframe rate and bring down the gravity, but that can sometimes cause objects to drift away after colliding instead of falling downwards in slow motion.
So there you have it! You can use this tutorial to animate your own domino sequence, too, or better still, create a new simulation based on your own ideas! If you do, make sure to participate in the 2024 KeyShot Animation Challenge to stand a chance to win some exciting prizes. Hurry! The competition ends on March 10th, 2024!