Drop XDA Versa Keycaps Are Subtle yet Colorful

Gaming keyboards in the 1990s didn’t have all the RGB lighting, macro keys, and other cool (albeit arguably unnecessary) design elements found in many gaming keyboards produced today. If you want to return to simpler times with a retro classic style — without giving up the superior comfort and responsiveness of a modern keyboard — the Drop XDA Versa Keycaps Set is a stylish blast from the past, featuring light gray alpha coloring and a few pastel-colored keys to mix things up.

Designer: Drop

As long as your keyboard uses Cherry MX switches or clones, you can fit pretty much any compatible keycap set onto it. You can even mix and match the Drop XDA Versa keys with festive kits like Dwarf Factory’s ArtiSANTA. Depending on your needs, you can get the perfect layout across the Drop XDA Versa’s three kits: Base, Mini Base, and Novelties. At a glance, the Base kit is geared toward full-sized and TKL keyboards, whereas the Mini Base kit is made for compact 60% keyboards.

Both the Base and Mini Base kits come with every keycap you’ll need for a complete set, though the Novelties kit expands your color palette with a slew of pastel-colored keycaps with unique functions. Each keycap is “made from durable, dye-sublimated PBT and shaped in the short, uniform, and ultra-navigable XDA profile.”

That means this XDA keycap set is excellent for gamers who want a solid sense of feedback and an overall consistent feel between the keys. These keycaps are made for comfort just as much as they’re made for retro style that’s a bit more low-key than many of the stock keycaps shipping in gaming keyboards these days.

You can buy the Drop XDA Versa Keycap Set for a sizable discount. Right now, the Base kit costs $49 (down from $69), the Mini Base kit $39 (down from $59) and the Novelties kit $29 (down from $35). Given that these make a fantastic stocking stuffer for any PC gamer in your life, you may want to take this opportunity to grab these low-key keycaps before the sale ends.

The post Drop XDA Versa Keycaps Are Subtle yet Colorful first appeared on Yanko Design.

Apple reaches $25M settlement with the DOJ for discriminating against US residents during hiring

Apple will pay $25 million in backpay and civil penalties to settle allegations that it favored visa holders and discriminated against US citizens and permanent residents during its hiring process, the Department of Justice said in a statement on Thursday. This is the largest amount that the DOJ has collected under the anti-discrimination provision of the Immigration and Nationality Act.

At the heart of the issue is a federal program administered by the Department of Labor and the Department of Homeland Security called the Permanent Labor Certification Program (PERM). PERM allows US employers to file for foreign workers on visas to become permanent US residents. As part of the PERM process, employers are required to prominently advertise open positions so that anyone can apply to them regardless of citizenship status.

The DOJ said that Apple violated these rules by not advertising PERM positions on their recruiting website, and also made it harder for people to apply by requiring mailed-in paper applications, something that it did not do for regular, non-PERM positions. As a result, a DOJ investigation found that Apple received few or no applications for these positions from US citizens or permanent residents who do not require work visas.

As part of the settlement, Apple will pay $6.75 million in civil penalties and set up an $18.25 million fund to pay back eligible discrimination victims, the DOJ's statement said.

Apple disagreed with the DOJ’s characterization. “Apple proudly employs more than 90,000 people in the United States and continues to invest nationwide, creating millions of jobs,” a company spokesperson told CNBC. “When we realized we had unintentionally not been following the DOJ standard, we agreed to a settlement addressing their concerns. We have implemented a robust remediation plan to comply with the requirements of various government agencies as we continue to hire American workers and grow in the US.”

This article originally appeared on Engadget at https://www.engadget.com/apple-reaches-25m-settlement-with-the-doj-for-discriminating-against-us-residents-during-hiring-225857162.html?src=rss

Tumblr’s staff is reportedly reduced to a skeleton crew

Tumblr, a flailing social media site from a bygone era, may be run by a skeleton crew from now on. An alleged internal memo from parent company Automattic has made the rounds on social platforms (including Threads), stating it has “not gotten the expected results from our effort.” The decision appears to mark a sharp U-turn from a separate leak this summer, claiming Automattic was building a TikTok-like algorithmic feed into the aging site.

Although this doesn’t quite appear to be the end of the road for Tumblr, the note doesn’t sound promising for the platform’s future. It says “the majority of the 139 people” will switch to other Automattic projects, leaving a barebones gang of Trust & Safety and support workers to oversee Tumblr’s smoldering embers. Given how many brutal layoffs we’ve seen this year, handling the transition in a way that avoids job losses could be a silver lining.

Automattic, the company behind the blogging tool WordPress, acquired Tumblr in 2019 from Verizon, which landed the platform through its purchase of Yahoo! (Engadget’s parent company) in 2017. It likely didn’t help that Tumblr’s ownership turned into a game of musical chairs, and none of its owners seemed to find the right formula to get the microblogging network back on its feet. (Its controversial ban on adult content likely had something to do with that.)

“We are at the point where after 600+ person-years of effort put into Tumblr since the acquisition in 2019, we have not gotten the expected results from our effort, which was to have its revenue and usage above its previous peaks,” the alleged memo reads. After throwing in clichés about climbing mountains and how it’s better to try and fail than not to try at all, the note claims the team’s next step is to “reflect and decide where else we should concentrate our energy together.”

Engadget reached out to Automattic for comment and confirmation but didn’t immediately receive a response. We’ll update this article if we hear back.

In addition to WordPress, Automattic’s other brands include the journaling app Day One, the e-commerce plugin WooCommerce, Gravatar and the note-taking app SimpleNote.

This article originally appeared on Engadget at https://www.engadget.com/tumblrs-staff-is-reportedly-reduced-to-a-skeleton-crew-215853169.html?src=rss

OpenAI wants to work with organizations to build new AI training datasets

OpenAI is rolling out a new partnership program to collect datasets from third parties that it intends to use to train its AI models. The initiative, OpenAI Data Partnerships, will seek large-scale private and public information that it says is “not already easily accessible online to the public.” The company says the data it will collect doesn't necessarily have to be quantitative or in text formats — the program will also accept images, audio or video.

Notably, the company says it's on the lookout for data on “any topic” and in “any language” so long as it “expresses human intention,” which it likens to long-form essays or transcribed conversations. Human-centric data collected by OpenAI is expected to help the company improve tools like its automatic speech recognition technology which is used to transcribe spoken words. This initiative also lines up with ChatGPT’s recent expansion to support voice queries to engage with users in a conversational manner. Exposing its AI models to more information that teaches it how to hold up human-like conversations will only further improve this feature and other tools that will follow in function.

The model testing conducted throughout the data partnership program will also naturally expand the capabilities of OpenAI’s consumer-facing GPT-4 Turbo, which has been updated to provide users with more complex and meaningful responses. OpenAI says it has already started working with interested organizations, including government bodies like Iceland’s. Through curated datasets, OpenAI says it’s working to improve GPT-4’s ability to comprehend queries made in Icelandic.

If a private or public organization wants to participate in the program, a representative can submit a form on the company’s website detailing the type and size of the data they intend to share. There are two pathways for datasets. The first is the Open-Source archive, which is suited to datasets relevant to training language models; submissions made to it will be public for anyone to use. Alternatively, a company can submit information through the private dataset pathway, which feeds into training OpenAI’s proprietary AI models, including, the company says, its “foundation models” and “fine-tuned and custom models.” This route is recommended for companies or institutions that want to keep their data confidential. That said, OpenAI says it is not looking for datasets that contain sensitive or personal information.

ChatGPT has already set records for its soaring user base. It has about 100 million weekly active users around the world, meaning privacy will only continue to be a focal point for the tool. Previously, Samsung employees were put in the hot seat for leaking sensitive data to the AI model. While OpenAI claims it does not use data generated by its API to train its models unless a user explicitly submits information through an opt-in form, all eyes will be on how the company handles the data collected through this initiative, especially the private datasets.

This article originally appeared on Engadget at https://www.engadget.com/openai-wants-to-work-with-organizations-to-build-new-ai-training-datasets-214548902.html?src=rss

Retro-inspired LOFREE TOUCH PBT wireless mouse comes with swappable keycaps for matching workspace theme

The good old mouse has evolved into an accessory that can improve your productivity exponentially — that is, if you get the hang of using all the buttons and tune the customization options to suit hand ergonomics and muscle memory. Take, for example, the Logitech MX Master 3S, Corsair SCIMITAR RGB ELITE or Logitech G305. In an editor’s hands, any one of these mice can be a potent tool.

The shape of these high-end mice has also evolved into a much more modern aesthetic, accounting for the position of the hand and the multiple buttons. But there’s always a time when you want to experience the retro charm of the good old PC accessory without giving up on modern functions.

Designer: LOFREE

The old-school LOFREE TOUCH PBT wireless mouse comes with swappable buttons to change the look if you get bored with the existing one, ensuring you can match it to the setup of your desk or room. The non-glossy, non-sticky and skin-friendly texture of the mouse keys — MB1, MB2 and the upper case — is soft to the touch. The mouse is loaded with the PAW3805 sensor outputting 4,000 DPI for use on glass or any other surface where an ordinary mouse simply doesn’t work, which also makes it viable for high-end gaming when you’re not working.

The rechargeable mouse runs for 75 hours before requiring another charge. Everything set aside, the ’80s-inspired look of this mouse is what grabs the attention more than anything else. Add to that the ability to replace the PBT keycaps on top of the Cherry MX-style stems, and you’ve got an accessory that’ll draw you to the desk without fail. The beige-colored mouse, weighing 106 grams, is a tad on the heavier side, which can be a deal breaker for finicky users. Priced at $69, the retro-inspired mouse is a unique one to add to the collection.

The post Retro-inspired LOFREE TOUCH PBT wireless mouse comes with swappable keycaps for matching workspace theme first appeared on Yanko Design.

A neural network can map large icebergs 10,000 times faster than humans

One of the major benefits of certain artificial intelligence models is that they can speed up menial or time-consuming tasks — and not just to whip up terrible "art" based on a brief text input. University of Leeds researchers have unveiled a neural network that they claim can map an outline of a large iceberg in just 0.01 seconds.

Scientists are able to track the locations of large icebergs manually. After all, one that was included in this study was the size of Singapore when it broke off from Antarctica a decade ago. But it's not feasible to manually track changes in icebergs' area and thickness — or how much water and nutrients they're releasing into seas.

"Giant icebergs are important components of the Antarctic environment," Anne Braakmann-Folgmann, lead author of a paper on the neural network, told the European Space Agency. "They impact ocean physics, chemistry, biology and, of course, maritime operations. Therefore, it is crucial to locate icebergs and monitor their extent, to quantify how much meltwater they release into the ocean.”

Until now, manual mapping has proven to be more accurate than automated approaches, but it can take a human analyst several minutes to outline a single iceberg. That can rapidly become a time- and labor-intensive process when multiple icebergs are concerned.

The researchers trained an algorithm called U-Net using imagery captured by the ESA's Copernicus Sentinel-1 Earth-monitoring satellites. The algorithm was tested on seven icebergs. The smallest had an area roughly the same as Bern, Switzerland, and the largest had approximately the same area as Hong Kong.

With 99 percent accuracy, the new model is said to surpass previous attempts at automation, which often struggled to tell the difference between icebergs and sea ice and other features. It's also 10,000 times faster than humans at mapping icebergs.
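Accuracy figures like this are typically computed pixel by pixel against a manually drawn outline. As a minimal sketch — using small hypothetical masks in place of real Sentinel-1 imagery — here is how pixel accuracy and intersection-over-union, two standard segmentation metrics, are calculated:

```python
import numpy as np

# Hypothetical 4x4 binary masks (1 = iceberg pixel, 0 = open water)
truth = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
pred  = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1],
                  [0, 1, 0, 0],   # one iceberg pixel missed by the model
                  [0, 0, 0, 0]])

# Pixel accuracy: fraction of all pixels classified correctly
accuracy = (truth == pred).mean()

# Intersection-over-union, which penalizes both missed and spurious pixels
intersection = np.logical_and(truth, pred).sum()
union = np.logical_or(truth, pred).sum()
iou = intersection / union

print(f"accuracy={accuracy:.4f} iou={iou:.4f}")
# prints accuracy=0.9375 iou=0.8571
```

Note that pixel accuracy can look flattering when most of a scene is open water, which is one reason segmentation work usually reports IoU alongside it.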

"Being able to map iceberg extent automatically with enhanced speed and accuracy will enable us to observe changes in iceberg area for several giant icebergs more easily and paves the way for an operational application," Dr. Braakmann-Folgmann said.

This article originally appeared on Engadget at https://www.engadget.com/a-neural-network-can-map-large-icebergs-10000-times-faster-than-humans-212855550.html?src=rss
