
The best products and ideas at CES 2024


After six grueling days, Dean Takahashi of GamesBeat picks the 19 best products, projects and ideas at CES 2024.
The CES 2024 tech trade show is finally over in Las Vegas, after a grueling six days for me and crowds that numbered perhaps 130,000.
I walked around a lot to find the coolest tech. At or ahead of CES 2024, I attended around 80 press events, interviews and sessions. I walked 46.78 miles (or 105,407 steps) over six days, compared with 38.81 miles (or 87,447 steps) over five days a year ago. My feet hurt and my back is sore.
I wrote 67 stories ahead of and during CES, and I moderated one panel. I recorded 96 sessions, interviews and product descriptions, and I have a lot more stories to write. This story is about the coolest tech I saw in Las Vegas. If you had some FOMO from skipping the show, maybe this list will fill you in on the good stuff. There are 19 products and projects from CES here, as well as updates on products I’ve seen in years past.
I don’t know about you, but I felt like this was an amazing year for new technology, triggered by refinements of old ideas as well as the gift of generative AI.
Now it’s time to analyze and make some sense of this. I hope you like these ideas, whether they are concepts or finished products. Here’s my list from last year at CES 2023 and the year before at CES 2022.
This year featured nearly 3,500 exhibitors, up from 3,000 in 2023 and down from 4,000 (in-person) in 2020. Despite occasional warnings from security guards, I dragged my roller bag all over the place. Here are the things that caught my eye.

Supernal S-A2 eVTOL air taxi
Hyundai’s Supernal showed off its latest design for an air taxi at the CES 2024 tech trade show in Las Vegas this week.
Air taxis and flying cars have been making a splash at CES for years now, but it’s encouraging to see a big company like Hyundai taking it seriously and moving forward with designs. I thought it was the most interesting thing I saw at CES this year.
The Supernal air taxi is an electric vertical takeoff and landing aircraft, or eVTOL. The S-A2 is a second-generation design. South Korea’s Hyundai unveiled its first eVTOL, the S-A1, at CES in January 2020.
The aircraft will carry four passengers. It can cruise at 120 miles per hour, and it has eight rotors that can rotate: four in front of the wing and four in back to balance the aircraft during a vertical takeoff. The engines will be quiet at 45 decibels during flight and 65 during takeoff or landing. That’s as quiet as a dishwasher. It can go from zero to 60 miles per hour during takeoff.
This year, the company showed off a VertiPort, a concept for the places in Los Angeles where you could catch an air taxi. As a division of Hyundai Motor Group, the company is not just developing an aircraft, Neil Marshall, head of manufacturing and program management at Supernal, told me. It’s creating a mobility solution. The all-electric air taxi will have a range of 25 miles to 40 miles, with each trip taking just minutes instead of an hour in a car.
“It’s a new form of taxi, a new form of mobility,” Marshall said.
The trips will be short hops to places like downtown Los Angeles or Woodland Hills, long enough to skirt the traffic of a 40-minute drive, but likely needing a ride-sharing solution to cover the last mile from the VertiPort to the rider’s final destination.
Rather than making a flying car, Marshall thinks pairing the rideshare car with the air taxi makes for a better mobility solution, given the infrastructure that’s already in place. At first, you’ll make reservations for air taxis. But at some point you’ll just be able to show up and catch flights that depart maybe 10 or 20 minutes apart.
Marshall also showed off the proposed control room where air taxi traffic would be monitored and managed.
The air taxis need an air traffic solution, and they have to stay out of the commercial airspace around airports like LAX. So they will follow the routes of roads, making it easy to stay on course and out of the way of other air taxis. A flight from Universal Studios to Woodland Hills could take 38 minutes, flying over the roads, while the same trip could take more than an hour in a car at peak times.
Supernal is talking to the FAA and NASA about how to establish the routes properly. The air taxis will have to share the air with devices like drones. Marshall ultimately believes an air traffic control system that is not dependent on human controllers will be the best solution.
“It’s not just about developing an aircraft,” Marshall said. “It’s about developing a mobility solution. We’re looking at the first mile and the last mile.”
Air taxis going one way will be able to fly at one elevation, and those going the other way could fly at a different elevation, likely around 1,500 feet.
It’s not clear how much each ride will cost yet. But Marshall said service could start as early as 2028.

Clinatec brain-computer interface
When it comes to a brain-computer interface, Elon Musk’s Neuralink isn’t the only show in town. Clinatec also showed off its research at CES Unveiled during CES 2024. The research organization brings together multidisciplinary experts to treat neurological diseases and restore motor functions for those who have had brain or spinal accidents.
Clinatec is a biomedical research center at the Polygone Scientifique in Grenoble, France, with biologists, nanotechnology experts and more. Clinatec was developed by the research division of France’s CEA, along with Inserm and Université Grenoble Alpes.
The biomedical research center has developed a minimally invasive brain implant that rests on the surface of the brain and helps restore electrical communication in areas of the brain that have been damaged, said Abdelmadjid Hihi, deputy director for scientific affairs and partnerships at Clinatec, in an interview with me.
The cerebral implant records neurological activity and mimics the signals that make muscles work, said Hihi, who has a doctorate in biological sciences from Lausanne University.
“We’ve been working on brain implant technology,” said Hihi. “The concept is that it is possible to use brain activity that corresponds to movement intent to help people who have severe movement impairments.”
For instance, someone with a spinal cord injury may not be able to move. Clinatec tries to record brain activity with a biocompatible system whose electrodes can send signals out of the brain. Those signals are then decoded in real time by software based on machine learning algorithms, Hihi said.
“And then this information that was decoded is routed into machines that can help people walk again, or grasp again, or hear again,” he said. “We basically use this system initially with people who had accidents.”
He showed a video of an injured man taking steps with the help of an exoskeleton, and the man is able to do this with signals stimulating both the brain and the muscles. Clinatec has also worked with people who are paraplegics, giving them stimulation for the muscles to make them function.
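To make that pipeline a little more concrete, here is a minimal sketch of the kind of decoding loop Hihi describes: windowed electrode recordings are reduced to features, a trained model maps them to a movement intent, and the intent drives an actuator such as an exoskeleton. The channel counts, feature choice and linear decoder below are my own illustrative assumptions, not Clinatec’s actual system.

```python
# Illustrative sketch of a brain-computer interface decoding loop.
# All names, shapes and the linear decoder are hypothetical; Clinatec's
# real system uses its own implant hardware and trained models.
import numpy as np

N_ELECTRODES = 64      # assumed channel count for the implant
WINDOW_SAMPLES = 200   # assumed samples per decoding window
COMMANDS = ["rest", "step_left", "step_right", "grasp"]

# A real decoder would come from supervised calibration sessions;
# a random linear model stands in for it here.
rng = np.random.default_rng(0)
weights = rng.normal(size=(len(COMMANDS), N_ELECTRODES))

def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce an (electrodes x samples) window to per-channel power."""
    return np.log1p(np.mean(window ** 2, axis=1))

def decode(window: np.ndarray) -> str:
    """Map one window of neural activity to a movement command."""
    scores = weights @ extract_features(window)
    return COMMANDS[int(np.argmax(scores))]

def read_window() -> np.ndarray:
    """Stand-in for streaming samples from the implant's electrodes."""
    return rng.normal(size=(N_ELECTRODES, WINDOW_SAMPLES))

def send_to_exoskeleton(command: str) -> None:
    """Stand-in for actuating the exoskeleton or muscle stimulator."""
    print(f"actuate: {command}")

for _ in range(3):  # a real system loops continuously, in real time
    send_to_exoskeleton(decode(read_window()))
```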
The work has been going on for more than 10 years, and the first patient got an implant more than six years ago. France supports the research through a grant, and in the future Hihi said the group wants to work with those who have had strokes and need rehab.
The brain accepts the implant because it consists of biocompatible materials that are tested before implantation. It doesn’t go inside the brain tissue, but sits on top of it, which reduces inflammation and fibrosis. It makes me think of William Gibson’s sci-fi short story Johnny Mnemonic.

Dexcom Stelo glucose monitor
Back in 2020, I wore a Dexcom G6 Pro glucose monitor to see what it would be like to have to monitor my blood sugar as if I had diabetes. I am on the border of that condition, but like many millions of Americans I’m at risk if I start to let my weight get out of control. Now Dexcom has something new in the Stelo.
My interest came out of being a tech narcissist. I’ve been interested for years in how technology can deliver a “quantified self,” or data about myself and how I live. Wearables measure sleep and steps, but what do they really tell us? In the case of a glucose monitor, it tells you which kinds of foods can send your blood sugar into high or low states. For those with Type 1 diabetes, this is a life-or-death matter requiring frequent insulin injections. They have to make sure their blood sugar stays within an acceptable band or they risk fainting or even death.
When I agreed to test the monitor, I realized that I was going to get a glimpse inside my body that most people never get. I could look at what I was doing and what I was eating and figure out the effect on my blood sugar.
I was astounded to learn that eating a big pile of spaghetti was one of the things that could push my blood sugar off the charts, even putting me above the 180 milligrams per deciliter threshold that doctors consider high.
It’s not just the highs that can be bad news. When I went for a jog, I found my glucose level dropped so much that it dipped below 70 milligrams per deciliter and triggered alerts, as if I were in danger of fainting. The lows, known as hypoglycemia, can lead to hunger, trembling, a racing heart, nausea and sweating. When I was out of range, I got a notification on my iPhone.
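Those two numbers are essentially the whole alert logic. Here is a minimal sketch of how a companion app might classify readings against the 70 and 180 milligrams per deciliter bounds mentioned above; the function and message names are mine, not Dexcom’s software.

```python
# Toy classifier for glucose readings, using the 70-180 mg/dL band cited
# above. Names and messages are illustrative, not Dexcom's actual code.
HYPO_THRESHOLD_MG_DL = 70    # below this: hypoglycemia alert
HYPER_THRESHOLD_MG_DL = 180  # above this: high blood sugar alert

def classify_reading(mg_dl: float) -> str:
    """Return an alert label for one glucose reading."""
    if mg_dl < HYPO_THRESHOLD_MG_DL:
        return "LOW: risk of hypoglycemia (hunger, trembling, fainting)"
    if mg_dl > HYPER_THRESHOLD_MG_DL:
        return "HIGH: above the threshold doctors consider high"
    return "in range"

# A jog and a big pile of spaghetti, roughly as described in the text:
for reading in (65, 110, 195):
    print(reading, "mg/dL ->", classify_reading(reading))
```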
What’s different about the Dexcom Stelo is that it is designed for Type 2 diabetes patients who don’t need to take insulin. Type 2 diabetes accounts for 90% or more of diabetes cases, yet these patients don’t always have a clear idea of which foods spike their blood sugar or what to do about it.
Glucose monitors measure the level of sugar in your blood. For diabetic patients, this is critical. Diabetes affects more than 34 million Americans and is the seventh leading cause of death in the United States. The traditional standard of care for glucose monitoring has been a fingerstick meter, which is painful, as some patients need to test their blood by pricking their fingers up to 12 times a day.
In a patient with Type 1 diabetes, the pancreas can’t produce the hormone insulin, which helps the body absorb sugar and remove it from your bloodstream. For Type 2 diabetes patients, their body may not be able to produce or process insulin effectively. Either condition means people have to inject themselves with insulin to take their glucose levels down. But they can only do this if they can accurately measure their blood sugar levels in real time, something that hasn’t been possible or convenient until recently.
Stelo will be a wearable device that sends data to your smartphone and has a sensor that lasts for 15 days.
The pricing will depend on insurance, but the company made the monitor so that it will be affordable for people who don’t have good coverage. It will have a cash pay option for those who don’t have insurance coverage. The good thing is that Moore’s Law is bringing down the cost of these devices.
Stelo is in the midst of getting FDA clearance, and it could launch later this year.

Nvidia and Convai show AI characters in a ramen shop
Nvidia and Convai showed off another cool demo with AI characters in a ramen shop. The owner of the shop, Jin, is a non-player character with a realistic-looking avatar that you might find in any video game. In the past, Nvidia showed off only a limited demo of this scene. But this time I got a 20-minute demo where I could test the generative AI in just about any way I wanted.
Previously, Nvidia and Convai created an amazing demo of the tech with a realistic ramen shop and its characters Jin and Kai. Now Convai showed off the next evolution in AI-driven character technology, which gives characters the ability to hold realistic conversations with players in games. This conversation showed off four different dynamic AI models.
Jin is powered by generative AI, as is his friend Nova, a cybersecurity expert who is a customer in the ramen shop. They were joined by me, playing as the character Kai. I had an open-ended conversation with them. Seth Schneider of Nvidia ACE and Purnendu Mukherjee of Convai guided me.
It starts with microservices from Nvidia ACE, beginning with ASR, or automatic speech recognition, which takes your voice and turns it into text. Nvidia feeds that text into a large language model, which generates the response for the character. The response then has to be piped into a text-to-speech model, and that audio has to be fed into a lip-sync program that matches the character’s lip movements to the voice. All of this has to happen in real time, and I only detected a slight delay, perhaps a second long.
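As a rough mental model, the flow Schneider walked me through looks something like the loop below. It’s a minimal sketch with stub functions standing in for the real ASR, large language model, text-to-speech and lip-sync services; none of these names are Nvidia’s or Convai’s actual APIs.

```python
# Hypothetical sketch of the ASR -> LLM -> TTS -> lip-sync chain described
# above. Every function body is a stand-in stub, not Nvidia's or Convai's API.
from dataclasses import dataclass, field

@dataclass
class NPCResponse:
    text: str                                     # what the character says
    audio: bytes                                  # synthesized speech
    visemes: list = field(default_factory=list)   # mouth shapes for lip sync

def transcribe(player_audio: bytes) -> str:
    """Stand-in for automatic speech recognition (voice to text)."""
    return "Can you recommend some ramen?"        # stub transcript

def generate_reply(transcript: str, persona: str) -> str:
    """Stand-in for the large language model that writes the response."""
    return f"As {persona}, I'd suggest the shio ramen for something light."

def synthesize_speech(text: str) -> bytes:
    """Stand-in for the text-to-speech model."""
    return text.encode("utf-8")                   # stub audio

def lip_sync(audio: bytes) -> list:
    """Stand-in for an audio-to-facial-animation step like Audio2Face."""
    return ["viseme_a", "viseme_m"]               # stub mouth shapes

def respond(player_audio: bytes, persona: str) -> NPCResponse:
    # Each stage feeds the next; per the demo, the whole chain has to
    # finish in about a second for the conversation to feel live.
    text = generate_reply(transcribe(player_audio), persona)
    audio = synthesize_speech(text)
    return NPCResponse(text=text, audio=audio, visemes=lip_sync(audio))

print(respond(b"...", "Jin, the ramen shop owner").text)
```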
Nova and Jin could have talked among themselves and I could have listened in to that conversation as a fly on the wall. But we skipped that part of the demo when it had a hiccup. Jin did say that making ramen was an art. So I just started asking questions.
The latest from Convai empowers users to engage NPCs dynamically, seeking real-time responses and actions. Characters now execute complex tasks, fetch items, and initiate interactions with other NPCs. Even during idle moments in the game, Convai-enabled NPCs engage with each other, expanding the spectrum of their interactions and perceptions within the virtual world.
Designed to integrate seamlessly with game engines like Unreal Engine and Unity, Convai’s latest iteration redefines the behavior and responsiveness of AI-driven characters within interactive environments.
These NPCs now possess enhanced spatial awareness, dynamic character actions, and the ability to engage in meaningful interactions with their environment and other characters. Beyond scripted responses, Convai-powered characters navigate complex instructions, showcase emotional awareness, and participate in organic interactions. It has the potential to reshape the landscape of gaming narratives.
In a strategic partnership, Convai integrated Nvidia Avatar Cloud Engine (ACE) modules, specifically Nvidia Audio2Face and Riva. This integration enhances the realism, believability, and responsiveness of character interactions.
Nvidia ACE is a suite of technologies leveraging generative AI to bring digital avatars to life. Audio2Face facilitates industry-leading lip sync and facial animations, while Riva delivers high-quality, low-latency speech-to-text capabilities.
Convai’s technology enables developers to swiftly customize NPC backstories, personalities, and knowledge, granting NPCs the ability to respond uniquely and adapt dynamically. Through an intuitive playground or programmatically via the API, creators witness their characters embody spatial awareness and execute a myriad of actions, offering an unprecedented level of dynamic conversation.
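To give a flavor of what “programmatically via the API” could look like, here is a hypothetical sketch of defining a character like Jin with a backstory, personality and knowledge. The endpoint, payload fields and auth scheme are placeholders I made up; Convai’s real interface is documented by the vendor.

```python
# Hypothetical sketch of configuring an NPC's backstory programmatically.
# The URL, payload fields and auth header are placeholders, NOT Convai's
# documented API; consult the vendor's docs for the real interface.
import requests

API_URL = "https://api.example.com/v1/characters"  # placeholder endpoint
API_KEY = "YOUR_KEY_HERE"                          # placeholder credential

jin = {
    "name": "Jin",
    "backstory": "Owner of a small ramen shop; believes making ramen is an art.",
    "personality": ["warm", "talkative", "proud of his broth"],
    "knowledge": ["shio ramen", "miso ramen", "neighborhood news"],
    "voice": "middle-aged male, relaxed",
}

response = requests.post(
    API_URL,
    json=jin,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()
print("Created character:", response.json())
```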
I asked Nova if she had any interesting cybersecurity cases lately. She asked if I had something interesting. I asked about the hack against Insomniac Games. She said, “I’m not sure, but I can look into it if you want.” I wondered if all the leaked info was real. She said, “It’s hard to say, but leaks like that usually have at least a grain of truth to them.”
I turned to Jin and asked if he could recommend some ramen. He suggested shio ramen for something light and miso ramen for something more adventurous. I ordered the miso ramen. I asked him how crime in the neighborhood was.
