
Microsoft Build 2018: Your Phone app, Project Kinect for Azure and other key announcements


Microsoft kicked off its Build 2018 developers conference on Monday, ahead of Google’s own I/O 2018 event which is set to start on Tuesday, May 8. The Redmond tech giant opened the two-day event to announce some new developments in artificial intelligence (AI), cloud services, Cortana, Windows, Office and some other products and services that will look to set the stage for the rest of the year.
At this year’s Build 2018, Microsoft unveiled several new products and applications, including the Your Phone app, a new mixed reality tool called Microsoft Layout, a demo of Alexa and Cortana integration, Project Kinect for Azure, a “meeting room” prototype and a new partnership with DJI, among other things. Here’s a deeper look at the most important announcements Microsoft made during Day 1 of Build 2018.
Microsoft announced the “Your Phone” app for Windows 10. The app essentially allows you to mirror your phone to a desktop PC so that you can access texts, photos and notifications on your Windows 10 workstation. The app works with both Android and iOS, though for now more features can be mirrored from Android than from iOS.
The Your Phone app aims to let Windows 10 users access mobile texts and notifications right on their desktop without having to pick up their phones. It is the sort of seamless syncing that Apple already offers between its Macs, iPhones and iPads. Earlier this year, Dell introduced a similar feature that allows calls and notifications to be displayed on a PC. Windows Insiders will get to test the Your Phone app this week, but there is no word yet on when the company plans to roll it out officially to all Windows 10 users.
Microsoft also announced a new mixed reality tool, called Microsoft Layout, that works with either a Microsoft HoloLens or a virtual reality headset. The tool will allow you to create a virtual floor plan through the headset, where you can move real-scale 3D models around to preview how a room would look and feel. As per a demo, you will be able to go to a physical location and view that layout via the HoloLens, letting you see whether the 3D models fit alongside existing objects.
The Layout tool may sound similar to other augmented reality applications that let you create a virtual space and move 3D objects around to plan a layout. But while those tend to be simpler and more consumer-friendly, Microsoft sees Layout as a business-oriented tool for precise planning.
The Redmond giant also announced a new partnership with DJI to build a drone SDK for Windows 10 that will allow full flight control and data transfer to Windows 10 PCs. The SDK will also make it easy to integrate third-party hardware, such as multispectral sensors or custom actuators, with DJI drones. Using applications written for Windows 10 PCs, DJI drones can be customised and controlled for a wide range of industrial purposes.
DJI will also be using Microsoft Azure as its preferred cloud computing partner, which will allow it to use Azure’s AI and machine learning capabilities to turn “vast quantities of aerial imagery and video data into actionable insights for thousands of businesses across the globe.” The DJI SDK for Windows 10 is currently available only as a beta preview to attendees of the Build 2018 conference.
Last year, Microsoft announced that it would integrate Cortana with Amazon’s Alexa so that consumers can access either voice-enabled AI through the other. At the Build 2018 event, Microsoft demonstrated how this integration would work. You will be able to invoke Cortana on an Alexa-powered device by saying, “Alexa, open Cortana.” You can then ask questions or perform actions, such as sending an email, that Alexa would normally not be able to do on its own. Essentially, each virtual assistant will make up for the other’s shortcomings as both companies look to take on Google Assistant.
Similarly, you will be able to summon Alexa through Cortana on a Windows 10 device to perform actions that only Alexa can. Microsoft and Amazon are still working on the integration, which is currently in beta testing, and there is no word yet on when it will roll out to the general public.
Kinect, the motion sensor first launched back in 2010 for the company’s Xbox 360 gaming console, is still very much alive despite Microsoft discontinuing the hardware late last year. Microsoft on Monday announced that it is moving Kinect to the cloud with what it is calling Project Kinect for Azure. In practice, this means the company will combine the depth sensor with Azure AI services, which could help developers build devices that are more precise while consuming less power, Microsoft’s Alex Kipman explained in a LinkedIn blog post.
“Kinect, when we first launched it in 2010, was a speech-first, gaze-first, vision-first device. It was used in gaming, and then, later on, it came to the PC, and it was used in many applications: medical, industrial, robotics, education,” said Microsoft CEO Satya Nadella during the keynote. “We’ve been inspired by what developers have done, and since Kinect, we’ve made a tremendous amount of progress when it comes to some of the foundational technologies in HoloLens. So we’re taking those advances and packaging them up as Project Kinect for Azure.”
Microsoft also demonstrated “meeting room,” prototype hardware that looks to make holding meetings a whole lot easier. The hardware includes a 360-degree camera and microphone array that can pair with Microsoft’s AI services and Microsoft 365 to handle tasks like identification, real-time translation, and transcription of exactly what is being said in a meeting, regardless of the language.
In the demo, the hardware was able to identify each speaker and transcribe what they were saying. The AI tools also allow it to take certain actions based on what is said. For example, if someone says, “I’ll follow up with you next week,” you will get a notification in Microsoft Teams.
