Google open sources Embedding Projector

Google announced that it will open source its data visualization tool, Embedding Projector, which will help researchers visualize high-dimensional data without needing TensorFlow.

Embedding Projector helps researchers make sense of high-dimensional data, such as embedding vectors with dozens or hundreds of dimensions. Since humans can't directly visualize more than three dimensions, the tool projects those high-dimensional vectors down into two or three dimensions that we can actually see and explore.

For two-dimensional data, researchers can create a chart in PowerPoint using the X and Y axes, but as data gains more dimensions it becomes harder and harder to visualize; this is where Google's Embedding Projector comes in.
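The core idea behind this kind of tool is dimensionality reduction: squashing many-dimensional vectors down to two coordinates you can plot. Here is a minimal, illustrative sketch using plain NumPy and PCA (one common projection technique of this kind); the data is random and purely hypothetical.

```python
import numpy as np

# Toy example: 100 "embedding" vectors, each with 50 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))

# PCA via SVD: center the data, then keep the top 2 principal directions.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# Project onto the two directions of greatest variance.
projected = X_centered @ Vt[:2].T

print(projected.shape)  # (100, 2) -- now plottable on ordinary X/Y axes
```

Each 50-dimensional vector becomes a single point on a 2D scatter plot, which is roughly what a projector-style tool renders interactively.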


Google I/O: What to expect

Google I/O is Google's annual developer conference, a three-day event where developers get the latest updates on the newest innovations the company has to offer.

Last year we saw Android M, Android Pay, Google Photos, and Google Cardboard, the company’s VR headset. Here’s what we can expect to see this year.

AI

Let's begin with AI, for no other reason than the fact that anyone interested in AI has been keeping up with Google for the past few months. Google has done a lot of work with AI and has persuaded developers to build on its open-sourced AI software. The most striking accomplishment came when Google DeepMind's AlphaGo AI beat world champion Lee Sedol at Go, one of the most complex games in the world. So it's not surprising that we might see more innovation from Google on the AI front.

Android N 

There’s been a lot of buzz about Android N, the next generation of Google’s smartphone and tablet operating system. Earlier this year in March the company released a developer preview. Android N will have new features such as split-app multitasking, increased battery efficiency, and the ability to reply from within notifications. Like last year we can assume the conference will be heavily influenced by Android N and the latest updates.

Nexus 7

Google recently launched the Pixel C, its latest tablet and a replacement for the Nexus 9. Though it hasn't had the popularity Google might have expected, there have been many rumors that Google will bring back the Nexus 7 tablet.

Chrome OS

Google released its cloud-based operating system seven years ago, and it has received mixed reviews, though opinion is definitely tilting toward the favorable side for many. We might see new updates to the OS and some integration with Android to create a more unified environment.

VR

It's been a big year for virtual reality, with so many options from Facebook, Samsung, Sony, and HTC that it seems as if Google is falling behind with its Cardboard. Not to worry, though: a credible rumor indicates that a new version of Android VR will be shown at the conference. That means we might also see more of Project Tango, the experimental augmented-reality technology that Google has been working on for some time now.


Chirp

It sounds strange at first, but when tech website Recode reported last week that Google was designing a competitor to the Amazon Echo, code-named "Chirp," heads started to turn. Google is already capable of building advanced virtual assistants and has made great strides in voice recognition, which makes such a device all the more plausible, especially since Google has been developing within the Internet of Things realm.

Autonomous Cars

At Google I/O 2015 the company announced that its self-driving cars would be driven on California streets. So far the news for self-driving cars, especially Google's, has been largely good, except for one incident in which an autonomous car crashed into a bus, which was blamed on human error. Google has also announced a potential partnership with Chrysler to build autonomous vehicles. We're sure to see new updates and news about Google's self-driving cars.

There's a lot more to expect from this year's Google I/O, including Project Tango, Project Ara, Android Wear, Project Fi, an update on Nest, and more. Google I/O begins at 10:00 am Pacific Time on Wednesday, May 18, so don't miss it! Look out for the app or head over to the official website to watch the live streams.

IBM's AI Ross has been hired at a top law firm

The law firm Baker & Hostetler has done something risky yet groundbreaking: it has hired IBM's AI Ross, billed as the first artificially intelligent lawyer.

Ross was created by IBM and built on its well-known Watson platform. Ross can read and understand language, form hypotheses when asked questions, conduct research, and return responses with references. Ross also learns from experience: the more it is interacted with, the faster and more knowledgeable it becomes, something that is becoming very common in modern AI systems.

The website Rossintelligence.com says, “You ask your questions in plain English, as you would a colleague, and ROSS then reads through the entire body of law and returns a cited answer and topical readings from legislation, case law and secondary sources to get you up-to-speed quickly.”

Baker & Hostetler is one of the biggest law firms in the country and has hired Ross to work on bankruptcy cases. Ross will work as a legal researcher, able to dig through thousands of legal documents. So while Ross may not appear in court or make legal decisions, the AI will cut the time it takes humans to sift through legal documents.

In addition to Baker & Hostetler, other firms are seeking to hire Ross, and there are other tools like it out there: Legal by Lex Machina mines public court documents to predict how a judge will rule in particular cases, and CaseText digs through thousands of state and federal legal cases to provide data. The trend is toward AI that can dig through, analyze, and surface relevant information faster than humans can.

This push in AI is already making its way into markets beyond tech, so it's likely we'll see a shift in jobs. Take Ross: it is faster, and arguably better, at analyzing and sifting through legal documents, a job often given to recent graduates, for whom it can be a stepping stone to better opportunities. While AI can be very beneficial, we are starting to see a shift in job placements; the concern is that it's happening too fast.


Microsoft Build: the future of AI

You might think an event for developers would be mostly boring, but developer events have gone from dull to incredibly interesting, futuristic spectacles, and as the technology advances, so do the events. Microsoft Build is one of the biggest of them, and you might be surprised by what it has in store. Let's cover one of the most interesting developments: artificial intelligence.

Cortana

Hold your horses: you're thinking we'll see Cortana first, or Windows 10, or HoloLens, but there's so much more that Microsoft is doing. As was constantly stated at the event, Microsoft is really pushing software. The first major update is bots! Microsoft wants to be at the heart of computing through intelligent bots that respond in words to provide you with assistance. And no, let's not think of Tay, which did not have a great start or end. Satya Nadella, CEO of Microsoft, said, "We want to build technology so it gets the best of humanity, not the worst. We quickly realized it was not up to this mark. So we're back to the drawing board." But Tay did show that bots can learn from humans, granted some very nasty ones.

Nadella laid out the vision for everyone: human language is the new UI layer, bots are the new apps, and digital assistants are the new "meta apps." When we think of digital assistants we think of Cortana, which is a really well done assistant, but Microsoft isn't stopping there. Cortana has so far been a largely standalone AI; now Microsoft is integrating popular services such as Skype with Cortana and other intelligent bots so that, for example, Cortana can order your food for you. During the demo, a bot from Cups and Cakes requested an address for delivery; using Cortana as the mediator, the bot was able to obtain the location and arrival time, which was then forwarded to the user. The most interesting, and perhaps a bit intrusive (up to you), moment was when Cortana suggested that the user talk to a friend who was nearby and even composed an automatic message.

Having integrated intelligent bots would allow apps to communicate with each other, all run through the Cortana Intelligence Suite. In the past you had seamless connectivity from your apps to SkyDrive and to many of your devices, but not between the apps themselves; this would solve that issue. Microsoft demonstrated the idea with a Domino's Pizza bot that works across various apps and supports natural language.

Cortana is also gaining more connectivity with your Xbox: she will integrate with the console to help with searches and provide tips, making Xbox usage more seamless. She's also getting an update with the Windows 10 Anniversary Update, where she will be able to recall what you were working on, which places you visited, and help you make calendar appointments from emails and texts. This is similar to what Google Now does, gathering data from sources such as search history to provide relevant information.

These new bots and network of connectivity will change the way users use apps and improve convenience of apps.

South Korea to invest in AI

South Korea announced that it will invest 1 trillion won ($863 million) in artificial intelligence (AI) research over the next five years, in response to Google DeepMind's AlphaGo beating Go champion Lee Se-dol 4-1.

The defeat marked a new stage, not only in the advancement of AI but in the recognition that AI isn't a far-off futuristic vision; it's something changing our lives and our world today. AlphaGo proved its capabilities and convinced South Korea that AI is a field worth real investment and research. Original plans had AI development beginning in 2017, but those plans have now changed.

On March 17th, South Korean President Park Geun-hye said that AI could be a boon, describing it as a fourth industrial revolution, and announced that the research and development effort will be overseen by a council.

Science ministry official Kim Yong-soo stated that while the exact date has not yet been determined, the plan is underway, and the institute overseeing it will be located in Pangyo.


AlphaGo is setting a new stage for the future of AI

It truly is a great time for artificial intelligence (AI), and even more so for us, as we have front-row seats to the beginning of a new era. On March 9th, 2016, Google DeepMind's AlphaGo program beat South Korea's Lee Se-dol at Go, one of the most complex strategy board games in the world.

AlphaGo is a computer program developed to play Go by DeepMind, which Google acquired in 2014. In 2015 it became the first Go AI to beat a professional human player.

The game of Go was created in China more than two thousand years ago. Players take turns placing black or white stones on a board, attempting to capture the opponent's stones or surround territory (empty space) for points. It is regarded as an immensely complex game; according to Google, Go has more possible positions than there are atoms in the universe.
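That claim is easy to sanity-check with back-of-the-envelope arithmetic: each of the 361 points on a 19x19 board can be black, white, or empty, giving an upper bound of 3^361 board states, compared against the commonly cited estimate of roughly 10^80 atoms in the observable universe.

```python
# Upper bound on Go board configurations: each of the 19x19 = 361 points
# can be black, white, or empty (not every such state is a legal position).
positions = 3 ** (19 * 19)

# Commonly cited estimate of atoms in the observable universe: ~10^80.
atoms = 10 ** 80

print(len(str(positions)))  # 173 digits, i.e. roughly 1.7e172
print(positions > atoms)    # True -- vastly more board states than atoms
```

Even this loose upper bound dwarfs the atom estimate by about 92 orders of magnitude, which is why brute-force search alone cannot master Go.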

On March 9th, 2016, AI took a step forward when AlphaGo defeated Lee Se-dol, a 9th-dan South Korean professional Go player considered one of the world's best. At the age of 18, Lee became the second-ranked Go player internationally, and he has been world champion 18 times. He played five games against AlphaGo for a $1 million prize, and on March 10th he lost his second match. AlphaGo had also won against the European champion Fan Hui, along with 499 other matches.

According to the team behind AlphaGo, the program learns by watching other players, picking up different patterns and even learning which patterns are good and which are bad. While games such as chess and checkers are considered simple by comparison, Go is far more complicated. Many professionals predicted this kind of advance was some ten years away, so AlphaGo's win over Lee came as a surprise. Mastering Go has long been a challenge for AI; Facebook is also working on a Go-playing AI, and according to Mark Zuckerberg it is getting close to the level of top human players.

Its ability to learn and improve by watching other players, learning from mistakes, and analyzing millions of possible moves sets a new stage for AI.

Deep Learning with Google and Movidius

Google wants to advance its work with neural networks to make smartphones more powerful, and it is partnering with Movidius to use the company's chips for faster processing in future devices.

Movidius, a California-based company, is a leader in embedded machine vision, providing artificial visual intelligence for next-generation devices. Google has confirmed a partnership with Movidius to bring powerful image recognition technology directly to the smartphone. Right now, certain device capabilities depend on cloud-based apps, through which the device communicates with a server for tasks like recognizing faces, street signs, and fingerprints.

Google will place Movidius' powerful MA2450 chip inside Android devices, making it possible to match images such as faces and signs in real time without uploading pictures. This new technology gives devices the ability to understand images and audio with better accuracy and speed.

The MA2450 is the most powerful processing unit in Movidius' Myriad family, and the Myriad 2 vision processing unit (VPU) is the first always-on vision processor. Google's Project Tango, one of the company's most fascinating projects, already uses a Myriad VPU for real-time 3D mapping and vision on tablets. The MA2450 can execute 2 trillion 16-bit operations per second while moving massive amounts of data, all at just 500 milliwatts of power.
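Taking the two figures quoted above at face value, a quick calculation shows the energy efficiency they imply, which is what makes on-device vision practical for battery-powered phones:

```python
# Efficiency implied by the quoted specs: 2 trillion 16-bit ops/s at 500 mW.
ops_per_second = 2e12
power_watts = 0.5  # 500 milliwatts

# Operations performed per joule of energy consumed.
ops_per_joule = ops_per_second / power_watts
print(ops_per_joule)  # 4e12 -- roughly 4 trillion operations per joule
```

In other words, by these numbers the chip performs about 4 trillion operations for every joule of energy, which is why such workloads can stay on the device instead of a server.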

Check out Project Tango!

Source: Google ATAP | YouTube

This technology allows for better security through facial recognition and the kind of retinal scans you see in futuristic movies. Its applications aren't limited to the smart device market either; they extend to other sectors as well, such as banking and medicine.

Check out the vision of Movidius!

Source: Movidius | YouTube