AI Traffic Lights to manage traffic in Pittsburgh

Pittsburgh is testing AI-driven traffic lights, built around a system called Surtrac, to reduce congestion. The smart traffic management system has produced impressive results.

The team began installing the AI traffic control system at nine intersections in Pittsburgh’s East Liberty neighborhood in 2012. It has since been expanding toward citywide coverage and now spans 50 intersections.

The system has reduced travel time by 25 percent and idling time by over 40 percent, and the researchers estimate that it cuts emissions by 21 percent. Stephen Smith, a Carnegie Mellon University professor of robotics and founder of the startup behind Surtrac, said at the White House Frontiers Conference that traffic congestion costs the U.S. economy $121 billion a year, mostly in lost productivity, and produces about 25 billion kilograms of carbon dioxide emissions.

Conventional traffic lights run on pre-programmed timing plans that are rarely updated. Surtrac instead built smart, artificial-intelligence traffic signals that adapt to changing traffic conditions. The system relies on coordinating the traffic lights: radar sensors and cameras attached to each light detect the flow of traffic, and AI algorithms then take that data and build a timing plan that, according to Smith, “moves all the vehicles it knows about through the intersection in the most efficient way possible.” Smith notes that each signal makes its own timing decisions, so the system is decentralized rather than run from a central hub.
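
To make the idea concrete, here is a minimal Python sketch of a decentralized, demand-driven timing scheme. It is not Surtrac's actual algorithm or interface; the intersection name, sensor counts, and proportional green-time rule are all invented for illustration. The point is that each intersection plans only from what its own sensors report and shares its expected outflow with downstream neighbors.

    # A toy, decentralized signal-timing sketch (not Surtrac's real algorithm).
    # Each intersection plans its own green times from local sensor counts,
    # then tells its downstream neighbors what traffic to expect.

    class Intersection:
        def __init__(self, name, cycle_seconds=90, min_green=10):
            self.name = name
            self.cycle_seconds = cycle_seconds
            self.min_green = min_green
            self.downstream = []          # neighboring Intersection objects

        def plan_phases(self, queue_counts):
            """Allocate green time to each approach in proportion to its queue."""
            total = sum(queue_counts.values()) or 1
            plan = {}
            for approach, count in queue_counts.items():
                share = count / total
                plan[approach] = max(self.min_green,
                                     round(share * self.cycle_seconds))
            return plan

        def notify_downstream(self, plan):
            """Share expected outflow so neighbors can anticipate platoons."""
            for neighbor in self.downstream:
                neighbor.expected_inflow = sum(plan.values())

    # Hypothetical sensor readings (vehicles queued per approach).
    corner = Intersection("Penn & Highland")
    readings = {"northbound": 12, "southbound": 4, "eastbound": 7, "westbound": 2}

    timing = corner.plan_phases(readings)
    corner.notify_downstream(timing)
    print(timing)   # e.g. {'northbound': 43, 'southbound': 14, ...}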

The next step is to have the system talk to cars. The team has installed short-range radios at 24 intersections; in the future, these could warn drivers about traffic conditions ahead or tell them which lights are about to change.
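
As a purely illustrative sketch (not a real vehicle-to-infrastructure message format such as DSRC's signal phase and timing messages), the kind of broadcast a radio-equipped signal might send to nearby cars could look like this, with all field names invented:

    import json
    import time

    # Hypothetical example of the kind of message a radio-equipped signal
    # might broadcast to nearby cars (not a real DSRC/SPaT specification).
    phase_change_message = {
        "intersection_id": "penn-and-highland",
        "current_phase": "green",
        "next_phase": "red",
        "seconds_until_change": 8,
        "congestion_ahead": True,
        "timestamp": time.time(),
    }

    # A car's onboard receiver could decode the broadcast and warn the driver.
    payload = json.dumps(phase_change_message)
    received = json.loads(payload)
    if received["next_phase"] == "red" and received["seconds_until_change"] < 10:
        print("Heads up: the light ahead is about to turn red.")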

Apple takes a different approach to privacy

On Monday Apple showed off many new features across its products, but one in particular showed just how much the company has stepped up its approach to privacy.

Apple announced its use of artificial intelligence in features like facial recognition, authentication, and software that knows what’s in your photos and videos. The newest version of Apple’s Photos app will use facial recognition to create virtual albums of the people users most frequently photograph or appear in photos with. It will also let users search their photos by keyword. Craig Federighi, Apple’s senior vice president of software engineering, said those features are powered by deep learning.

Federighi emphasized that the company runs the machine-learning algorithms behind features such as photo analysis only on a user’s iPhone, not on Apple’s servers. “We believe you should have great features and great privacy,” said Federighi. “When it comes to performing advanced deep learning and intelligence of your data, we’re doing it on your device, keeping your personal data under your control.”
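
Apple has not published the details of these features, but the architectural point can be illustrated with a small Python sketch: a stand-in for a local face-embedding model plus a simple grouping step, all of which runs on the device, so no photo data has to leave the phone. The function names and the fake embedding below are hypothetical, not Apple's implementation.

    import math

    # A minimal, illustrative sketch of on-device photo grouping (not Apple's
    # actual implementation). The point is architectural: the embedding model
    # and the grouping both run locally, so no photo data is sent to a server.

    def embed_face(photo_path):
        """Stand-in for a local deep-learning model that turns a detected face
        into a numeric vector. Here we just fake a vector from the file name."""
        seed = sum(ord(c) for c in photo_path)
        return [((seed * (i + 3)) % 97) / 97.0 for i in range(4)]

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def group_into_albums(photo_paths, threshold=0.5):
        """Greedy grouping: photos whose face vectors are close end up together."""
        albums = []   # each album is (representative_vector, [photos])
        for path in photo_paths:
            vec = embed_face(path)
            for rep, photos in albums:
                if distance(vec, rep) < threshold:
                    photos.append(path)
                    break
            else:
                albums.append((vec, [path]))
        return [photos for _, photos in albums]

    # Everything above runs on the phone; only the finished albums reach the user.
    print(group_into_albums(["IMG_001.jpg", "IMG_002.jpg", "IMG_003.jpg"]))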

 

Goodbye Siri, Welcome Viv

Dag Kittlaus and Adam Cheyer, the creators of Siri, Apple’s digital assistant, have built a next-generation artificial intelligence (AI) assistant called Viv.

Kittlaus and Cheyer have been developing the AI for the last four years, and today, May 9th, 2016, they debuted Viv for everyone to see. Viv works differently by connecting to various services: at Disrupt NYC, the AI handled multiple complex requests by connecting with third-party merchants to buy goods and even book reservations for its user. Viv will also be capable of learning and teaching itself, allowing for more personalization and growth the more it is used.

Viv is considered the next-generation Siri, though it looks and behaves entirely differently. Siri sent queries off to a search engine to display results and could handle only a small number of tasks. Viv behaves more like Amazon’s Alexa, connecting to different platforms, devices, and services. Viv is, in essence, more social than Siri, and it also handles natural-language comprehension better.

Facebook and Microsoft currently have their own AI systems that handle complex requests and connect to third-party services. Microsoft recently announced its bot engine and showed Cortana connecting with different apps, and Facebook has an open platform that allows developers to build bots for Messenger. Viv, though, feels more like Hound, an under-the-radar assistant that hasn’t taken off with general consumers but has an impressive ability to understand natural language and stands out for its speedy results.

Viv’s creators also showed off “Dynamic Program Generation,” which displays the code Viv produces in response to a verbal request, giving a view into how it understands and handles requests.
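
Viv’s internals are not public, but the general idea behind dynamic program generation can be sketched as composing a small “program” of third-party service calls from a parsed request. The intents, merchants, and functions in this Python sketch are all made up for illustration.

    # Illustrative sketch of "dynamic program generation" (Viv's real system is
    # not public; these intents, services, and functions are invented).

    def parse_intent(utterance):
        """Toy intent parser: picks an intent and slots from keywords."""
        if "flowers" in utterance and "send" in utterance:
            return {"intent": "send_gift", "item": "flowers",
                    "recipient": "Mom", "occasion": "birthday"}
        return {"intent": "unknown"}

    # Hypothetical third-party "services" the generated program can call.
    def find_florist(near):            return f"Blossom & Co. near {near}"
    def place_order(merchant, item):   return f"ordered {item} from {merchant}"
    def schedule_delivery(order, day): return f"{order}, delivering {day}"

    def generate_program(request):
        """Compose a sequence of service calls (the 'program') for this request."""
        if request["intent"] == "send_gift":
            return [
                ("find_florist", lambda ctx: find_florist("recipient's city")),
                ("place_order", lambda ctx: place_order(ctx["find_florist"], request["item"])),
                ("schedule_delivery", lambda ctx: schedule_delivery(ctx["place_order"], "her birthday")),
            ]
        return []

    def run(utterance):
        ctx = {}
        for name, step in generate_program(parse_intent(utterance)):
            ctx[name] = step(ctx)
            print(f"{name}: {ctx[name]}")

    run("send flowers to my mom for her birthday")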

Viv and AI assistants like it could change the way we interact with services. In the future you may no longer need to call or look online to purchase goods or services; you could simply tell your AI what you want and it could do it for you. Ordering food, booking flights and rooms, or renting cars could all be handled through AI assistants.

South Korea to invest in AI

South Korea announced that it will invest 1 trillion won ($863 million) in artificial intelligence (AI) research over the next five years, a response to Google DeepMind’s AlphaGo beating Go champion Lee Se-dol 4-1.

The defeat marked a new stage not only for the advancement of AI but for the recognition that AI isn’t a far-off futuristic vision; it’s something that is changing our lives and our world today. AlphaGo proved its capabilities and convinced South Korea that AI is an area it needs to invest in and research. There were originally plans to begin AI development in 2017, but those plans have now been moved up.

On March 17th, South Korean President Park Geun-hye said that AI could be a boon for the country, describing it as a fourth industrial revolution, and announced that the research and development will be overseen by a council.

Kim Yong-soo of the science ministry stated that while the exact start date has not yet been determined, the effort is underway, and the institute that oversees it will be located in Pangyo.

 

AlphaGo is setting a new stage for the future of AI

It truly is a great time for artificial intelligence (AI), and even more so for us, as we have front-row seats to the beginnings of a new era of AI. On March 9th, 2016, Google DeepMind’s AlphaGo program beat South Korea’s Lee Se-dol at Go, one of the most complex strategy board games in the world.

AlphaGo is a computer program developed by DeepMind, which Google acquired in 2014, to play Go. In 2015 it became the first Go AI to beat a human professional.

The game of Go was created in China more than two thousand years ago. Players take turns placing black or white stones on the board, attempting to capture the opponent’s stones or surround empty territory for points. It is regarded as an immensely complex game; according to Google, Go has more possible board positions than there are atoms in the universe.
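
A quick back-of-the-envelope calculation shows why that claim is plausible: each of the 361 points on a 19x19 board can be empty, black, or white, which already gives vastly more configurations than the commonly cited rough estimate of 10^80 atoms in the observable universe (even ignoring Go’s legality rules, which rule many of these positions out).

    # Back-of-the-envelope comparison (ignores Go's legality rules, which rule
    # out many of these configurations, but the gap is enormous either way).
    board_points = 19 * 19                  # 361 intersections on a Go board
    configurations = 3 ** board_points      # each point: empty, black, or white
    atoms_in_universe = 10 ** 80            # commonly cited rough estimate

    print("board configurations  ~ 10^%d" % (len(str(configurations)) - 1))   # ~10^172
    print("atoms in the universe ~ 10^80")
    print("configurations per atom ~ 10^%d"
          % (len(str(configurations // atoms_in_universe)) - 1))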

On March 9th, 2016, AI took a step forward when AlphaGo defeated Lee Se-dol, a 9th-dan South Korean professional Go player considered one of the world’s best. At the age of 18, Lee became the second-ranked Go player internationally, and he has been world champion 18 times. He is playing five games against AlphaGo for a $1 million prize, and on March 10th he lost his second match. AlphaGo had previously beaten the European champion Fan Hui and won 499 of its other matches.

According to the team behind AlphaGo, the program learns by watching other players, picking up different patterns and even coming to understand which patterns are good and which are bad. While games such as chess and checkers are considered simple by comparison, Go is far more complicated. Many professionals predicted this kind of advance was still some 10 years away, so it came as a surprise when AlphaGo beat Lee. Mastering Go has long been a challenge for AI; Facebook is also working on an AI that can play Go, and according to Mark Zuckerberg it is getting close to the level of top human players.
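
AlphaGo’s real training pipeline combines deep neural networks, reinforcement learning, and tree search, which is far beyond a short example, but the basic idea of “learning by watching games” can be caricatured in a few lines of Python: score moves by how often they appear in winning game records, then prefer higher-scoring moves. The game records below are invented.

    from collections import defaultdict

    # A toy illustration of "learning by watching games" (nothing like AlphaGo's
    # real deep neural networks and tree search). We count how often each move
    # appears on the winning side of recorded games and prefer those moves.

    # Hypothetical recorded games: (list of moves, did_black_win)
    game_records = [
        (["D4", "Q16", "Q4", "D16", "C3"], True),
        (["D4", "Q16", "C3", "D16", "Q4"], False),
        (["Q4", "D16", "D4", "Q16", "C3"], True),
    ]

    move_score = defaultdict(float)
    for moves, black_won in game_records:
        for i, move in enumerate(moves):
            played_by_black = (i % 2 == 0)
            won = black_won if played_by_black else not black_won
            move_score[move] += 1.0 if won else -1.0

    def suggest_move(candidates):
        """Pick the candidate move most associated with wins in the records."""
        return max(candidates, key=lambda m: move_score[m])

    print(suggest_move(["D4", "C3", "Q4"]))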

Its ability to learn and improve by watching other players, learning from mistakes, and analyzing millions of possible moves sets a new stage for AI.