Google to buy Moodstocks, a company that can make your smartphone see

Google has announced a deal to buy a French startup called Moodstocks, whose technology lets smartphones recognize what they are looking at.

Moodstocks has been developing on-device image recognition that lets smartphones identify the objects and people they photograph, essentially giving phones eyes. The company also works in machine learning, a field Google has integrated heavily into its services.

“There’s a lot more to be done to improve machine vision,” Google France tech site lead Vincent Simonet said in a blog post.

Google hopes to integrate Moodstocks technology into its AI development. The company’s research and development center in Paris will work with a team from Moodstocks.

It’s interesting to see how far this technology will take Google’s services and products. At its annual developer conference in May, Google announced Google Home, a virtual home assistant that competes with the Amazon Echo. With Google Home positioned as a next-generation assistant, this kind of technology may help devices distinguish between objects and even learn over time to offer more tailored services.

We’ll also undoubtedly see this technology in smartphones in the near future. It may be included in the next Android OS, making phones more functional, more feature-rich, and smarter. The bigger question is whether this technology will make it into the self-driving cars Google is developing. Image recognition could do wonders there: instead of relying on sensors alone, cars could distinguish between stop signs, people, animals, other cars, bicycles, and more.
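As a very rough sketch of how recognition can label what a camera sees, here is a toy nearest-neighbor classifier in Python. The feature vectors and labels are invented for illustration; real systems learn from millions of raw images using deep neural networks, not hand-made features.

```python
import math

# Toy training set: each object is described by a hand-made feature vector
# (redness, height-to-width ratio, edge count). Purely illustrative.
TRAINING = [
    ((0.9, 0.1, 8), "stop sign"),
    ((0.2, 0.9, 4), "pedestrian"),
    ((0.3, 0.3, 4), "car"),
]

def classify(features):
    """Label an observation with the nearest training example's label."""
    _, label = min((math.dist(features, f), lbl) for f, lbl in TRAINING)
    return label
```

The nearest-neighbor rule is the simplest possible recognizer; it only works here because the invented features already separate the classes cleanly.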

Smacc deploys AI to automate accounting

Smacc, an accounting and financial platform, is using AI to automate accounting.

The company, founded by Uli Erxleben, Janosch Novak, and Stefan Korsch, takes receipts submitted by its customers, converts them into a machine-readable format, encrypts them, and pushes them to the customer’s account.

The system checks invoices, sales, and costs, does all the math, and verifies that everything is correct, including VAT IDs. As it learns, it automates more and more of the process.
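As a tiny illustration of one such check, here is a sketch of VAT-ID format validation in Python. The patterns below cover only two EU formats and are a simplification; real validation also involves checksum rules and registry lookups, and Smacc’s actual implementation is not public.

```python
import re

# Hypothetical format patterns for two EU VAT IDs (illustration only).
VAT_PATTERNS = {
    "DE": re.compile(r"^DE\d{9}$"),              # Germany: DE + 9 digits
    "FR": re.compile(r"^FR[A-Z0-9]{2}\d{9}$"),   # France: FR + 2 chars + 9 digits
}

def looks_like_valid_vat(vat_id: str) -> bool:
    """Return True if the VAT ID matches a known country's format."""
    vat_id = vat_id.replace(" ", "").upper()
    pattern = VAT_PATTERNS.get(vat_id[:2])
    return bool(pattern and pattern.match(vat_id))
```

A format check like this would be one small, deterministic step; the learning component described above sits on top of such rules.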

Smacc recently secured a $2.5 million Series A round from Cherry Ventures, Dieter von Holtzbrinck Ventures, Rocket Internet, Grazia Equity, and angel investors. The company is on its way to providing accurate, secure, and automated accounting using AI.

DoNotPay has overturned 160,000 parking tickets in London and New York

DoNotPay, known as the world’s first robot lawyer, has successfully contested 160,000 parking tickets across London and New York.

Created by Joshua Browder, DoNotPay helps users contest parking tickets through a chat-like interface. The program first determines whether an appeal is possible by asking questions, then guides the user through the appeals process. According to Browder, the free app has taken on 250,000 cases and successfully appealed over $4 million in parking fines. Browder plans to expand DoNotPay to Seattle next.
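A guided appeals flow like this can be modeled as a decision tree: each answer narrows down whether, and on what grounds, an appeal makes sense. The sketch below is purely illustrative; the questions and outcomes are invented, not DoNotPay’s actual logic.

```python
# Hypothetical decision tree for a parking-ticket appeal bot.
APPEAL_TREE = {
    "question": "Were the parking signs clearly visible?",
    "no": {"outcome": "Appeal on grounds of unclear signage."},
    "yes": {
        "question": "Was your car broken down at the time?",
        "yes": {"outcome": "Appeal on grounds of vehicle breakdown."},
        "no": {"outcome": "An appeal is unlikely to succeed."},
    },
}

def advise(answers):
    """Walk the tree with a list of 'yes'/'no' answers; return the outcome."""
    node = APPEAL_TREE
    for answer in answers:
        node = node[answer]
        if "outcome" in node:
            return node["outcome"]
    return node["question"]  # more information needed
```

A chat interface would simply surface each node’s question and feed the user’s reply back into the walk.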

If you’re in New York or London, check out the website; it may help you.


OpenAI

OpenAI, an artificial intelligence nonprofit, is working toward a physical robot for household chores.

OpenAI has $1 billion in funding, backed by Elon Musk, Sam Altman, Greg Brockman, Jessica Livingston, Reid Hoffman, and Peter Thiel. You might be wondering why Elon Musk is funding an AI company when he’s hellbent on making sure AI doesn’t take over the world. According to OpenAI’s site, the organization has a different mission in mind: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The nonprofit was launched in 2015 as a counterbalance to the advances being made by corporations such as Google, Facebook, and Microsoft.

In a blog post by Altman, Brockman, Musk, and Sutskever, the founders explain that they don’t want to manufacture the robot themselves, but to “enable a physical robot…to perform basic housework.”

OpenAI wants to build an AI that can understand natural language, learn on its own, and solve complex problems. The team is aiming to build new algorithms to advance the field, because existing ones are not yet capable enough. “Today, there are promising algorithms for supervised language tasks such as question answering, syntactic parsing, and machine translation but there aren’t any for more advanced linguistic goals, such as the ability to carry a conversation, the ability to fully understand a document, and the ability to follow complex instructions in natural language,” OpenAI noted.

The company has released a beta of OpenAI Gym, a toolkit targeting advances in reinforcement learning, as a first step toward that goal.
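Gym standardizes the reinforcement-learning loop: an agent observes a state, takes an action, and receives a reward. As a rough illustration of that loop (not Gym itself; the toy corridor environment and all parameters below are invented), here is tabular Q-learning in plain Python:

```python
import random

# Toy Gym-style environment: an agent on a 5-cell corridor earns a reward
# for reaching the rightmost cell. Actions: +1 (right), -1 (left).
N_STATES, ACTIONS = 5, (1, -1)

def step(state, action):
    """One environment step: returns (next_state, reward, done)."""
    next_state = max(0, min(N_STATES - 1, state + action))
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def q_learn(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Learn action values with epsilon-greedy tabular Q-learning."""
    random.seed(0)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if random.random() < epsilon:           # explore
                action = random.choice(ACTIONS)
            else:                                   # exploit current estimate
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```

After training, the learned values prefer moving right in every cell, which is exactly the optimal policy for this corridor.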

Olli and Watson: a new way to travel

IBM’s AI platform, Watson, is being integrated into the Olli shuttle.

Olli is an electric-powered autonomous minibus with the capacity to carry up to 12 people. It was designed by Arizona-based Local Motors using technologies such as 3D printing.

According to IBM, a unique version of Watson will be integrated into Olli to improve the passenger experience, though Watson won’t be navigating or driving the vehicle. “IBM technology, including IBM Watson or IBM Watson IOT technology, does not control, navigate or drive Olli. Rather, the IBM Watson capabilities of Olli will help to improve the passenger experience and allow natural interaction with the vehicle,” the company said.

A few Watson APIs will be integrated into Olli: Speech to Text, Text to Speech, Natural Language Classifier, and Entity Extraction. Passengers will be able to talk to Olli and ask for the locations of landmarks, attractions, restaurants, and much more.

These vehicles are planned to operate in Washington, DC, and then expand to Miami-Dade County and Las Vegas.

Automated vehicles will one day change how we travel, and the taxi industry most of all, and IBM, one of the world’s largest technology companies, is taking its first steps. “Olli offers a smart, safe and sustainable transportation solution that is long overdue,” John B. Rogers, co-founder of Local Motors, said in a statement. “Olli with Watson acts as our entry into the world of self-driving vehicles, something we’ve been quietly working on with our co-creative community for the past year. We are now ready to accelerate the adoption of this technology and apply it to nearly every vehicle in our current portfolio and those in the very near future. I’m thrilled to see what our open community will do with the latest in advanced vehicle technology.”

Google is making chatbots with human-level speech abilities

Google is known for working on innovative, futuristic technology, and AI is one of its focus areas, though one of its AI projects is more chatty than the rest.

At a Singularity conference, Google engineers revealed that they have been working on chatbots. Ray Kurzweil, the computer scientist and futurist who leads the team, announced that these chatbots will be released later this year.

“That’s very relevant to what I’m doing at Google,” Kurzweil said in the interview. “My team, among other things, is working on chatbots. We expect to release some chatbots you can talk to later this year.”

These chatbots will be human-like bots that can converse with you much as a person would. One of them, named Danielle, has speech patterned after a character from a book. Kurzweil said people could make their own unique chatbot by feeding it samples of writing, which would allow bots to take on different personalities.
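One simple way to tailor a bot’s voice to a writing sample is a Markov chain that learns which word tends to follow which. Kurzweil’s bots are surely far more sophisticated; this is only a sketch of the underlying idea of shaping output from sample text.

```python
import random
from collections import defaultdict

def train(sample: str):
    """Learn, for each word in the sample, which words can follow it."""
    words = sample.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start: str, length: int = 8, seed: int = 0) -> str:
    """Generate text in the sample's 'voice' by walking the chain."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: the sample never continued past this word
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)
```

Feed it a different author’s text and the transition table, and hence the generated “personality,” changes accordingly.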

He did go on to say that it will take some time for bots to reach human-level language abilities, so they might not work the way you’d want them to, at least for now. Kurzweil estimates we’ll have to wait until 2029, when AI will be able to pass the Turing test, making it indistinguishable from real people.

Researchers are teaching robots to ‘feel’ and react to pain

Scientists at Leibniz University of Hannover are programming a robot to feel pain and react to it.

The researchers are developing an artificial nervous system that allows robots to register pain and respond to it appropriately. The system uses a reflex controller to mimic reactions to pain.

The robot reacts to different levels of pain, from light and moderate to severe, and has even been tested with a cup of boiling water. It is being trained to react to pain and move itself out of danger. The project was presented at the IEEE International Conference on Robotics and Automation in Stockholm, Sweden.
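A reflex controller of the kind described might map sensed pain levels to graded retreat behaviors: the stronger the pain, the larger and more urgent the retraction. The thresholds and responses below are invented for illustration and are not taken from the Hannover system.

```python
def classify_pain(sensor_value: float) -> str:
    """Map a normalized sensor reading (0..1) to a pain level."""
    if sensor_value < 0.3:
        return "none"
    if sensor_value < 0.6:
        return "light"
    if sensor_value < 0.9:
        return "moderate"
    return "severe"

# How far (meters) to pull back, and what to do next, per pain level.
RETRACTION = {
    "none": (0.0, "hold position"),
    "light": (0.02, "retract slowly, resume task"),
    "moderate": (0.10, "retract quickly, wait"),
    "severe": (0.30, "retract fully, stop task"),
}

def reflex(sensor_value: float):
    """Reflex arc: sensor reading in, (level, distance, behavior) out."""
    level = classify_pain(sensor_value)
    distance, behavior = RETRACTION[level]
    return level, distance, behavior
```

The point of a reflex layer is that it runs below any planning logic, so the retreat happens before the robot “thinks” about it.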

An important point: because robots don’t feel pain, they can work in dangerous environments and perform tasks too risky for humans. An artificial sense of pain, however, would let robots assess, understand, and react to threats, especially when working near humans.

And while these robots won’t actually feel pain, this is a first step toward understanding how to emulate it. So don’t worry, robots aren’t becoming human just yet; the technology is still very primitive.

Some things to expect at Apple’s WWDC

Apple’s biggest conference, WWDC, or the Worldwide Developers Conference, is almost here, just next month!

It’ll be the 27th WWDC; it starts on June 13th (very ominous) and ends on June 17th, and will be hosted at San Francisco’s Bill Graham Civic Auditorium. But what can we expect from Apple’s biggest conference? Let’s take a look at the rumors and predictions.

Apple Music

We can expect to see something for Apple Music, the streaming service Apple launched in 2015. The service has not met Apple’s expectations, or ours. Apple reportedly has an estimated 13 million subscribers, but even so the service can’t compete with Spotify, which has an estimated 30 million subscribers and recently modified its family plan to match Apple’s. And while the service has grown, there have been reports and complaints about its user interface and integration issues.

So we might see upgrades to the service: new artists, new functionality and features, a better user interface, improved streaming, and an expansion of its radio service. If Apple wants to reach the top, it will also have to improve its royalty policy.


OS X

This year’s OS X will definitely be showcased at the conference, as Apple recently released beta updates for developers. The new OS X El Capitan beta arrives alongside updates to tvOS and iOS 9.3.3. We might also see Siri come to the Mac, much like Cortana on Windows 10; MacRumors recently reported on a possible Mac dock icon and menu bar icon for Siri. If you want to jump into the beta, you can download it from the Apple Developer Center, but be careful: it will be riddled with bugs, so use a backup device just to be sure.

iOS 10

iOS 10 will most likely also make an appearance; it’s pretty much expected. We might see user interface changes, performance improvements, new features, and new apps in the App Store. New iOS versions have been unveiled at past WWDCs, so this year should follow suit.


Macs

WWDC is mostly oriented toward software, although we have seen hardware showcased at the conference, so don’t be surprised if you see a new Mac. There have been many rumors about the possibility of a new MacBook Pro this year.

Apple Watch

The Apple Watch was released in April 2015, so we’re due for another release; hopefully we’ll see something new. The Watch has tough competition from the Samsung Gear S2, especially now that Samsung is working to introduce iPhone compatibility. There’s also the much-coveted Sony SmartWatch 3, the Asus ZenWatch 2, the Moto 360 (2nd gen), and the Huawei Watch; even Garmin and Pebble have smartwatches. With so many second- and third-generation smartwatches out there, Apple will have to come out with a watch that outshines the rest if it wants to compete.

WWDC will be a chance to see all kinds of new things from Apple. The conference runs June 13–17, and you can stream the event live through the WWDC app or on Apple’s website.

Google I/O: what to expect

Google I/O is Google’s annual developer conference, a three-day event where developers get the latest updates on the newest innovations the company has to offer.

Last year we saw Android M, Android Pay, Google Photos, and Google Cardboard, the company’s VR headset. Here’s what we can expect to see this year.


AI

Let’s begin with AI, for no other reason than that anyone interested in AI has been keeping up with Google for the past few months. Google has done a lot of work in AI and has persuaded developers to build on its open-source AI software. The most striking accomplishment came when Google DeepMind’s AlphaGo beat world champion Lee Sedol at Go, one of the most complex games in the world. So it’s not surprising that we might see more innovation from Google on the AI front.

Android N 

There’s been a lot of buzz about Android N, the next generation of Google’s smartphone and tablet operating system. The company released a developer preview in March. Android N will bring new features such as split-screen multitasking, improved battery efficiency, and the ability to reply from within notifications. As in past years, we can expect the conference to lean heavily on Android N and its latest updates.

Nexus 7

Google recently launched the Pixel C, its latest tablet and the replacement for the Nexus 9. It hasn’t had the popularity Google might have expected, though, and there have been many rumors that Google will bring back the Nexus 7 tablet.

Chrome OS

Google released its cloud-based operating system seven years ago, and it has drawn mixed reviews, though opinion definitely tilts favorable for many. We might see new updates to the OS and some integration with Android to create a more unified environment.


VR

It’s been a big year for virtual reality, with so many options from Facebook, Samsung, Sony, and HTC that Google seems to be falling behind with its Cardboard. Not to worry, though: a credible rumor indicates that a new Android VR platform will be shown at the conference, which means we might also see more of Project Tango, the experimental augmented-reality platform Google has been working on for some time now.



Chirp

It sounds strange at first, but when the tech website Recode reported last week that Google was designing a competitor to the Amazon Echo, code-named “Chirp,” heads started to turn. Google already builds advanced virtual assistants and has made great strides in voice recognition, so a device like this is well within its abilities, especially given its push into the Internet of Things.

Autonomous Cars

At Google I/O 2015, the company announced that its self-driving cars would be driven on California streets. Since then the news for self-driving cars, especially Google’s, has been largely good, apart from one incident in which an autonomous car crashed into a bus. Google has also posted about a potential partnership with Chrysler to build autonomous vehicles. We’re sure to see new updates and news about Google’s self-driving car.

There is a lot more to expect from this year’s Google I/O, including Project Tango, Project Ara, Android Wear, Project Fi, an update on Nest, and more. Google I/O begins at 10:00 am Pacific Time on Wednesday, May 18; don’t miss it! Look out for the app or head over to the official website to watch the live streams.

IBM’s AI Ross has been hired at a top law firm

The law firm Baker & Hostetler has done something risky yet groundbreaking: it has hired Ross, billed as the first artificially intelligent lawyer.

Ross was created by ROSS Intelligence and built on IBM’s popular Watson platform. Ross can read and understand language, form hypotheses when asked questions, conduct research, and return answers with references. Ross also learns from experience, responding faster and gaining knowledge the more it is used, something that is becoming very common in modern AI systems.

The website says, “You ask your questions in plain English, as you would a colleague, and ROSS then reads through the entire body of law and returns a cited answer and topical readings from legislation, case law and secondary sources to get you up-to-speed quickly.”
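Conceptually, a cited answer can come from scoring each source against the question and returning the best match along with its reference. The toy corpus and keyword-overlap scoring below are invented for illustration; Ross’s Watson-based pipeline is far richer than this.

```python
# Toy two-document "body of law" with citations as keys (invented example).
CORPUS = {
    "11 U.S.C. § 362": "an automatic stay halts collection actions when a bankruptcy petition is filed",
    "11 U.S.C. § 541": "the bankruptcy estate includes all legal and equitable interests of the debtor",
}

STOPWORDS = {"a", "an", "the", "is", "of", "to", "when", "what", "does"}

def answer(question: str):
    """Return (best-matching passage, its citation) by keyword overlap."""
    q_terms = {w.strip("?.,").lower() for w in question.split()} - STOPWORDS

    def score(text):
        return len(q_terms & set(text.split()))

    citation = max(CORPUS, key=lambda c: score(CORPUS[c]))
    return CORPUS[citation], citation
```

Real legal research systems replace the keyword overlap with full natural-language understanding, but the shape of the output, an answer plus a citation, is the same.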

Baker & Hostetler, one of the biggest law firms in the country, has hired Ross to work on bankruptcy cases. Ross will serve as a legal researcher, able to dig through thousands of legal documents. So while Ross may not appear in court or make legal decisions, the AI will cut the time it takes humans to sift through legal documents.

In addition to Baker & Hostetler, other firms are looking to hire Ross. There are also other systems like it, such as Lex Machina, which mines public court documents to predict how a judge will rule in particular cases, and Casetext, which digs through thousands of state and federal legal cases to surface relevant data. The trend is toward AI that can dig through, analyze, and deliver relevant information faster than humans can.

This push in AI is already reaching markets well beyond tech, so we are likely to see a shift in jobs. Take Ross: it does a faster, and arguably better, job of analyzing and sifting through legal documents, work often given to recent graduates, for whom it can be a stepping stone to better opportunities. While AI can be very beneficial, we are starting to see a shift in job placements; the only problem is that it’s happening too fast.