Meet Dr. Ayanna Howard: Roboticist, AI Scientist, and Old School #Blerd

At a CES 2016 panel on AI, there was discussion that AI is moving out of “science project” territory and becoming something with more practical, real-world applications. Do you agree?
I do, although I don’t think people realize it. For example, if you use your phone and you use Siri, and you are always asking for a new Thai restaurant in Atlanta, eventually it learns, “This person is not interested in Chinese or soul food.” We don’t even think about it – it learns as you use it. If you go to Google and you search on different machines, you get different results.

There was also discussion that the term “artificial” is outdated and not quite accurate … that there needs to be a new way to think about AI. Your thoughts on that?
I never use the term “AI.” I use the term “humanized intelligence.” The whole aspect of intelligence is that learning is done in the context of people. It’s our environment. We are using these systems to enhance our quality of life. We would not be happy with an artificial system that does stuff that might be optimal but not in the way we do things.

What do you see as the difference between business uses of AI versus consumer uses?
I see, at least in the startup space, that a lot of the companies getting investments are in the data-mining space. Look at Netflix – that’s an enterprise learning people’s preferences [to] deliver ideal content. On the consumer side, machines really help out our own quality of life. For me, it’s my own personal preference – individuals listen to one song [for example], and then, with preferences, the next time [you sign in] you get better [selections].

It’s almost kind of scary … once you use these learning apps, they are pretty good at “getting it” in a short time. Algorithms are getting better, and of course there is more data.

Stephen Hawking, Bill Gates, and other tech leaders wrote a letter about the danger of AI after the military announced it was funding research to develop autonomous, self-aware robot soldiers. Hawking wrote that “humans, limited by slow biological evolution,” couldn’t compete and would be superseded by AI, and that AI could be more dangerous than nuclear weapons. What are your thoughts on this?
I am on the other side of the camp. [AI] is no more dangerous than any other kind of tech. If I give you anything, there is a good and a bad; by nature we have good people and bad people. You can’t stop that.

The problem is, if you say ‘no,’ the good people cannot work on it. Then all you have in society are those who are not following the rules – just creating the bad. So then we are destined to go down the path we don’t want to go down. We can create tech that is good and has social impact.

What will AI be like in 20 years?
I do predict that it will just be “programs,” not called “intelligence.” I see learning algorithms integrated into any tech you can think of: appliances, cars, phones, even our education system. And I also see it integrated into hardware – into robotics, trains, physical things … as well as [continued integration] into smart homes.

Finally, do you see, especially as a professor, progress in the numbers of minorities in STEM studies or careers?
I do, but [there’s] a caveat. It’s better – and although I see an increase in the number of women and minorities, it still doesn’t reflect the demographics; there is still that gap. Is the gap widening if you include the world’s demographics? Yes, it might be, but if I look at the year-to-year increase, it’s better.
