NUS Tech Feature – Dr. Harold Soh

In our first NUS Tech feature, we speak to Dr. Harold Soh, Assistant Professor at the National University of Singapore (NUS) School of Computing, to learn about his work on human-AI/robot collaboration. Read on to find out if robots will eventually take over the world!

Prof. Harold Soh (L) and Alvina (R) at The Robot Living Studio, located in the NUS School of Computing.

What do you do?

I’m interested in enabling intelligent machines and human beings to work together and solve problems. This could be something like helping children with disabilities get around, helping people design new objects, or even putting IKEA furniture together. There are many applications where human beings and intelligent machines can come together to solve problems more effectively.

Back in the UK, your team designed a smart paediatric wheelchair that won the runner-up prize in the UK James Dyson Award. What is special about this wheelchair?

Our smart paediatric wheelchair was designed to help children with disabilities get around safely, but one thing we were very careful about was not taking control away from the children. It is not like an autonomous car that just takes you from point A to point B. Developing children need a sense of control, and they need to make mistakes. However, we would like to avoid very dangerous situations. You want to provide some amount of safety but still let them explore, which this wheelchair allows them to do through a form of shared control: there’s onboard intelligence that shares control of the wheelchair with the child.

The Assistive Robotic Transport for Youngsters (ARTY) smart paediatric wheelchair.

How does this wheelchair work exactly?

You can think of the wheelchair as a miniaturised self-driving car. There are sensors and lidars (laser-based sensors) that allow the wheelchair to perceive its environment. There are also sonars and bumpers. It has a joystick, which reads in information when the child wants to move. The system takes all that data together, from the sensors and from the joystick, as the user decides where to go next. Ideally, it would go in whatever direction the user wants, but occasionally it will veer off to the left or right, depending on how dangerous the situation is.
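The shared-control idea described above can be sketched in a few lines of code. This is a minimal illustration, not the actual ARTY software: it assumes a hypothetical `blend_commands` function that mixes the child's joystick command with a safe command from the onboard planner, weighted by a danger estimate (for example, derived from lidar ranges).

```python
def blend_commands(user_cmd, safe_cmd, danger):
    """Blend the child's joystick command with the planner's safe command.

    user_cmd, safe_cmd: (linear, angular) velocity pairs
    danger: 0.0 (open space) .. 1.0 (imminent collision)
    """
    alpha = max(0.0, min(1.0, danger))  # clamp the mixing weight
    return tuple((1 - alpha) * u + alpha * s
                 for u, s in zip(user_cmd, safe_cmd))

# In open space the child's command passes through unchanged.
print(blend_commands((0.5, 0.0), (0.0, 0.4), danger=0.0))  # (0.5, 0.0)
# Near an obstacle, control shifts toward the safe command,
# steering the chair away without fully taking over.
print(blend_commands((0.5, 0.0), (0.0, 0.4), danger=0.8))
```

The key design point matches the interview: at low danger the child retains full control and is free to make mistakes, and the machine only intervenes proportionally as the situation becomes hazardous.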

Has the wheelchair been tested out?

Yes, we have! We tested it with able-bodied children, and also with two children with disabilities at a physiotherapy centre in the UK.

Do you think robots will take over the world?

(Laughs) Not yet, at least not for the near future. Unlike the robots in Sci-fi, our robots are really not that capable yet. We are a very long way from anything like the Terminator. What I foresee in the future is a symbiotic relationship between the two, where human beings and robots work together to solve problems more effectively than what we can do today. That’s the future that I’m hoping to help engineer.

What do you think are the most pressing problems that robots can help us to solve?

We can purpose-build robots for dangerous tasks. I think those are the things that we want to automate. The stuff that we don’t want to do and robots can help us there. For example, jobs that are not so pleasant such as cleaning sewage waste or hazardous nuclear material. There are also things that we want to do but we would like robots to help us to do those things more effectively. Imagine if you had an intelligent AI assistant to help you focus on your core tasks by managing your schedule or proofreading your work. Also, machines don’t have the same cognitive biases or blind spots that we have so, they can help us make better decisions. Of course, they have their own faults, but that is precisely why I believe a human-machine team is an ideal partnership.

Going back to your research using Artificial Intelligence, what do you do as a researcher? Behind all the big words and jargon, what does that actually mean?

My work involves developing new methods and algorithms, but we try to ground them in real-world tasks with human subjects. My work is a little different from standard machine learning in that we run human-subject experiments. After all the math and the computational experiments, we bring humans into the lab and get them to do tasks with a robot or AI in order to study how the whole system performs. We do studies to understand the science behind human-robot/AI interaction and to test new algorithms. In this way, the research progresses.

What are the major challenges you face while doing your research?

For now, I think there are two major challenges. One is what I said earlier: robots are not that capable. Our robot technology is still quite brittle, both in terms of software and hardware. They break all the time, so we spend time fixing the robots, and that is quite painful sometimes because you want to get on with the experiment but you need to stop to repair the robots. The second challenge is that people are complex. We always have a hypothesis about what people might do when they participate in our experiments, but there will always be individuals who surprise us. They do something completely different; they just go off the rails and try something else! It’s challenging sometimes, but also the most exciting part of my research.

Do you see your robots being commercialised in the future?

There are already companies trying to commercialise robots and related technologies. For example, something people are trying now is Robot Learning by Demonstration; basically, people teaching robots to do things. Getting humans and machines to work together is actually an old area, so you might be familiar with the field called human factors. Human factors engineers spend a lot of their time studying how people and machines work together, for example, how pilots fly planes and how to minimise errors. I think the key difference nowadays is that our robots are more autonomous and more intelligent, and are capable of a wider range of tasks. We may start to see some of these ideas being commercialised in industries we haven’t yet imagined.

How can we at NUS Enterprise help you? Any call-to-actions?

I am always contactable through email, so you can just send me an email or go to my website. I am mostly interested in hearing what real problems people are facing and what their pain points are. How can we help them take away some of that pain using machine learning (ML), artificial intelligence (AI) and robots? Not to replace humans, but really to help them solve their problems. I’m also interested in getting subjects! Getting real-world people to come in and perform tasks, that would be fun. If anyone is interested in getting involved, just drop me a line.

Watch the full video interview with Professor Harold Soh here:

For startup updates straight to your inbox, subscribe to our weekly newsletter here.