
Liu Feng-Yuan on AI governance: don’t view innovation and governance as opposites

Written by Emily Fang

Feng-Yuan highlights the repercussions of innovating without a policy in mind.

Liu Feng-Yuan is the Co-Founder and CEO of BasisAI. Prior to this, he spent many years harnessing data science and AI for the public good as part of Singapore’s Smart Nation initiative. You can read BasisAI’s blog on responsible AI here.

This interview has been edited for brevity and clarity. 

KrASIA (Kr): How does policy catch up with ever-evolving technology?

Liu Feng-Yuan (LFY): The disjunct you describe between policy and technology is very real. I spent my early career in the public sector and in public policy.

When I think about legislation, it’s lawyers who draw up the rules, not the engineers who write the code. When I was part of the Smart Nation office, we started to hire software engineers and data scientists to write code; they were public servants. When a policymaker or lawyer writes a rule, an engineer often looks at it and says, ‘I can’t implement that. It’s not that I don’t want to, but I don’t even know what that means. This rule could just as well be written in a foreign language.’

A lot of what we’re doing is looking at the regulations around AI governance. The Singapore government, together with the World Economic Forum, has released a set of principles on AI governance. What Bedrock [from BasisAI] tries to do is give engineers and data scientists the right tools to do the right thing, with positive constraints. We don’t want responsible AI to just be about checking a box. We provide data scientists who are building AI models with a paved road, so they can do their work better and faster. If you want to do the wrong thing, you can walk off the cliff, but if you want to do the right thing, let us empower you to do the right and responsible thing.

How does this work in Bedrock? For every AI application a data scientist builds, we audit and vet how it treats attributes like gender or ethnicity. The platform looks at whether the model is treating men and women differently; if it is, and that breaches the rules, the software won’t get deployed live.
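
To make that concrete, here is a minimal sketch of the kind of pre-deployment fairness gate Feng-Yuan describes, assuming a binary classifier and a single sensitive attribute; the function names, data layout, and threshold are illustrative assumptions, not Bedrock’s actual API.

```python
# A minimal sketch of a pre-deployment fairness gate, assuming a binary
# classifier and a single sensitive attribute. Function names, threshold,
# and data layout are illustrative assumptions, not Bedrock's actual API.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

def vet_for_deployment(predictions, groups, max_gap=0.05):
    """Block deployment if the model treats groups too differently."""
    gap = demographic_parity_gap(predictions, groups)
    if gap > max_gap:
        raise RuntimeError(
            f"Deployment blocked: parity gap {gap:.2f} exceeds {max_gap}"
        )
    return True

# Example: loan approvals (1 = approved) split by a sensitive attribute.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]
try:
    vet_for_deployment(preds, groups)
except RuntimeError as err:
    print(err)  # parity gap 0.50 exceeds 0.05, so the model is held back
```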

The individuals and leaders within your organization will have to make that call; we give them the tools to make stakeholders aware of the implications of a choice of AI model. The key is to provide better visibility into the trade-offs. That’s what I mean by bridging policy and tech: our technology is trying to bridge that gap.

Kr: In many tech hubs, there is a rather big disconnect between policymakers and technologists. In Singapore, it seems like they’ve been intentional about hiring more technologists into the public sector.

LFY: As a company, you’ve got to think about your existing customers and your brand, and about doing the responsible thing from the beginning rather than as an afterthought. And I see an increasing “techlash” arising from a mistrust of technology companies. Look at the evolution of social media. It is undoubtedly powerful technology that at some points in history helped in the fight for freedom for people in oppressive regimes. But it has also now become an avenue for misinformation.

There are longer-term implications, and that creates danger. There needs to be a balance of values if you want to run disruptive technology. Sometimes you do need to move fast and break things, but when you are handling data and personal information, you need to shift the balance in favor of being more careful and responsible.

The team at BasisAI in Singapore. Courtesy of BasisAI.

Kr: What are some examples you’ve seen where regulation has failed to control AI and technology? 

LFY: Let’s look at the situation with Boeing and the 737 Max, where the autopilot systems went wrong. These are complex software systems and are arguably artificial intelligence. They’ve been flying in the air for many years. The industry has regulators, but it’s highly technical and not always easy for regulators to see what’s going on. Over time, presumably, incentives got misaligned.

That is a fascinating example of how human beings are prepared to put their lives in the hands of complex software systems. We put trust in aviation systems, and most landings and takeoffs are handled by a computer. So much trust was lost in that entire edifice, which had been built up over years. You can read that as a failure of regulation.

To some extent, with all new technologies, there’s always going to be some friction with regulation. Often, regulators never envisaged how technology would be used. But over time, regulations adapt and evolve. This also happened with online marketplaces and ride hailing, when they first came about.

One of the big debates when online marketplaces were established was: who is accountable if there is a poor-quality product on the marketplace? Would the marketplace accept responsibility, or would it just say it’s an intermediary? It’s the same with ride-hailing and ride-sharing, with the likes of Uber, Lyft, and Grab. Are drivers employees or contractors? Regulations have had to evolve.

Kr: How do you find the best intersection of innovation and moving fast, but also within the parameters of regulation? 

LFY: This is where most people believe innovation and governance act in opposition. Many believe the only path to innovation is to move quickly, and that governments always have more boxes for you to check. In lots of segments that’s the case, but there are also ways to do things more productively.

Let me give you an example: if you’re collaborating in Google Docs, you have a shared document that tracks different versions, and with Git, software engineers can change, edit, and collaborate on code. That is good governance, because you can trace changes to the code, see the whole history of the codebase, and go back to the individual who made each change.

With this, I know the provenance of how the software has evolved; I can fix things, roll back, and have visibility and auditability. It accelerates the way the team works, because the worst thing for me is to make a change while someone else makes another change that didn’t take mine into account, so we’re not aligned. Having good version control in a tool is incredibly powerful for productivity and innovation as well as governance. We have the same approach with Bedrock. We don’t just check the boxes; we’re here to give you tools to do your work better, with the appropriate safeguards put in place.
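
As an illustration of how that Git-style provenance might carry over to AI models, here is a minimal sketch that content-addresses a model artifact and appends an audit record; the names and log format are assumptions, not BasisAI’s implementation.

```python
# A minimal sketch of Git-style provenance applied to model artifacts:
# every version is content-addressed (like a commit hash) and appended
# to an audit log. Names and log format are illustrative assumptions.
import hashlib
import json
import time

def record_model_version(model_bytes, author, log_path="model_audit.jsonl"):
    """Append an auditable record of who changed the model, and when."""
    entry = {
        "sha256": hashlib.sha256(model_bytes).hexdigest(),
        "author": author,
        "timestamp": time.time(),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Each retrained model gets a traceable entry, so a change can be traced
# back to the individual who made it, and rolled back if needed.
record_model_version(b"...serialized model weights...", "data.scientist@example.com")
```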

Bedrock is an end-to-end machine learning platform that orchestrates the prototype-to-production cycle in minutes, not months. Courtesy of BasisAI.

Kr: One of my initial questions was going to be, ‘Can you explain what you guys do for AI dummies?’ but you basically just did that for me. So Bedrock is an end-to-end platform that tracks changes, which policymakers and technologists can use to work together on deployment.

LFY: Think about it like an operating system. Zoom, Minesweeper, and Microsoft Word are applications that sit on top of an operating system, which could be iOS, Android, or Windows. What the operating system does is make it easier to develop applications.

If I’m developing a new app for the iPhone, the fact that there’s an operating system means I don’t need to build everything from scratch. It also lets me deploy the app and make sure it doesn’t crash; if it does crash, it will automatically reboot. That’s what the operating system does. At the same time, the operating system puts governance in place: an app won’t have permission to use my camera and microphone unless I grant it, and the OS will stop one app from taking data from another.

That’s the best analogy for what Bedrock does. It’s an operating system that accelerates the development of AI applications while also putting the right governance in place, so people can develop and use AI applications more safely. All technology can do is encourage you to do the right thing. As I mentioned, we provide a paved road to developing AI solutions, while responsibility stays with the individual developing them. That’s really the gist of it.
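
Under the operating-system analogy, a permission check for an AI application might look something like this sketch; the class and permission names are hypothetical, for illustration only.

```python
# A minimal sketch of OS-style permissions applied to an AI application.
# The class and permission names are hypothetical, for illustration only.

class GovernedApp:
    """Lets an AI application read only the data sources it was granted."""

    def __init__(self, name, granted):
        self.name = name
        self.granted = set(granted)

    def read(self, data_source):
        # Like an OS denying camera access, deny any source not granted.
        if data_source not in self.granted:
            raise PermissionError(f"{self.name} may not read {data_source!r}")
        return f"data from {data_source}"

app = GovernedApp("credit-scoring-model", granted={"transactions"})
print(app.read("transactions"))      # allowed
try:
    app.read("health_records")       # never granted
except PermissionError as err:
    print(err)
```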

Liu Feng-Yuan is the Co-Founder and CEO of BasisAI. Photo courtesy of BasisAI.

Kr: Why was being in this AI space so attractive to you? 

LFY: When I was in the Smart Nation office in Singapore, we had this really amazing multidisciplinary team of policymakers, data scientists, and big data engineers, and we were developing data science for the public good. That was incredibly exciting, because you could see how using data and AI allows us to be much more empirical.

AI is about taking data, both structured and unstructured (images, speech, text), and making decisions based on that data. I saw a huge potential to scale that outside the public sector and outside of Singapore. When I met my co-founders, I realized that what they were doing in Silicon Valley was easily ten years ahead of what I’d seen in the industry here.

We saw a huge potential to really adopt AI, but to do it differently. Perhaps because of my background, I wanted to do it in a more responsible way, one that has the right safeguards but doesn’t impede the speed of innovation. We collectively felt that was really important.

Kr: Where do you project the AI space is heading?

LFY: There’s a lot of hype and excitement over AI, and a lot of companies are starting to work out how to implement it to reap the benefits. It’s going to move past the hype into productivity and really power a lot of systems. As momentum increases, there’s going to be greater awareness of the risks and concerns around how AI is used. Is it violating people’s rights, or ethical and cultural values? What shape or form will that take? I think it’s important that we allow AI adoption to continue, but at the same time, we haven’t yet put the right safeguards in place.

We want to help teams do the right thing with the tech rather than slow them down, and I see this as an ongoing tension in how AI is used. There will be some pullback and backlash. The companies that are able to walk that path, harnessing AI while also gaining the trust of consumers and regulators, are the ones that will be more successful.

Kr: How do global companies conform with different policies when it comes to regulation? 

LFY: At a high level, there’s an emerging consensus around principles. They’re fairly similar: you should be able to trace the provenance of how an AI system was built, keep what goes into it under version control, and track how the algorithm is learning. Another principle is making sure you can monitor how well the AI works once it’s in production and live. And you should make sure the decisions behind AI are explainable and can be audited for potential biases.
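
As a concrete illustration of the monitoring principle, here is a minimal drift check that flags when live inputs stray from the training distribution; the statistic and threshold are simplifying assumptions.

```python
# A minimal sketch of the monitoring principle: flag when live inputs
# drift away from what the model saw in training. The z-score statistic
# and threshold are assumptions chosen for simplicity.
from statistics import mean, stdev

def drift_alert(training_values, live_values, z_threshold=3.0):
    """True when the live mean strays too far from the training mean."""
    mu, sigma = mean(training_values), stdev(training_values)
    z = abs(mean(live_values) - mu) / sigma
    return z > z_threshold

training = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0]   # feature values seen in training
live     = [7.9, 8.1, 8.0, 7.8]             # the same feature in production
if drift_alert(training, live):
    print("Input drift detected: review or retrain the model.")
```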

For each country and industry, whether it’s healthcare or finance, there are different nuances around the specific regulations they want to put in place.

Where we’re going now is moving from principles through to best practices.

As the industry evolves, we’re going to have to develop more specific point solutions for different industries in different contexts. If I’m running a recommendation or personalization engine for a bank, the governance and regulatory requirements will be different from those for a medical imaging model at a healthcare institution [for chest X-rays or other types of scans].

It’s still very nascent. In the next three to five years, we’ll see a lot more color and specificity in the types of government regulation. But it’s a really exciting space, with a lot still to evolve.

Kr: Where do you want to take BasisAI?

LFY: We want to be the go-to technology company for any Fortune 500 or ambitious data-driven company that wants the right foundation for responsible AI. We want to help our partners transform their organizations from the ground up. We want to take the best from Silicon Valley and bring it to traditional enterprises that realize they need to fully embed data in how they make decisions. I also want to make sure we have a solid foundation so that customers can continue to trust our partners, because their brand is at stake. Those are the two key value propositions we bring to any company that wants to be more data-driven and is excited about the potential.
