On Tuesday, September 6, 2016, Northrop Grumman Chairman, Chief Executive Officer and President Wes Bush addressed Kansas State University as part of its Landon Lecture Series. Below are his remarks.

The Exciting Future of Autonomous Systems 

Thank you, Dick.

I’ve known and admired General Myers for several decades. He serves as my reference point for the phrase “Great American.” KSU is fortunate to benefit from his leadership.

I’m delighted to be with you this evening. And I’m delighted to be back on this beautiful campus. I had the pleasure of speaking with your engineering department a little over three years ago. During both visits I have been struck by how much purple I see. I have three kids and I have done a lot of touring of universities and every university has its school colors, but often a visitor needs to be told what they are. On this campus, it’s pretty obvious.

Another reason I’m so pleased to be here is because speaking at the Landon Lecture Series is such an honor. I really appreciate your invitation to speak during this, the fiftieth year of this series.

Now, I’m an engineer by training and profession. So I have a passion for the positive impact that technology can have on our world. I’m privileged to work at what I believe is the most innovative and forward-leaning technology company in the aerospace and defense industry, Northrop Grumman.

In my career, I have seen incredible technological advances. I think the record is pretty clear that our nation’s defense community has contributed many innovations that have made our lives better in domains beyond security. It’s a long list: Telecommunications satellites; global positioning – or GPS; the Internet, of course; and even advanced medical prosthetics are a few examples.

When you combine the technologies that have come out of the defense industry with technologies from other industries – and from great universities like KSU – these technologies, and many others, have made incalculable contributions to the advancement of the human condition:

  • The mitigation of poverty through the creation of whole new industries;
  • The expansion of healthcare access to people who, a few short years ago, would have had little chance for it;
  • And perhaps most impactful of all – at least as far as the betterment of mankind is concerned – the democratization of knowledge by placing it at the fingertips of anyone with a smart phone or laptop almost anywhere around the globe.

Our world has an amazing innovation ecosystem. And that ecosystem includes great universities, such as KSU. Universities have played an absolutely critical role in the development of these technologies. And they will continue to be a core part in enabling this progress for humankind.

These advances are built on a wide variety of technologies, and the integration of these technologies into systems. It will surprise none of you that a key thread common to many of these miraculous advances is computing power. We take the yearly increase of computing capability for granted. And, if you are like me, you often wonder what the next great revolution will be that computing power will enable.

One direction that we are moving in – a direction that will perhaps have one of the greatest impacts on our lives and our world since the computer itself – is the advancement of autonomous systems.

Some call this robotics. But I refer to these technologies as “autonomous” because the term “robotics” has come to be misunderstood. Many people confuse systems that are remotely controlled with robots. But true robots are not remotely controlled. Autonomous systems are able to perform their tasks without an ongoing connection to humans.

And those tasks are becoming more and more complex; more and more indispensable, and more and more available to people every year. This is why this topic is so interesting. And why it’s so important.

Apart from computing power, there have been several other developments that set the stage for this new age of autonomy. GPS is certainly one.

Autonomous systems would have very limited utility without an ability to keep track of their locations in at least two dimensions – that is to say, left and right and forward and backward. In the early days of GPS, its precision was measured in feet. Today it is measured in inches – or sometimes even less – and that capability has proven instrumental in the advance of autonomy.

GPS, and other precision-location technology, is largely what will allow these systems to move from the confines of the factory floor out into the real world affording humans immense new benefit. We can already see it happening. Think of agricultural equipment, such as unmanned agricultural harvesters or, in the world I work in, pilotless aircraft. 

Another critical advance is the explosive growth of sensor technologies and, concurrently, the miniaturization of those sensors. Today, there are sensors to tell you if the pipes in your home are freezing; if your tires are getting too bald; if your farm land is ready for planting; or, if your autonomous system is straying from its assigned duties. A biosensor that you swallow can send its data to the gastroenterologist monitoring his computer screen from any place in the world.

And because they are so small, a sensor for every conceivable function or eventuality can be inexpensively connected into just about any autonomous system. 

The miniaturization of sensors reflects the miniaturization of electronics in general. In one lifetime, we have moved from vacuum tubes to highly integrated circuit components that are nearly microscopic. This has made possible the creation of innumerable systems and made innumerable others practical. And then of course, computing power, which I have mentioned already.

But increasing computing power is simply not adequate by itself to enable what we see on the horizon. If we drill down even further to get at the real potential of autonomous systems we can see why. The real potential transcends simple autonomy and lies instead in cognitive autonomy.

These are autonomous systems that operate with the judgment, and ultimately the ethics, we would expect of a human being performing the same function. Let me give you a sense of how enormous the development challenge is for cognitive autonomy.

As you know, there are many companies here in the U.S. and around the world that are working to develop driverless cars. The general public might assume that it’s not that tough a problem. You might think that GPS, combined with the right sensors and control standards, will keep the car on course, and will ensure that it doesn’t hit the car in front of it.

Sounds simple.

But let’s think about the real world. Let’s say a person runs out in front of that car. And let’s say that car is moving too fast to stop in time. Now the car needs to figure out which direction to swerve to avoid hitting the person. And let’s say that if the car swerves to the right it risks hitting other people on the sidewalk. But if it swerves to the left, it risks hitting oncoming traffic.

The systems controlling that car need to prioritize the risks, evaluate the potential harm of each option, and act on those evaluations. These systems need to do it instantly, and the results must be at least as good every time as one would expect from the best human driver. And hopefully, the results would be better.
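The trade-off he describes can be caricatured in a few lines of code. This is a deliberately minimal sketch of expected-harm minimization; the maneuver names, probabilities, and harm values are all invented for illustration and do not come from any real autonomy stack:

```python
# Minimal caricature of risk-weighted decision-making for the swerve
# scenario described above. All numbers and option names are invented;
# a real system would estimate these continuously from sensor data.

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm.

    options: dict mapping maneuver name -> (probability of collision,
             estimated harm if that collision occurs).
    """
    def expected_harm(item):
        _, (p_collision, harm) = item
        return p_collision * harm

    name, _ = min(options.items(), key=expected_harm)
    return name

# Hypothetical snapshot of the scenario in the lecture:
scenario = {
    "brake_straight": (0.9, 10.0),   # likely hits the pedestrian ahead
    "swerve_right":   (0.5, 8.0),    # may hit people on the sidewalk
    "swerve_left":    (0.3, 9.0),    # may hit oncoming traffic
}

print(choose_maneuver(scenario))  # -> "swerve_left" for these numbers
```

The hard part, of course, is not the minimization itself but producing trustworthy probabilities and harm estimates in milliseconds, which is exactly where cognition comes in.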

However you want to think about it, the actions of an autonomous car must reflect the same concern for human life as a human driver. That’s cognitive autonomy. And ideally, such an autonomous vehicle would be able to act without a human’s judgment lapses or execution inadequacies.

Now scale up the computing power needed for that self-driving car operating in two dimensions, and let’s add a third dimension – up and down. Imagine how much computing power is necessary for the safe operation of a pilotless airliner with several hundred passengers aboard. That’s a much bigger problem than can be solved by a GPS receiver and a set of sensors, as critical to the solution as they are. And this is where the revolution begins, because it’s a challenge that cannot be met by computing power alone.

So, before I go any further, let me address a misconception – an understandable one, but a misconception nonetheless.

Many non-technologists presume that technology progresses with analytical continuity, where future results simply build on the results of the past. We all understand Moore’s Law – the observation that computing capability has doubled roughly every eighteen months. And I think this is one reason why so many take technology’s progress for granted. It is easy to presume that any computing-based problem can be solved if we are patient enough to wait for Moore’s Law to catch up to our ambitions.
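The arithmetic behind that intuition is easy to check. A quick sketch, taking the eighteen-month doubling at face value (the function name and period are my own framing, not a standard formula):

```python
# Growth implied by a doubling every eighteen months (1.5 years),
# taking the lecture's paraphrase of Moore's Law at face value.

def capability_multiplier(years, doubling_period_years=1.5):
    """Factor by which capability grows over `years`."""
    return 2 ** (years / doubling_period_years)

# Over one decade, that's roughly a hundredfold increase:
print(round(capability_multiplier(10)))  # -> 102
```

A hundredfold per decade is why waiting for hardware feels like a strategy; the speech’s point is that for cognitive autonomy it is not.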

But the development of cognitive autonomy is a different animal. It isn’t just about the progression of hardware capabilities, so the necessary breakthrough could not come from the simple advancement of computing power. Something else is required – something that would allow a machine to learn. That something turns out to be algorithms.

Now, an algorithm is a very simple thing in concept. It would appear to be nothing more than a set of rules designed to allow a computer to solve a specified problem. But today’s algorithms need to learn, and to apply that learning to solving unexpected problems. Think back to our driverless car example. Real operational environments present the inherent randomness of the real world. Those algorithms must be able to deal with unpredictability. That is not a linear progression of technology – that’s what we call a breakthrough.
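One way to see the difference is to contrast a fixed rule with a rule whose parameter adapts to experience. This toy sketch uses a generic running-average update – chosen only to illustrate the idea of learning from observed data, not any production algorithm, and with all numbers invented:

```python
# Toy contrast between a fixed rule and a rule that learns from
# experience. The "learning" here is an exponential moving average,
# the simplest possible stand-in for adaptation.

FIXED_STOPPING_DISTANCE = 30.0  # hard-coded guess, in meters

class LearnedStoppingDistance:
    """Estimates stopping distance from observed stops."""

    def __init__(self, initial=30.0, rate=0.2):
        self.estimate = initial
        self.rate = rate  # how strongly each new observation counts

    def observe(self, measured):
        # Blend each new measurement into the running estimate.
        self.estimate += self.rate * (measured - self.estimate)

model = LearnedStoppingDistance()
for measured in [42.0, 40.0, 44.0, 41.0]:  # wet-road stops, say
    model.observe(measured)

# The fixed rule still says 30 m; the learned estimate has moved
# toward the conditions actually encountered.
print(FIXED_STOPPING_DISTANCE, round(model.estimate, 1))
```

The fixed rule fails the moment conditions differ from its author’s assumptions; the adaptive one keeps revising itself – a cartoon of the distinction the lecture is drawing.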

Well, that breakthrough has been made. Let me tell you about the X-47B. The “X” stands for experimental. This is a pilotless aircraft flying on and off aircraft carriers, refueling in flight, and performing many other functions as well. Like the driverless car, the cognitive systems on this autonomous aircraft ultimately need to operate with judgment and reliability far better than the best conceivable human pilot.

But unlike the car, this aircraft must perform in three dimensions:

  • at altitudes of tens of thousands of feet;
  • at velocities of hundreds of miles per hour;
  • in highly variable and adverse weather conditions, both in the air and maneuvering on the carrier deck;
  • and in hostile environments where the enemy is trying to shoot it down, jam it, or hack into it.

And it must be able to do those things on its own, thousands of miles away from its launch point. And one last wrinkle, just to make it even more interesting. This aircraft has to do it all without a tail, because if it has a tail, it won’t be stealthy. And for those of you who know about airplanes, when you take that tail away, it makes it unstable, so it is even more challenging to fly.

There are simply too many variables to actively program into the software. Only a machine that can learn – that can deal on its own with the unexpected – can meet these types of requirements.

Landing is especially challenging: two bodies – the carrier deck and the aircraft – each moving in three dimensions, each with unpredictable movements created by the sea and the air. Yet its algorithms must enable it to land with high reliability.

This isn’t just a dream; it’s working. This amazing aircraft is a reality. Its first take-off and landing on the deck of an aircraft carrier occurred in 2013. It was a momentous step forward for autonomous systems. And it has continued to fly with extraordinary reliability. In fact, one telling measure of its reliability is this: on landing, it touches down on precisely the same point on that pitching flight deck, time and time again – so reliably that the Navy has asked us to program some randomness into its landing performance to keep that part of the flight deck from prematurely wearing out. And it does it night or day, in good weather or bad, high seas or calm.

Last year, the X-47B managed a successful air-to-air refueling from a manned aircraft. In that achievement, it had to compensate for three bodies moving in three dimensions – the X-47 itself; the manned refueling aircraft; and the refueling basket at the end of a long flexible hose, which the X-47’s refueling probe had to engage. Adding that third body to the equation complicated the challenge by orders of magnitude.

The cognitive systems controlling it need to be able to prioritize the importance of the particular mission – which is variable – against the risks to the human beings aboard the manned refueling aircraft. Those systems need to factor in…

  • the roughness of the air;
  • the time that the refueling aircraft can spend on station trying to complete the procedure;
  • threats to the human flight crew from whatever dangers are present;

And a whole host of other factors, which are changing from moment to moment. And with each change, the priorities of the other factors are affected in a non-stop domino effect. The challenge was immense and mere computing power was not enough. Only a machine that could learn could do it.

Defense and national security uses of this technology are very important, and we as a company are focusing a lot of effort on them. But, quite frankly, I think those applications are small in scope relative to the technology’s potential to improve the human condition. Recall that the first use of rockets was as weapons. Now we use them to explore the solar system and the universe beyond.

Autonomous systems, too, have a larger and more impactful future outside of the defense arena than they do inside it. You can get an inkling of this by looking at their uses in advanced manufacturing.

There are two things to know about modern manufacturing. First, the modern factory floor often looks more like a laboratory than the kind of traditional automobile factory we probably envision. And second, the autonomous systems used in modern manufacturing are less and less executing repetitive tasks and more and more collaborating with human workers. They are still assembling large industrial objects like cars and engines. But they are also precise enough to assemble small electronics that could fit on a pin-head. And they are increasingly cognitive: you may see a human worker moving the machine’s arms through a task sequence, teaching it what it needs to do; the machine learns the task, and then improves on it.

They are also getting lighter. One auto maker in Europe is using units that weigh less than 70 pounds. That combination of light weight and cognition affords them such versatility that they can shift from one task to another in different locations on the factory floor. It also reduces their costs and brings great economy to the operation. The three-thousand-pound, multi-million-dollar giants bolted to the floor will become fewer and fewer.

What are the implications? Reduced manufacturing and labor costs, and lighter, more versatile systems every year. Together, this means that smaller manufacturers will eventually be able to compete with the traditional giants of industrial manufacturing. This could unleash an ocean of human innovation without the enormous capital investments that keep so many good ideas from ever seeing the light of day. It could also mean a reduced premium on low-cost labor, which some analysts believe could bring many manufacturing jobs back to the U.S. These would be different jobs – high-tech and leveraging knowledge.

These same advantages are spilling over into other areas as well. Agricultural harvesting is becoming more and more automated, reducing farm labor costs and increasing efficiency. Medical automation could save the enormous man-hours spent on menial tasks, allowing medical professionals more time to do what they were trained to do: spend time with patients. Automated systems could also monitor patients and notify doctors and nurses when they need to intervene. And of course, the uses of autonomy in transportation are highly anticipated.

These are some of the near-term applications of this technology. The longer term applications are much more difficult to wrap our brains around because the potential is just too vast and varied to foretell. But I can tell you that, for me, the idea of autonomous systems dispatched to Mars to build research bases that are up and running and safely ready to receive human occupants upon their arrival, is far more exciting than the prospect of being able to read my e-mails or watch TV in my driverless car during my morning commute. That would be fun too, but there are bigger things I think we could do.

Now, I know that many fear the constant march of technology; and the idea of autonomous machines that can learn may sound frightening to some people. But here is the reality: Cognitive autonomy is a genie that is well out of the bottle. It cannot be ignored. How we choose to embrace, adapt and manage it will determine much of our future – our future prosperity, security, knowledge, and human progress.

There will be setbacks and growing pains. But there always are in any endeavor. The airmail service was established in 1918. It ultimately offered 24-hour service, coast to coast, along an air highway of lighted beacons. It pioneered air navigation and all-weather flying. It was the parent of today’s airline industry as well as our national weather service. Yet, of the first forty pilots hired by the airmail service, only nine were still alive two years later. Several years beyond that, however, pilot fatalities were few and far between.

How we deal with the inevitable setbacks will impact the path forward. And societal acceptance of cognitive autonomy may turn out to be the pacing factor in its adoption and growth. As an example, despite the six million auto accidents per year in the U.S. alone, resulting in 35,000 deaths and two million injuries, with an estimated 90% being the result of human error, we can all safely bet that driverless technology will be ready and available long before society is ready to embrace it.

Driverless cars will be able to cut the safe distance between moving vehicles from a matter of many yards, depending on your speed, to just inches at any speed. Imagine what that would do for highway crowding alone. But it would also likely require the separation of driverless vehicles from those with drivers, with human drivers perhaps feeling like second-class citizens, at least initially.
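A rough back-of-the-envelope calculation shows why the capacity gain would be dramatic. The figures below – car length, a two-second human following gap, an inches-scale automated gap – are illustrative assumptions of mine, not traffic-engineering data:

```python
# Back-of-the-envelope lane capacity: vehicles per hour equals the
# distance the traffic stream covers per hour divided by the space
# each vehicle occupies (its length plus the gap behind it).
# All figures are illustrative assumptions.

def lane_capacity(speed_mph, gap_ft, car_length_ft=15.0):
    feet_per_hour = speed_mph * 5280.0
    return feet_per_hour / (car_length_ft + gap_ft)

human = lane_capacity(60, gap_ft=180)     # ~2-second human following gap
driverless = lane_capacity(60, gap_ft=3)  # inches-scale automated gap

# Shrinking the gap raises capacity by roughly an order of magnitude.
print(round(human), round(driverless))
```

Even allowing for generous safety margins, the direction of the result – a many-fold jump in throughput per lane – is hard to escape.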

The point is that societal acceptance of new technology almost always lags behind its pace of development. And because politics – at least in a democracy – always happens downstream of innovation and culture, our efforts at legal and policy accommodation are necessary, but not sufficient, if this technology is to realize its potential. Frankly, this is not all bad, although we need to ensure we don’t impair our progress relative to that of other innovative nations.

Currently, the only players in these efforts to socialize machines that can learn are technologists at one end, and popular culture – books, movies, and television – at the other. And the mass media’s vision of this technology is almost universally dark. Anyone who has ever watched a “Terminator” movie knows what I’m talking about.

We need a conversation among people who occupy the vast space between engineers and Hollywood. And this is where Kansas State University comes in. Not just KSU, of course, but all universities and colleges. Institutions like this have been vital in contributing to the development of this technology. And of course, one of the primary roles of places like KSU is that of creative disruptor. Traditionally, you do this by creating synergies that wouldn’t exist if left to develop one by one. In this manner, institutions like KSU help build the intersections between technologies.

When universities do it right, the fears and reservations associated with new technologies are calmed, and their potential is socialized and made welcome in our lives. The result can be the advance of the human condition. When this function is neglected for a new technology, tremendous potential can be wasted and the opportunity costs can be enormous – even tragically so. The unused potential of genetically modified foods is a good example.

In my view, what is needed is for the void between technologists and popular culture to be filled with other voices – social scientists, historians, legal scholars, ethicists, botanists and agriculturalists, economists, theologians, medical specialists, astronomers, citizens, and anyone else who has something thoughtful to offer on this challenge that we face.

Questions need to be debated: What should these machines be allowed to do or not do? What restrictions or parameters should be engineered into them? How do we know they are engineered right? There are a host of other questions and we need a lot of good thinking. Papers and articles need to be published to provoke this thinking. Forums need to be conducted and thoughtful leadership – including at the political level – needs to be engaged. But like it or not, this genie is not going back into the bottle. I’m not worried by that as long as we recognize the risks inherent in any new technology and proactively manage them, rather than try to ignore or avoid them.

We are almost to a point where it is easier to include the algorithms necessary to allow a machine to learn than it is to attempt to program an action for every contingency. That represents a tipping point – a line of demarcation beyond which it makes little sense not to pursue machine learning.

I think there’s a lot riding on this moment. Personally, I am very excited by it. But I am also aware that this is a global vector. The U.S. does not have a lock on this technology. How we choose to manage the adoption of cognitive autonomy will impact our global standing for generations to come.

To my mind, this is a logical follow-on to the information revolution, which has made almost infinite amounts of knowledge available to virtually everyone in any language, and which has spawned uncountable dreams, visions, and ideas. But nothing of practical use to mankind was ever made fully real by an idea alone. It was only created by actions taken. Those actions might have been inspired by ideas, but it was the actions themselves that effected the change.

Yes, this technology stands to enable enormous progress, like the exploration of the universe. But it also stands to allow people to take the knowledge afforded them by the information age, with all the dreams, visions and aspirations it inspires, and translate it into the actions necessary to benefit all of us. It’s hard to imagine something more exciting than that. And speaking here at KSU, at an institution with so many thoughtful people, I can’t wait to see the role that you and other universities will play in helping us realize this great potential.

Thank you for having me here this evening.
