
INTERVIEW

Dr. Norman I. Badler

By Joseph Glantz

Norman I. Badler is the Rachleff Professor of Computer and Information Science at the University of Pennsylvania. 

He received his BA in Creative Studies (Mathematics) from the University of California Santa Barbara in 1970, his MSc in Mathematics from the University of Toronto in 1971, and his PhD in Computer Science from the University of Toronto in 1975.

He served as Senior Co-Editor of the journal Graphical Models for 20 years and presently serves on the editorial boards of several other journals, including Presence.

His research involves developing software to acquire, simulate, animate, and control 3D computer graphics human motions - body, face, gesture, locomotion, and manual tasks - both individually and for heterogeneous groups.

He has supervised or co-supervised 62 PhD students, many of whom have become academics or researchers in the movie visual effects and game industries. 

He is the founding Director of the SIG Center for Computer Graphics, the Center for Human Modeling and Simulation, and the ViDi Center for Digital Visualization at Penn. 

He has served Penn as Chair of the Computer & Information Science Department (1990-94) and as the Associate Dean of the School of Engineering and Applied Science (2001-05).

You made an interesting comment in a recent commencement speech at the College of Creative Studies at the University of California Santa Barbara, the college you yourself attended. Your speech discussed the motivators for creativity in terms of six universal expressions: surprise, sadness, disgust, anger, happiness, and fear.

For you, fear was the strongest motivator for passion and creativity. Passion, because the mind and body must become powerful to break the emotion. Creativity, because responsive actions may not come from rote or typical actions. Creativity is often the impassioned response to fear. So, I'm just wondering whether you can elaborate on that because it's kind of a unique perspective. 

Professor Badler: I was trying to understand myself and address what motivated me. I was asked to give this talk and I'd never done a commencement speech before. 

I was aware of work being done in psychology on facial expressions, so I used that context to suggest a theme. There are many expressions and emotions. Paul Ekman showed that those six expressions were universal across cultures. Conquering fear is the motivator. It's the Mount Everest effect. You want to conquer it because it's there.

It sort of just fell into place that the times in my life when I felt the most creative were the ones that were the most challenging or the ones I was least expecting. Accomplishing something often begins with not knowing where to turn. Indecision and indirection are kinds of fear that lead to new thoughts or new directions.

R. Queiroz, S. Musse, and N. Badler, "Investigating Macroexpressions and Microexpressions in Computer Graphics Animated Faces," Presence 23(2), pp. 191-208, Spring 2014.



The impetus for discussing emotions was also triggered by hearing one of my colleagues say that creativity is about finding your passion. That seemed like a real cliché to me; telling people to find their passion is not helpful.

There is a physics professor at Harvard, Lisa Randall, who has also written some opera librettos. She was asked [on a PBS talk show] whether she thought there was any difference between artistic creativity and scientific creativity. She said that, so far as she knew, the only difference was that scientific creativity could be proven wrong. Do you have any thoughts on that question, since you deal in both realms?

I'll be the first to admit that I'm not artistically creative. My father moved in construction industry circles, so very early on I thought I wanted to be an architect. Architects understand how things are made, and I was handy enough to be able to make things. As an undergraduate, I took a terrific course in modern architecture.

I considered going to graduate school in architecture, and then it dawned on me that I was missing a critical piece. I lacked the design creativity that characterizes good-to-great architects. I found I could make things; I liked doing work that was mostly constructive, without any pretense that it was artistic.

And that realization was fine. It was better to find out early before investing in graduate school.

I fell into computer science. I had known how to program for a long time, but I hadn't quite recognized that there was something innately satisfying about creating programs - probably as satisfying as what an artist feels in creating her own work. I didn't view programming as an art. I viewed it as a satisfying creative enterprise.

A few questions about your Digital Media Design program. It’s refreshing that you're combining different disciplines – art, technology, and communication. In an age where the world is becoming more specialized, do you think that the emphasis on looking at different fields is something that other disciplines should be considering?

Well, there are a lot of people who talk about interdisciplinarity. In general, I think mixing disciplines is a good idea. Cross-fertilization helps take problems that present themselves in one field and embed them in another. I prefer to focus on applying, or attempting to apply, solutions from one field to another.

I'm fond of telling students, very early on in the computer graphics course, that this is probably the most interdisciplinary course you'll ever take. The origins of computer graphics themselves are highly interdisciplinary. There was no evolution, no linear path like from physics to cosmology. Computer graphics drew from medicine, engineering, graphic design, art, education, and computation. Its growth was an interesting mix where no one thing was more important than any other. 

Not every field has the benefit of being able to say its roots are so widely spread. Anywhere that tree grows, I consider part of my field. 

I can appreciate how the arts and these other fields make for better computer graphics students. Is the reverse true? Do fine artists, illustrators, and painters become better artists because they take computer graphics courses?

The answer with respect to the Digital Media Design program is clear. Digital Media Design is an actual Engineering degree within computer science in an engineering school. 

Our DMD students are trained first and foremost as engineers. They also happen to be artistically creative to a greater or lesser extent. The flip side is that fine arts students are welcome in our courses, but in a sense the entrance fee is often much higher than they desire. Artists are not necessarily concerned about learning to program, proving theorems, or understanding physics.

There is such phenomenal software available today that artists can focus on creativity without necessarily getting into the inner workings of the software.

Many of your alumni have gravitated to Pixar out in California. Where are some of the other places your computer graphics students go to work?

We're very proud of our alumni. They've gone to places like Pixar, Walt Disney Feature Animation, DreamWorks, Industrial Light & Magic, and Blue Sky. In the past they've gone to Sony, Warner Brothers, Weta, and other big-name studios. In the beginning, the students who went to these places generally were Ph.D. students. They were well-trained. They were specifically educated in topics of interest. Then, somewhere around the early 2000s, we had undergraduates who were able to compete with the Ph.D.s for certain positions, often because they deeply understood artistic concepts.

Over the last 15 years, there's been a significant change in recruiting at these major studios. They understand that programs such as Digital Media Design at Penn and at other schools mean they can hire computer graphics undergraduates. It does reduce the market for Ph.D. students, because DMD graduates better fill a new niche position called “Technical Director”.

Do your students go into fields like finance or medicine, or is it mostly the entertainment side of things?

All of the above. If we look at just the undergraduate pool within DMD, they divide into multiple compartments. Some go to game companies, although that's actually a small fraction because it's very hard to break in as a beginning programmer. Usually a game company is so aggressively oriented toward bringing product to market that they only hire people who have experience.

We have a good number of students who go into the Internet side of things – Microsoft, Apple, Google, Facebook, Twitter, Snapchat, and those types of companies. There's a large pool of students who work for companies that build mobile app products. Another group decides they’re more artists than programmers – they go into graphic design, publishing, or broadcasting. Some former students are having a really good time doing work for NBC Sports and CBS Sports. 

Some students become veterinarians and doctors. Some of them decide to get their MBAs. What I really like is that they're not tracked into one career option. 

What computer graphics can we look forward to in the movies in the next few years?

There is so much computer graphics in movies these days that it's hard to say what will happen next. The artistry and creativity that go into using these fundamental computer graphics tools are what make the movies look great. It's not because technical people have said: here's a piece of software, let's see what it does. The artist says: give it to me and let me see what I can make it do. That's why collaboration is extremely important.

I think we'll see a lot more work on what are called digital doubles. These are fully 3D animated lifelike characters based on real actors. We already see them for stunts and for complex scenes. Of course, anything that looks like a monster is not a guy in a rubber suit these days.

The integration of live action with virtual sets and real scenes is becoming more and more seamless. Virtual sets are now the norm in the industry. There’s no need to build epic-scale things. They build a little bit and the rest is virtual.

Getting the lighting right is really important. There are new graphics tools coming along, especially at Disney, Pixar, and Blue Sky, that are really oriented toward getting lighting correct. Viewers are very sensitive to the lighting. That's one of the things that pops out or stands out if done poorly.

You also have the SIG Center which works with computer graphics and animation. And now there's ViDi. Can you explain these ventures?

The SIG Center is the laboratory space where we do our computer graphics work. Historically it's actually in the original ENIAC room. So, it's hallowed ground. The reason it's called the SIG Center is that six of my Ph.D. students went to work for this large local company called Susquehanna International Group. They were able to get a corporate gift back to Penn to renovate that space. 

The SIG Center houses laboratory space for student projects and meetings. It contains two research centers: ViDi and HMS. ViDi (Digital Visualization) is a play on the classic Latin phrase Veni Vidi Vici (I came, I saw, I conquered). HMS is the Center for Human Modeling and Simulation. Historically, HMS came first. It led to the Jack software and other projects. ViDi started in 2013, so it's in its relative infancy now.

ViDi was designed with a very specific purpose, which harks back to this theme of integration we discussed earlier. 

Visualization projects from the ViDi Center: Reading Terminal Market, Iovine Brothers Produce.

The ambient image shows the Mosque of Córdoba rendered under simple ambient light, a physically inaccurate and perceptually invalid result. The caustic cone image shows a second view of the mosque rendered under our Caustic Cone method, producing a more perceptually valid representation.

J. Kider, R. Fletcher, N. Yu, R. Holod, A. Chalmers, and N. Badler, "Recreating Early Islamic Glass Lamp Lighting," Proc. International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST), 2009.
The first project we had came about almost by accident. We worked with an art history professor, Renata Holod. One of my DMD students had taken her class and built a 3D model of the Great Mosque of Córdoba, which is a marvelous Byzantine-era building in Spain.

We looked at that model and said: well, it looks really nice, but lighting it the way we light computer graphics models meant it looked like electric light. We knew that in the Byzantine era there wasn't any electric light. Renata Holod explained that these buildings were really lit by oil lamps and elaborate candelabras that held multiple glass containers.

We wanted the site to look like it did when it was originally illuminated. We started a collaboration with one of my Ph.D. students, Joe Kider, and another colleague, Alan Chalmers, who was visiting us at that time.

We learned some really remarkable things. For example, if you light a candle, the light goes up. This meant filling these candelabras that were hanging from the ceiling with candles would've done nothing to illuminate the ground below where people were reading their prayers. 

The Penn Museum had some of these ancient glass fixtures. We found contemporary vessels of similar shape. When we filled a pointy-bottomed glass flask with water, olive oil, and a wick and lit it, the light went DOWN. Who'd have thunk it, right? So, we did empirical experiments in the lab to quantify the effects. We took photos and videos. Then we built a computer model of this kind of lighting.

So, the project served a kind of dual purpose. One was to simulate what the site looked like back in that time. The other was to imagine how people back then used their own creativity to solve problems.

Right. We wouldn't have thought of the problem if the art history professor hadn't come to us with the challenge. We created an implementation that she could play with, realistically done and based on the actual flasks and fuels of the 14th century. We found interesting things, such as the fact that you can focus the beam on the floor just by changing the oil level.

Another project was for a very important influence in my life – my wife, Virginia Badler. She’s an archaeologist. Her archaeological problems became interesting problems for me because I saw that they weren’t being solved very well by other people. Again, that had to do with illumination studies. You need to understand how natural materials reflect light to understand how they really look in sunlight. You can't just guess. We completed that project a couple of years ago. It required recreating accurate models of mudbrick architecture and measuring how mudbricks and soil actually reflected sunlight. So, again, it's not a problem a computer graphics person would necessarily think of but it's a problem that archaeologists would like to solve.

Do you do any work with writers?

Not directly. We collectively believe storytelling is key to future interactive experiences. It's certainly my opinion, and that of many others, that showing people computer graphics in the absence of any story doesn’t work.

I think that with the emergence of a lot of Virtual Reality display systems, the experiences will need to have some kind of story - once people get out of the mode of shooting as many zombies as they can. We haven't quite figured out what those stories should be, though.

I work with an anthropology professor, Clark Erickson, with whom I teach a class. He studies South America, especially South America before Columbus, because there was a huge population of individuals there who were practically annihilated by diseases after the Europeans arrived.

His main interest is in the pre-Columbian Amazon. We have a reconstruction of a portion of the Baures culture in modern Bolivia. The Baures were necessarily invested in modifying their terrain, because in that part of the Amazon basin it was flooded half the year and dry the other half. In order to survive, they built earthworks, waterworks, raised causeways, fish farms, and all sorts of things to control the landscape. We reconstructed all that using computer graphics. You can explore it in a video.

The next step is happening this summer. We have a small seed grant from the university to put together an interactive virtual experience. One of my undergraduates, Emiliya Al Yafei, had the idea of exploring Baures as a virtual small child walking around her village, asking questions about how and why people do things: Where did the maize come from? Why are we eating so much fish? How do we rebuild after a flood? How do we build our houses?

We know these questions and their answers are important. Writing them is not a skill I particularly possess, and we haven't quite found the right person to write this interactive experience yet. Nonetheless, we're going to try.

You can learn more about Professor Erickson’s project in Penn’s Omnia Magazine – A Virtual World for an Ancient Society.

Recently, “Erickson and Norman Badler, Rachleff Family Professor of Computer and Information Science in Penn Engineering, taught Visualizing the Past/Peopling the Past. In the class, students helped populate a virtual world that will ultimately show a thriving landscape complete with hundreds of people going about their everyday lives—walking, paddling canoes, hunting, making fires, socializing, cutting down trees, tending fields, and sleeping in hammocks.” 

“One of the most interesting things was the social aspect of presentation of ancient people and how a lot of that is influenced by our social politics,” says Julia Bell, C’19. “We would talk about whether virtual reality models in museums could make things more or less democratic.”

“Badler emphasizes that the students do not use 3-D scanning to create the digital versions of the objects. Instead, students must study every detail of the item and figure out how to build the 3-D model from scratch, a process that parallels how the object was originally put together.”


Can you talk about the various trade-offs between working with the private sector first and the government second?

The corporate contacts we've had over the last three decades have been very focused on research and development. Basically, the agreement is that there's a certain topic and expertise we can deliver; then we can develop software for their needs. It's a matter of integrating abilities. They recognize they don't have the in-house talent and skill set, so it's efficient for them to delegate these missing components to us. These research contracts work pretty well, but it's a buckshot approach: sometimes we get a contract, sometimes we don't. We can't count on any kind of reliable long-term funding.

It is very difficult to get reasonable funding from the entertainment industry. They will argue, and probably honestly, that the computer graphics practitioners at their studios are actually overworked relative to what they're paid. 

So, by the time you look at a blockbuster movie that makes mega dollars, the graphics have been done essentially on a fixed fee. It's done. The argument is that there's no leeway - nothing to give away from any profit. Whether that's true or not, we never received funding from Pixar, Disney, or any other studio.

Are some of the things you develop sold to private industry? I know, for example, that there's a technology transfer department at Penn. What are some of the logistics? How do you balance the creative and education side with the possibility of making money for the university?

Our intention is not to make money. That's not our goal. Rather, for us, computer graphics is a door that leads to interdisciplinarity. I'm pretty certain about this claim: computer graphics, as a technology, has the shortest lifecycle from invention to deployment of any industry. That, coupled with the fact that computer graphics people love to show off their work, means that as soon as something is published, anyone can recode and use it.

The cycle time from publication to deployment of computer graphics in somebody else's software in a movie is about six months. That doesn’t leave any room in the cycle for the traditional technology development that's patented or protected. Trying to keep the technology to ourselves so we charge money just doesn't work.

My best example of that is a student here in the late 1990s. The student, Nick Foster, did his Ph.D. on fluid simulation - simulating water in tanks. So, he gets his Ph.D. and moves to DreamWorks. DreamWorks is in the middle of making the movie Antz, whose climax is a flood in the ant colony. The original script called for artists to draw the water with traditional methods. They hired Nick and said: hey Nick, you know, fluid simulation might actually work here, right? Nick recodes his fluid solver for DreamWorks. It looks great. It appears in the movie Antz six months later.

A year later, Nick gets a Motion Picture Academy Technical Achievement Award. So, here's a guy who was literally rocketed to the pinnacle with his own technology in one year. That's not atypical. 

So, back to the question. Yes. We work with the tech transfer office. When we think we have good ideas, we file disclosures. But 90 percent of those disclosures never amount to anything. 

We've been very fortunate. At least two of our lab products have become corporate products. The first is the Jack human simulation software that was developed here years ago. The second is our MarchingOrder graduation software concept.

When I became Associate Dean of the Engineering school, I had a great fear (full circle here) that I now had to face going to every graduation ceremony the Engineering school had. Since high school, I hadn’t gone to any graduation ceremonies.

I feared I couldn't sit through these things. I then used that fear to think that maybe the ceremonies could be more interesting if we had a big screen that displayed a picture, a message, and maybe the major every time a student went up to receive a diploma. This way, the 99% of families waiting for their own kid's name to be called could be entertained seeing who the other students were.

My younger son David, who was a student here, and a couple of his Wharton colleagues put together the software. They started running it here at Penn.

Then my son decided to opt out of the company. But one of the colleagues, Tyeler Mullins, still runs the company. The software, called MarchingOrder, is now used at graduation ceremonies all over the country. 

Can you talk about your involvement with the military and government?

Our first real government contract was with NASA. I worked with NASA for almost 30 years. They were fantastic. Everyone always felt that with NASA there was some humanistic and scientific good because we were actually involved in the space program. The reason we were involved is that the shuttle program had just started in the late 70s. They were mandated by Congress to make the shuttle flights accessible to nearly every American who was qualified to fly. 

Well, when you have criteria like that, all of a sudden you need to cover a full range - from very small men and women to big athletic tall guys. That's not how aircraft are designed. They're designed for a very tight fit. So, they came to us and we worked to create a system to check out the ergonomics of these shuttle spaces.

After about a year, we delivered the first workable product. There were rough edges, but they actually used it for the shuttle toilet redesign. The last time I visited NASA, they were still using a later generation of the software because it was really solving problems on the spaceflight side.

On the military side, the work we did for NASA was noticed through our publications. It came to the attention of folks in the Army, Navy, and Air Force, and their contractors. They gradually all realized that our human modeling software, called “Jack”, could help design their vehicles and workplaces. Our Jack software was supported by all these organizations.

There was a problem, though. It was good to work with the military because there was steady research income. But sometimes they would say: if I pay for it, how come I don't own it? I'd have to say: well, the Army is paying for this feature, the Air Force is paying for that feature, and the Navy is paying for this feature. And, oh, by the way, you get all their features. That's where Jack had its origins.

We then started to use our Jack software to attract some industrial money. We got to work with some very good companies - Lockheed Martin, John Deere, Caterpillar, General Motors, and, I think, Ford. They were all paying license fees, though we weren't getting rich off them.

By 1996, Jack finally had to leave the university. If you had asked me in 1996 what I did, I would honestly have answered: I run a small business inside a university. Jack had become a massive undertaking. We had four full-time staff, 24 Ph.D. students, three international distributors, and an annual users' meeting. Even though we were getting good money, it had just become a software enterprise, and the university was not the place for it anymore.

I think you sort of answered this question before – but what are the corporations and government looking for when they hire students?

Students with advanced degrees end up going into academia or into companies that actually do have a research enterprise. The companies in that category have changed a lot over the years. It used to be Bell Labs and IBM were big consumers of Ph.D. graduates. Now, companies like Google or Apple are hiring them. Many Ph.D. candidates do want to teach. They’re in it for the long run.

Do you work with any Philadelphia institutions, such as the Franklin Institute Science Museum or the Philadelphia Museum of Art?

No. We had one joint project with the Franklin Institute many, many years ago. We have an open mind about those things, but there are problems, such as working through the intellectual property issues.

It's easier to say: look, I've got a whole pool of great students here. If you pay them, we'll provide expertise, and then you can own everything.

That's worked very successfully with Children's Hospital. One of our students, Warren Longmire, went to develop games for autistic children. They just hired the guy and were really happy with his work. That kind of model works better for us. 
When I first started working with computer programs, I began with a broad outline and then filled in the details. Are there current computer programming methodologies?

I think there are probably three standard ones. The first is reuse: if someone has already written the software you need, go and buy it, or if they published it, reproduce it. Ask whether there is a solution one can use or adapt.

The second is that most computer graphics programming is done in C++. Our DMD students need to learn C++ while regular computer science students may not need to learn that particular language. C++ is kind of the lingua franca for computer graphics. 

The third paradigm is very interesting. It's called visual programming. The best example of that is the Blueprint interface system for the Unreal game engine. Blueprint uses a visual paradigm where you have blocks and literally connect them with curved lines. You build your program visually by laying out blocks that generate data, provide timing, or give you color. Then you wire those blocks together - in a dataflow organization - to actually generate animated 3D graphics on the screen. It's very different from the linear textual programming paradigm that we teach in computer science.
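
To make the contrast with textual programming concrete, here is a minimal C++ sketch of the dataflow idea - illustrative only, not Unreal's actual Blueprint API. Each "block" is a node that computes its output by pulling data from the nodes wired into it:

    #include <functional>
    #include <iostream>

    // Hypothetical dataflow node: each block produces a value on demand.
    struct Node {
        std::function<float()> compute;
    };

    int main() {
        // A block that generates data (here, a constant time value).
        Node time{[] { return 0.5f; }};

        // A block whose input is "wired" to the time block; the captured
        // reference plays the role of Blueprint's curved connecting line.
        Node intensity{[&] { return time.compute() * 2.0f; }};

        // A sink block that consumes the graph's output each frame.
        Node display{[&] {
            float v = intensity.compute();
            std::cout << "pixel intensity: " << v << "\n";
            return v;
        }};

        display.compute();  // evaluating the sink pulls data through the graph
    }

In a visual system, laying out the blocks and drawing the wires replaces writing this text; the underlying evaluation order is the same.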

Can you explain the interplay between software, hardware, electronics, and computer theory?

I will restrict my response to computer graphics. The technical folks in computer graphics have pushed very hard for efficiency. If you can create one image in five minutes, that's not good enough. You want to create that image in five seconds. And even that's not good enough. You want to create it in a 50th of a second - 50 frames per second - so you can start to animate it. There's constant pressure to make things faster.

Concomitant with that is the architecture of computer graphics. The algorithms that make images have become much more widely understood and have coalesced into a few paradigms. The principal paradigm ends up looking like a pipeline. Very early on - 1981, over 35 years ago - embedding computer graphics algorithms in hardware started to be a thing. That was carried forward by Silicon Graphics for a long time.

Then someone smart said: well, we don't need to buy all this other stuff - why don't we just embed the computer graphics on a board? The first computer graphics boards were built and slotted into IBM PCs: a self-contained board that you could just slide in.

That sounded the death knell for special-purpose graphics hardware. It led to the development of graphics processing units (GPUs). They were initially optimized just to do graphics, but they have since been generalized to do all sorts of other computations.

A principal company that makes these graphics boards is NVIDIA, which now thinks of itself more as an AI company. Their hardware is used in self-driving vehicles because they're processing images. They do deep learning. The hardware is used to mine cryptocurrencies, too.

But graphics was the driving reason for this hardware. The graphics processing unit as a piece of hardware is programmable. Its architecture is different from that of a CPU (Central Processing Unit), which is used for general computation.

You do have to learn how to talk to it in the right way, but the capabilities are phenomenal. It's truly a parallel processor. It's the most powerful computer in your workstation - more powerful than the CPU. Many supercomputers are actually built by just taking hordes of GPUs to gain parallelism. So, we've impacted the field that way.
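
As a rough illustration of the data-parallel style GPUs are built for, here is a minimal C++ sketch. It runs on the CPU, using the standard C++17 parallel algorithms as a stand-in for a real GPU kernel; the point is the shape of the work - the same small operation applied to every pixel independently:

    #include <algorithm>
    #include <execution>
    #include <iostream>
    #include <vector>

    int main() {
        // One brightness value per pixel of a 1080p image.
        std::vector<float> brightness(1920 * 1080, 0.25f);

        // Each pixel is processed independently, so the work can be spread
        // across many cores - or, on a GPU, thousands of tiny processors.
        std::transform(std::execution::par,
                       brightness.begin(), brightness.end(),
                       brightness.begin(),
                       [](float b) { return std::min(1.0f, b * 2.0f); });

        std::cout << "first pixel: " << brightness.front() << "\n";  // 0.5
    }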

The answer to your question, therefore, is that it's important for our students to understand that there are CPUs and GPUs, and that the two need to communicate and interact. We have at least one course here at Penn, taught by Patrick Cozzi, that gets deeply into programming the GPU.

Can you explain your work with crowd simulation? What applications are there for it?

We did work in this area for about 10 years but we’ve moved on. We moved on because crowds are too homogeneous. We’re more interested in places where people are more differentiated.

The crowd simulations were generally used for evacuation and disaster panic. That's the most common real application of crowd simulation.

Crowds are an interesting problem from a computer science perspective. You don't want individuals bumping into or passing through one another, or passing through walls. So, there are lots of computational problems that can blow up when you have large numbers of individuals.
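
As a minimal sketch of why the computation can blow up - illustrative C++, not the lab's actual software - consider the naive separation test, which compares every agent against every other agent, so the work grows quadratically with crowd size:

    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Agent { float x, y; };

    // Counts pairs of agents closer than minDist: n*(n-1)/2 comparisons.
    int countTooClose(const std::vector<Agent>& crowd, float minDist) {
        int pairs = 0;
        for (std::size_t i = 0; i < crowd.size(); ++i)
            for (std::size_t j = i + 1; j < crowd.size(); ++j) {
                float dx = crowd[i].x - crowd[j].x;
                float dy = crowd[i].y - crowd[j].y;
                if (std::sqrt(dx * dx + dy * dy) < minDist) ++pairs;
            }
        return pairs;
    }

    int main() {
        // 1,000 agents already means about half a million pairwise checks
        // per simulation step.
        std::vector<Agent> crowd(1000, Agent{0.0f, 0.0f});
        std::printf("overlapping pairs: %d\n", countTooClose(crowd, 0.5f));
    }

Real crowd simulators avoid the quadratic cost with spatial data structures such as uniform grids, which only test nearby agents against one another.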

After producing four books on this topic, I decided to leave it aside.

Was your software ever used for the medical school?

Almost. What we had, briefly, was a project from the Advanced Research Projects Agency to build a simulated medic. The idea was to help train medics for their first encounter with a real injury. We collaborated with a top Philadelphia trauma surgeon, John Clarke. He was our expert. We built a physiological model of a human that could respond to injury and interventions.

The setup was basically this: you would see this guy lying on the ground. The medic would have to figure out what tests to do and what interventions to apply in order to stabilize him. You got five minutes. And if you didn't do the right test or the right stabilization, your patient would die.

It was a real nail-biter for people who tried it. We did speed it up, however, to make the experience even more compelling. For example, if you failed to notice that the guy had a tension pneumothorax, he would suffocate. You needed to put that chest tube in to let the air out. We could even change the skin color, show distended veins, and animate the chest breathing abnormally.

That project lasted for about three years and then it was moved to the Sandia National Laboratory. We were then no longer part of it though they continued it for years. 

How much of what you do with computer graphics impacts or is impacted by other computer fields? Let’s start with artificial intelligence. 

My own position in the world has often been in-between computer graphics and artificial intelligence.

There's a long history of Penn’s computer science department working in collaboration with other departments like psychology, linguistics and philosophy to do what's generally called Cognitive Science work. For many years, computer graphics was part of that broader community. 

Many of my closest collaborators have been natural language understanding people: Bonnie Webber, Mark Steedman, and Martha Palmer. I also maintain an ongoing collaboration in that area with Ani Nenkova.

Ultimately, we are simply interested in why humans do what they do. You can't study that in a vacuum.

So even though I'm not into formal reasoning and deep learning (a newer thing in AI), throughout my whole career we have often had papers on what are called animated agents or embodied conversational agents. These are human- or character-like virtual beings that you can talk to or interact with. We have a long and steady thread in that domain. It got us involved in facial animation, eye movement, micro-expressions, and understanding gesture.

So, yes, I think that AI connects really well but it's so big and machine learning-oriented that I do keep at a distance.

What about computer graphics and 3D printing? 

I don't do anything in that vein, though another faculty member at Penn does work in 3D printing. In my opinion, 3D printing has benefited greatly from the research done in computer graphics. The difference is that our products are just images, while they print 3D “images” using mechanical processes. Some of the nicest work we see at the annual computer graphics conference involves fabrication. There's a tight link with the broad computer graphics 3D modeling community, though I don't work with 3D printing myself.

Are there any other computing fields that intersect with computer graphics?

Computer graphics and computer vision overlap a lot. The primary reason is that computer graphics at its core is about building 3D representations of stuff. A big component of computer vision is having sensors like cameras look at the world and try to build 3D models of stuff. So, computer vision becomes a very convenient though challenging way of getting complicated stuff into a graphics representation for further reasoning and use. Those two areas are very close. 

How many women are now in the program? 

That's a great question. DMD is two-thirds women. Our master’s program is about 35% women. Of the Ph.D. students I’ve worked with, about 20% have been women.

Our DMD Associate Director, Amy Calhoun, is a great asset, both as a former Penn admissions officer and as our liaison to industry recruiters.

We're very encouraged that the DMD numbers show a good majority of women. I think it helps that being artistic is an important part of our program - important enough to motivate learning the technology. What we say with DMD is that there's a main course of computer science, but you get a side dish, which is the art. It's a nice way for students to show off their artistic and creative sides.

Is the Digital Media Design program still working with the communications school?

That's an historical artifact. We still suggest that people take a communications course. This goes back to something I said earlier which is if you just do computer graphics for its own sake, it doesn't really have any storytelling power. If you're going to do computer graphics, there ought to be a reason and that reason is to communicate something.

Any advice for high school students who want to get into your programs?

As I said at the start - figure out what you're afraid of. I’m encouraged that the exposure to computer graphics now is fairly ubiquitous.

When I speak about the difference between computer engineering and computer science, I say well look at your phone. If you're more interested in running apps on it and figuring out how the apps work, then you're probably going to be a computer scientist. If you're more interested in how that thing works at all, you’re a computer engineer. 

Do you have any thoughts on using computer graphics to envision a future 30-50 years down the road?

When we had the turn of the millennium, I was asked to write a short piece for the major computing magazine, the Communications of the Association for Computing Machinery (ACM). They were doing a series of articles. They asked me to write on what computer animation would be like in a thousand years. That's impossible to answer, so I cut it back to 50 years or so.

But there is a vision, and each year I see it becoming more and more realistic, though we're not quite there yet. It's my belief, though not my idea, that at some point displays will become just about as cheap as construction sheetrock. At that point, there'll be no reason to build buildings and walls the way we do now. Instead of paying for painting and design, we'll just put up display screens everywhere.

First, we can constantly change our surroundings - changing decor as our moods or the seasons change. I want a window there; no, I want that window here. The interesting part of that is not actually the technology. The interesting part is how to make all this work so that it becomes a livable environment. I don't quite want to do Minority Report stuff, where I have to go up to a display wall and start moving things around manually. And it's not quite the Alexa model, where I just speak - let's do this - and it just happens.

The environment itself must be much more aware of my everyday activities - where I am and what I'm doing - so that it can be responsive to my needs too.

How many times have you taken a call, walked into another room, and realized you left behind the information you were looking at? Well, why doesn't that information just follow you as you take your call? These are all user interface issues, and I think they're interesting. My colleague Stephen Lane and I have talked about setting up a room like this, but it's still too expensive. It would be a lot of fun to realize it.

So, I think that in the future we will have all this graphics technology in our households and it will be the 3D embodiment of Alexa anywhere and everywhere. Maybe she (or he) will be a virtual embodied being as well.

Do you talk to colleagues throughout the world?

We are an international community. Many published papers cross international boundaries. I have colleagues in London, Brazil, and across Europe and Asia.



Copyright Joseph Glantz 2018