Liberatory Business with Simone Seol

65. Responsible AI stewardship: a Buddhist perspective (with Billy Seol)

Simone Grace Seol



My brother Billy Seol, a software engineer and Buddhist life coach, is one of the sharpest thinkers I know.

He has thoughts about AI that I haven't seen anywhere else, and they really enlightened me. I wanted to share them with you.

Listen to hear more about:

  • How we are exiting the Creative Age, and entering the Generative Age
  • The new group of people who will be marginalized by AI, and how we should protect them
  • How AI is being shaped by the Buddhist idea of karma, both at the individual and social levels
  • Measures we can take to reduce our digital footprint

If you've been trying to figure out how to be a conscious, ethical human being in this moment, I think you're gonna love this episode.

______

Connect with Billy Seol at: https://www.julylifecoach.com/

Welcome to another episode of Liberatory Business. I'm your host, Simone Seol. Thank you so much for listening.

Over the last few episodes, I've been talking about AI — we just talked to Tallulah la Merle — and we're just gonna keep going with the AI theme. Because as a human being living through this time, as someone whose work intimately concerns humanity and the question of how to steward ourselves through this dizzying, discombobulating moment, I can't stop thinking about AI and having conversations about it with smart people.

Today I'm talking to one of my favorite smart people: my own brother Billy Seol, who is a software engineer who has worked on some of the most cutting-edge technology in Silicon Valley. He also happens to be a life coach with a deep Buddhist practice who sees everything through a profound Buddhist lens.

So when I asked him what his thoughts are about AI, what I heard was so eye-opening and interesting that I really wanted to bring you that conversation. You're gonna hear about how we are exiting what he calls the creative age and entering into a generative age — and what that means for you and me. We're talking about AI and karma, about the new underclass that Billy thinks this technology is going to create, and about being intentional about our digital footprints as an ethical practice. And largely, what it means to be a conscious, responsible human being living through this moment.

I hope you enjoy the conversation.

From the creative age to the generative age

Billy: The way I think of it — it's my term — is a post-creative world. The creative world was about creating things that didn't exist on the internet and putting them there. It didn't really require a big depth of knowledge or proficiency. It was just, hey, the internet is so new, so I'm gonna shoot a YouTube video of my dog. Why? Because such a video doesn't exist on the internet yet.

So we had that whole phase of people wanting to put stuff on the internet. That's what AI is trained off of.

Now you have all these things that AI can basically replicate. So humans are actually reacting to this by, ironically, being even more generative. More generative than generative AI, even.

Right now, most isolated data points and knowledge points exist on the internet. The internet pretty much knows everything that there is to know, practically. It's good at combining these things and creating something out of them based on the prompt. So I could say, hey ChatGPT, draw me the parallel between Buddha and Hitler. That's a really random query, but it knows a lot about Hitler. It knows a lot about Buddha. So it's probably able to draw some parallels.

But now we're in a generative period where humans are trying to create something that can't be replicated by AI — by focusing on what AI is weak at. For example, there's an Instagram video I found really interesting: bad handwriting. AI can't replicate that.

Simone: Wow. Your handwriting is terrible. Great news for you.

Billy: Mine is very — yeah, it's amazing. And for visual artists, layered mediums are very difficult for AI to generate. For example, I want to put a stamp on a newsletter. AI is like, whoa, how do I get that aesthetic? It doesn't know how to do it. But to humans it's so obvious.

Simone: What do you mean stamp on a newsletter?

Billy: So when you put a "good job" stamp on something — like a star sticker — on white paper, AI knows, oh yeah, I can make a drawing of a stamp or a sticker. But then you have the newspaper, and you cover half of the newspaper with transparent paper, then you put a stamp on top of it. So it's this layered look. To humans, it's so obvious what it is. But AI is like, whoa, I have no idea how to create something like that.

Simone: Isn't that just a question of technological progress? Don't you feel like AI is gonna catch up to that really soon?

Billy: It will. And then humans will find another weak point.

Simone: Ah, so it's like a race — who can get ahead of the other?

Billy: Right. AI wants to be as human as possible. Humans want to be not replicable by technology. But the more humans do that, ironically, the more it feeds AI about what it means to be human. So it's gonna keep going like that.

But here's the key differential point. I mentioned earlier that AI is largely combinatory — it knows a lot about Hitler, it knows a lot about the Buddha, that's why it's able to combine them. But this actually applies to humans too. Every new thing is composed of old things. You can't have an actually new thing, because it has to be built from something that exists.

Simone: Right. There's nothing new under the sun.

Billy: Right. But if that's true — if both AI and humans are combinatory in the same basic way — aren't they the same?

So here's the part that's important. There's a thing in computer science where we say: given a monkey with a typewriter and infinite time, it'll write the entire works of Shakespeare. Give a monkey ten months, it'll probably have one instance of "the." Give a monkey five years, it'll eventually stumble upon "to be or not to be." Just based on probability.
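Billy's monkey-with-a-typewriter point is a standard probability argument, and it can be sketched as a tiny simulation. This is my illustration, not something from the episode; `keystrokes_until` is a hypothetical helper that counts random keystrokes until a target string appears.

```python
import random

def keystrokes_until(target: str,
                     alphabet: str = "abcdefghijklmnopqrstuvwxyz ",
                     seed: int = 0) -> int:
    """Count how many uniformly random keystrokes a 'monkey'
    needs before `target` appears in the typed stream."""
    rng = random.Random(seed)
    window = ""
    count = 0
    while True:
        # Keep only the last len(target) characters typed so far.
        window = (window + rng.choice(alphabet))[-len(target):]
        count += 1
        if window == target:
            return count

# A 3-letter word like "the" needs on the order of 27**3 ≈ 20,000
# random keystrokes on average; a full Shakespeare passage would
# take an astronomically long time. Short strings fall out of pure
# randomness quickly; long meaningful ones effectively never do.
```

The point of the sketch is Billy's: randomness alone eventually produces any string, but nothing in the process knows which strings are meaningful.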

So to AI, yes, the output is combinatory — but it doesn't mean anything. Because it doesn't know what meaning is until a trained dataset says, hey, this is meaningful. But we have an embedded, innate value determiner. This is valuable to us. This is meaningful to us.

Simone: Like every human being just knows that.

Billy: Right. And ironically — related to the topic of Buddhism — this is what gives rise to binaries. This is what gives rise to suffering. Because we all want to do meaningful things, which has to mean there are meaningless things. And so we keep chasing the binary, and so on and so forth.

So suffering is a very human experience. But at the same time, the root cause of suffering is exactly what makes us different from AI. No matter how advanced it is, it's that we attribute meaning to things.

Humans have the capacity, once they see the dharma, once they're enlightened, to see beyond their perception of meaning. Yes, I know that cockroaches are all void and formless. But when I look at a cockroach, I'm still gonna flinch. I can see that the cockroach holds meaning for me while also being meaningless when I look at it through the eyes of the dharma. Humans have this capacity. But to AI, it's just the monkey with a typewriter with infinite time.

Simone: Mm-hmm.

Billy: So this is the post-creative world. This is what I call the generative world.

The new underclass

Billy: With every societal shift, there's a new group of marginalized people. In the past, the strong ruled everything — which means the weak were marginalized. Then came religion. Even the most powerful guy still had to bow to the Pope, which means people who were incompatible with the faith were marginalized. Then came money, so people without capital were suddenly marginalized.

When we were growing up, if anybody said "I'm an influencer," we would have been like, the hell is that? But nowadays, "influencer" and "content creator" are established jobs. Being able to create content is really important. But this means there's a new group of marginalized people — people who are unable to create. People who only consume.

And this new group of marginalized people is now going to evolve in the generative world. People who cannot generate human output are going to be the new marginalized group. People are going to want more and more proof: I am having human experiences.

It'll hit the scam market first. For example, mom's gonna get a text message from a scammer containing a video of me — oh mom, I need a million dollars, I'm trapped in, I don't know, somewhere.

Simone: Our mom would completely fall for that.

Billy: Right. Catfishing is as old as time itself, but now it's gonna be even more convincing. So basically, people in general are going to gravitate more and more towards wanting to make sure they are having human experiences. But if you can't demonstrate that? People will naturally gravitate away from you.

And that is basically going to create a new underclass.

So we need to think about new social safety nets. As health and longevity came along with humanity, we started creating safety nets for health. As capital became important, society started forming more and more safety nets around capital — that's why you have bankruptcy protection. As the rule of law came along, people needed to be able to represent themselves in the court of law — that's why you have public defenders.

So as we move into this new era, we have to start thinking about what kind of social nets we're willing to create.

Simone: So what does that look like?

Billy: A simple one I can think of: some form of recourse for when your identity has been completely compromised. Kind of like witness protection, but for the digital age.

In Korea, you don't really see a lot of clerks working at shops, right? It's all point-of-sale screens. The elderly can't even buy movie tickets or get a hamburger. So more digital literacy is sorely needed as a safety net. Whoever doesn't have it falls into the underclass.

These are some of the things we need to think about as a society. Whether we like it or not, I think we've kind of crossed the line.

Simone: The ship has sailed.

Billy: Yes. It's cool to generate text for awkward social situations, but how do we want to steward the usage of AI as conscious people?

How to use AI responsibly

Simone: That is exactly what I wanted to ask you — from a Buddhist perspective.

Billy: So if you're thinking about using AI responsibly, you have to start thinking about how people who are less advantaged than you can take advantage of what you're doing right now.

These are things that don't even necessarily apply exclusively to AI. If I'm getting good healthcare and I save a lot of money because I have good insurance, maybe it's a good idea to donate some of my savings to third-world countries where there aren't a lot of reliable medical resources. Societally, it's important for the billionaire class to do more of these things — but I like to be the owner of my life. I'm not gonna wait for everyone else to change their minds. I'm gonna start taking responsibility.

So if I am going to use AI, how do I use it for the public good? How do I leave the earth a better place than before I used AI? These are things I like to keep in mind.

Simone: So what's the answer for you?

Billy: For me, I like to reduce the digital footprint of people.

Every once in a while I ask people to delete my emails. Why? Because email storage takes up a lot of space digitally, and everything's copied over so many times.

What a lot of people do is purchase platforms and sign up for things they don't necessarily use. Every time you sign up for a platform, you think it's just that platform, right? But the browser you used, what computer you're on, what location you used to log in, what other sites you visited before coming to that platform — all these cookies multiply and multiply.

So why do I simplify the technology stack for people? Because I want to reduce the global footprint of technology. When I do technical work for people, yes it generates tokens — but if I can help someone delete their Kajabi account, their Squarespace account, their whatever-platform account, that amount of cookies, that amount of cached information can be safely removed.

Simone: It's kinda like trying to be carbon neutral. You're trying to be digital footprint neutral.

Billy: Mm-hmm.

Simone: Interesting. And it's ideal if you can get net negative, right?

Billy: Right. After a certain point when you stop, it could be. That needs to be analyzed over time.

Think about how gyms make money: New Year's resolution, bunch of people sign up, and they never come — and they never cancel. You pay for all these platforms, and every once in a while you log in, which refreshes things, and they send you emails, and when you open the emails it signals back: hey, this person opened the email. So even if you don't use the platform, the footprint you keep generating is enormous, accumulated over time.

If I can help you offload these platforms, it cuts that long remaining tail off cleanly. And it turns out it saves you money too. Win-win.

Another thing: when I onboard people onto my methodology, I don't give people entire computers to work on — I rent a small subset of a computer. Multiple tenants occupy one computer, which ends up saving more digital real estate, so to speak. If you and I both have websites and they used to use two entire computers, now we share a computer with ten other people and achieve the same thing. Which frees up a lot of computers over time.

This is going to reduce — hopefully, if this trend continues — the amount of hardware that companies end up buying. Graphic cards get released, and Google buys ten thousand of them. There are none left for consumers. This happens all the time these days. By reducing digital real estate, you reduce the demand for more and more hardware.

So yes, I am using AI — but for a greater good.

AI and karma

Simone: You said some things about karma earlier. Can you talk about that — the particular Buddhist way of understanding where we are as a society with AI?

Billy: In Buddhism, there's a concept of dependent origination. Everything happens for a reason — that's a greatly oversimplified way of saying it. If I have tomatoes, 100%, there used to be a tomato seed. But it doesn't necessarily work the other way: just because I have tomato seeds doesn't mean I'm going to 100% have tomatoes. The reverse, from tomato back to seed, always holds though.

So we have AI. If you have a problem with AI, something very important to think about is: why do we have it? There are tons of new things that get discovered but don't get used. Why did we come to a state where we find AI valuable?

Even if you personally are like, "I don't think AI is valuable" — the truth is the use rate across the world is staggering. People are voting with their behavior.

It makes a lot of things accessible. Oh my God, my boss is expecting a professional reply but I'm socially awkward — how do I navigate outta this situation? Well, now you can. Oh, I have a job in my second language and I don't understand what this text means and I need to reply to the client. Okay, I can just use that.

As a society, we value ease. So ease is just one example. But as a society, we value a lot of different things that ultimately led to AI. This is why we both say AI is here to stay — because what makes AI valuable in the first place never changed.

Yes, the environmental impact is bad. Yes, people losing their voice and just continuously relying on AI is bad. Yes, it's enabling more data theft by corporations. Those are all things we get to recognize from the present.

But we did nothing to change our existing value system of I want things to be easy. I want things to look prettier than I can make. I want to look good. I want to be more productive. I want to get more things done faster.

And those cultural impulses are exactly what led to the rise of AI.

Simone: So if that doesn't change—

Billy: There is no reason for AI to change. You're just going to notice more problems with it.

So if you want to dismantle AI from within — Operation Valkyrie-style — you have to start thinking about the value system you inherited from society. I.e., karma. And you have to propagate your values to the world, saying: hey, fast progress and beauty and productivity — these aren't the only things that matter.

Simone: It's like saying, "because of the environmental impact and the data theft, I'm against AI" — but in your personal life, you still want things to be easy for yourself. You still want things to look impressive. Then we're at odds. That's what you're saying.

Billy: Exactly. It's like saying, oh my God, the poor animals in the factory farming system, and then going out to eat a big steak. As long as the desire for meat doesn't change, yes, you can point out ten thousand problems with the industry — but if you still wanna look skinny, nothing's gonna change, because those corporations are going to endlessly feed that human desire.

I like to explain karma with changing your dominant hand. Changing your dominant hand requires two things: skill development and proficiency improvement in the new hand, but also — you need to stop using your old dominant hand.

Many people just think about I wanna use my left hand without thinking about how to change their relationship with their right hand.

Why Billy changed his mind about AI

Simone: Okay, I'm a coach. You're a coach. We both love coaching — and when you and I talked a few days ago, you said you're not categorically anti-AI anymore.

Billy: Used to be.

Simone: Used to be! So tell me what changed in your thought process.

Billy: I overestimated and underestimated AI at the same time.

I was unilaterally against it because I value challenge. I like difficult things because they help me see myself, understand myself, experience my life as a human being. I was against AI because I thought if I use it, it's gonna make so many things so much easier — and I didn't want that.

Turns out, some things will still be difficult and challenging even if you use AI. So I greatly overestimated it in that regard.

But I also severely underestimated how much of an unexpected positive impact it can have.

At first I was thinking — simple example — hey, I want to learn how to cook a really nice steak. What should the internal temperature be? How should I season it? So I go in just thinking about steak. But as I'm talking to it, huh, I learned a new knife technique. Huh, I learned a different seasoning technique. Huh, I learned about the qualities of different pans. There are a lot of side effects from the primary objective that I had severely underestimated.

The primary reason I started using AI is that I recognized that stubbornness, that insistence on myself: no, I'm 100% right. And as a Buddhist, we always have to be wary of that kind of certainty and absolutism.

And the most important thing I want to highlight: when you're walking, you get to look at the individual flowers in your neighborhood. You get to see the little cracks in the sidewalk. You get to see the seasons change, a new plant sprouting. These are things you notice when you walk. When you drive, you can't see those things — but you get to see the landscape.

Simone: So it's not that you're not able to see anything anymore, it's that you get to see different things. Like, you can't see that tiny dandelion blooming through the crack in the road, but you get to see a vista.

Billy: Exactly. And when you fly, you can't see any of those things — but you get to see an aerial view. If you go on a rocket ship, you can't see the aerial view of the Grand Canyon. You get to see the earth.

So with every speed, there's a new vantage point.

But if I'm comparing walking to a rocket, it's unimaginably fast. What we're experiencing right now is a sudden whiplash — I was going five miles an hour and now I'm going six hundred miles an hour.

Simone: Right. And so there are all these ramifications from the shock of the sudden change.

Billy: And that is something people will have to get used to. Which brings me to an interesting observation I've been making. I'm noticing a lot of people's personal reactions to AI. I see people tiptoeing around it.

If it's not too much for you... can you do this?

There's a lot of fear around am I allowed to ask this? And that is something I also experienced at the beginning of this month when I started my ventures into AI. Like, wow, am I allowed to ask this? And I think that's the old speed talking. If I'm riding a bike — whoa, can I go a hundred miles an hour? I feel like I would die. But in a car speeding down a California highway, you can go a hundred miles an hour.

Watching yourself use AI

Billy: Another thing I'd like for people to keep in mind as they try to practice more mindful usage of AI: I'm not talking about the coaching or therapy conversations you have with AI, but how you interact with it. When you are able to look at yourself, it can be a great exercise in mindfulness.

I have a funny story. I give a lot of engineering tasks to my AI agent. And the AI agent says, hey, it's done. I go in to verify it. I asked it to do ten things. And it always does like 90% of them — and doesn't do one. And it tells me it's done.

So I'm like, hey, you have to finish things before you tell me you're done. And then I noticed myself over multiple days getting ticked off at this.

Simone: Like, what's wrong with you?

Billy: Right. Other problems too — sometimes it's too slow, sometimes it has an unexpected output. Those things I don't have a problem with. But when it tells me it's done without being completely done, for some reason it pisses me off.

Because I'm so actively watching myself, I find this fascinating. Why does this one get me?

And you know what I realized? When I was working as a software engineer, my manager would give me a bunch of tasks. I'd look at the task list, solve the really difficult one, and be like, fuck yeah, I'm the best. I'd tell my manager, oh, it was so hard, but I did it, it's done. And the manager would look at it and say, what about this part?

Simone: There are three — four other things on the list.

Billy: And I'm like, come on, dude. I did the hard thing. What the fuck? That takes five minutes, it's not even considered work. Give me some credit for the actual difficult work I did.

So now I'm recognizing: oh my God, I'm meeting my karma. Now that I am technically the manager for this AI agent, I'm thinking — oh my God, I was a horrible employee. I kept telling my managers it was done, but it wasn't.

Simone: So what's the lesson for folks?

Billy: Because I was mindful throughout my usage of AI, I got to understand something. In mindfulness, you have to first understand what your calm state feels like. And then when you get overly excited about something, you are not at your calm state. Whether you're overexcited in a good way or you're agitated and angry, whatever.

And when you notice yourself in that altered state while using AI — it's a great opportunity to learn more about yourself that you wouldn't have had otherwise.

Simone: You're always being given information about who you are. And if you don't notice it, the pattern will just keep running as it's been programmed — and you'll never have the chance to interrupt it.

Billy: Right. So that's basically AI and karma in an anecdote.

Societal karma

Simone: You also said some things about karma on a societal level.

Billy: Right — we kind of covered that when we talked about how we want easy and fast and cheap, and what we value. How AI agents reply is going to differ culture by culture. I'm learning French these days, and sometimes when I create French resources for myself to practice, I can notice the tone is different. Same with Korean.

In Korea, people have an allergic reaction to overly pandering, ass-kissing responses.

Simone: Sycophantic. Yeah.

Billy: But at the same time, Korea values a kind of modesty — lowering yourself, being humble. In America, people have a more pronounced yuckiness to it, like, why is it kissing my ass so hard?

So how language-specific AI agents develop is going to be largely shaped by societal karma — societal patterns — and the more data it gets, the more it reinforces the direction it's heading. It's going to amplify existing patterns, which is kind of a dangerous thing too. Marginalized people and marginalized topics will become even more marginalized. Sensationalist and hot topics will be even hotter and more sensational.

Simone: I think more than ever before in history, we need people in calm states.

Billy: All the things that are happening in the world right now are a result of people suffering and trying to resolve it by addressing the symptom instead of finding the root cause. And the root cause — again — is how we attribute meaning to things.

But that is ironically what sets us apart. No matter how much AI mimics us, we give meaning to things. It's a beautiful thing when you can hold that power beautifully. It is a reason to suffer when you are misguided in that power.

Simone: Mic drop. Okay. That feels like a really beautiful place to land. Is there anything you were dying to talk about that we haven't covered?

Social safety nets and the "slop" debate

Billy: Actually, I think we started with the topic I was dying to cover — that we have to start thinking about social safety nets. For people whose identities will be compromised. For people who cannot generate human information because of disabilities or other kinds of impediments. It would be great if we can use AI to create these safety nets, because it gives us more power to do that now. The average human is much more powerful now.

Simone: One of the ideas I got most excited about was the potential of using AI profits to institute universal basic income systems. I know we're probably far from being able to see it instituted at a big scale, but I think that is a really, really exciting possibility.

Billy: I have a deaf friend, and making videos for my friend is so much harder because the captioning system relies on voice. My friend can't speak. So every time my friend makes a reel and signs, I have to go through the process of manually captioning everything, while everybody else just clicks a button and moves on. Compare the volume of materials my friend is able to produce versus what other people produce.

Which reminds me of one topic — sorry — that I'm kind of dying to talk about. Slop.

One person's slop is not necessarily actually slop. Because you never know what that generated output can inspire in another human being. What I generate, somebody can classify as slop. What anybody makes — like a cat video that my dad makes — I can call it slop. I can say, dad, stop making slop.

Simone: Yeah.

Billy: But to my dad, it's a creative output that can inspire him to think more about the art he wants to create.

Simone: That's such a good point. What people are doing nowadays is: anything they don't like or don't agree with, they call AI slop. But sometimes you just don't like it. When a post of mine goes viral — which has been happening a lot recently for some mysterious reason — I get tons of trolls who are like, "AI slop!" But I wrote it. You just don't like it.

I think people don't talk about the moral problems that come with dismissing another person's creative activity. And I am very guilty of this — everything dad makes, I'm like, dad, stop, this is... but you're right. There's a lot of judgment. You don't know what that means to someone, what it's inspiring in them. Everyone has a right to their own value judgments — but it's kind of morally absolutist to say something has no value because it doesn't meet your criteria for value.

Billy: Right. And this comes full circle — because being dismissive of "slop" indicates that even when we're using AI, we want it to look pretty. We want it to be sophisticated and professional.

So our attachment to value systems — that is really the key point I would like for everyone to take away from this episode.

Simone: Thank you so much, Billy. Where can people find you?

Billy: julylifecoach.com — a new website, very streamlined. You can find everything I offer there.

Simone: I still think you should change your branding name to your actual name. Okay, but that's my value system, not yours. So I'm gonna shut up about it for now. Thank you so much again. You are so wise, and I love you. We'll talk to you next time. Bye.

Billy: Love you too. Bye.