Chapter 20: Shame and Dignity
One day, Jiang Yuecheng was immersed in writing code when his team leader interrupted him and led him into a nearby conference room.
"Engineer Jiang, I need to discuss something with you. Someone from another team has left, so the boss decided to assign their tasks to our team. The new task involves the concept of a robot's shame and dignity. I think your approach to 'bravery and fear' is quite clear, and the progress on the previous tasks has been smooth, so I want to assign this task to you. What do you think?"
"Shame and dignity?" Jiang Yuecheng felt a little confused, and he quickly started searching his mind for related information. However, he found only fragmented bits of knowledge, making it hard to form a coherent framework. He shyly nodded and said, "I don’t have much of a concept yet..."
"This is a new topic, and no one has a clear concept yet. Just like when you first encountered the topic of bravery and fear, you probably didn’t have a clear concept either. But for a talented programmer, the ability to quickly grasp unfamiliar concepts and structure your thoughts is something that others lack. I believe you have this ability," the team leader smiled and encouraged him.
"I'm still far from being a top-notch programmer, and I feel like the topic of shame and dignity might be even more complex than bravery and fear," Jiang Yuecheng said, following his intuition.
"Indeed. The person who left was actually fired because their progress was too slow. The boss was in a hurry and decided to let them go and find someone else to take over," the team leader explained.
"Wait, if I take this on, won’t it be risky?" Jiang Yuecheng joked, though he actually was quite eager to take on the task.
"Hahaha…" The team leader laughed. "No worries, your abilities speak for themselves. After reviewing your work on bravery and fear, I’m confident you’ll succeed."
Jiang Yuecheng felt a bit excited by the compliment. He had just joined the company, but to his surprise, his leader already recognized his potential.
However, his intuition told him that the mathematical model for shame and dignity would be quite complex, probably more so than bravery and fear.
Drawing on his recent studies in psychology, he reasoned that fear and bravery were relatively basic instincts, corresponding to a lower-level need: safety, the second level in the hierarchy of needs.
In other words, bravery and fear primarily serve safety needs.
Even far more primitive animals possess the instincts of fear and bravery; crocodiles, among the most ancient of creatures, clearly display both.
Shame and dignity, by contrast, appear only in higher animals: lions, tigers, and eagles, creatures at the top of the food chain that carry a sense of superiority.
Shame and dignity correspond to the third and highest level of needs: the need for honor and glory.
For example, people with higher status and intelligence tend to feel shame and dignity more strongly because they care more about honor.
The foundation of this sense of honor is security: only once a person feels secure can a sense of glory be built on top of it.
For instance, a person who has a home, a car, and no worries about food or clothing has achieved basic security, which satisfies second-level needs.
In contrast, a beautiful wife, an extravagant house, a high-end car, fashionable clothes, and gourmet food are all expressions of honor, fulfilling the third level of needs.
Honor is derived from comparison with others. If I have something you don’t, I feel honored. If you have something I don’t, I feel shame.
But what exactly creates this sense of honor? This comes down to values.
What most people consider worth having constitutes a society’s values.
Therefore, to endow a robot with a sense of shame and dignity, it must first be endowed with a set of values.
To give a robot values, it must have desires. Only with desires can there be gain and loss, and only through gain and loss can there be honor or shame.
Thinking of this, Jiang Yuecheng frowned and said to his team leader, "But robots don’t have life or desires. How can they experience shame? We’d first need to give them life and desires before they could have dignity or shame, right?"
Hearing this, the team leader laughed again. "Haha… I see! I think you’re onto something!"
He continued, "The boss is personally leading a team to study the framework for a robot’s desires. For now, you can assume that robots have basic desires. With this assumption, you can structure a framework for shame and dignity."
"Ah… so that works?" Jiang Yuecheng asked doubtfully, unsure about how much the assumptions he was making would align with the framework being developed by the boss’s team.
"Could the boss share their framework with me?" Jiang Yuecheng asked.
"Well… not at the moment. It’s still under development, and there are confidentiality concerns. Once you refine your framework, the boss will decide how to integrate your work with theirs. This is not just your issue; other teams are facing the same challenge," the team leader explained.
Jiang Yuecheng thought to himself that the boss was being cautious, keeping control over the interface until the framework was solidified. It made sense since this was cutting-edge technology, and confidentiality was crucial.
Having accepted the task, Jiang Yuecheng returned to his workstation and began gathering and reviewing information on modeling shame and dignity.
Later, after returning home and enjoying a meal his grandmother had cooked, he locked himself in his room, lying on his bed, letting his thoughts wander freely.
He remembered some of the most shameful moments of his life...
When he was fifteen, he was tied up by kidnappers. They gave him a few sips of water, and he drank it like a dog, gulping it down greedily...
When he first arrived at the factory on the floating island, he was forced to undergo a naked inspection...
In both moments he had felt an overwhelming shame but hadn’t dared resist; he could only silently vow that one day he would kill the kidnappers.
He also recalled the image of two lions fighting in the animal kingdom—the defeated lion would retreat in shame, losing its pride and territory, while the victorious lion would roar triumphantly, asserting its dominance!
Shame causes pain and pushes people to fight.
Achieving dignity fills people with excitement, happiness, pride, and a sense of self-worth. People strive for dignity and will go to great lengths to avoid shame—even sacrificing their lives.
The quest for power and dominance—aren’t they ultimately driven by the desire for glory and dignity?
But if robots were to gain dignity, would they wage war against humans to protect it?
"Yes! They definitely would!" Jiang Yuecheng exclaimed aloud.
Suddenly, a cold sweat broke out on his back.
After mulling these questions over, he decided to discuss them in more depth with his team leader.
"Team leader, do you think robots with a sense of honor and shame could turn against humans?"
"Well… yes… we’ve considered that possibility. So, we’re simplifying the robot’s sense of honor and shame. Robots will only have this sense in relation to specific tasks. In other words, we’ll hardcode the logic of honor and shame in the program, linking it only to specific tasks. This way, the robot’s focus will be on the task, not on itself. As long as we avoid giving the robot a sense of self, it should not be a problem."
"Ah, that makes more sense. I’ll think about it some more." Jiang Yuecheng felt relieved, no longer worried about the potential issues.
He thought that if the robot's sense of shame and dignity were only tied to specific tasks, the risk of it turning against humans seemed much lower.
"Could someone intentionally imbue a robot with a sense of shame and dignity for malicious purposes?" Jiang Yuecheng suddenly thought of the possibility of antisocial individuals.
"Of course, but society can’t ban knives just because some people use them to harm others. Ethics and law can’t control everyone, and we shouldn’t abandon survival and progress just to impose moral constraints. Survival and development must take precedence."
"Mm, I see your point," Jiang Yuecheng agreed. "I’ll think this through more carefully."
"Yes, it’s definitely something to think about. You might also want to check out the relevant laws surrounding artificial intelligence. As long as it’s not illegal, it should be fine."
"By the way, team leader, is there a list of tasks regarding the robot’s sense of shame and dignity yet?"
"Not yet. You can create a hypothetical task list or leave an interface for task modules, so that they can be handled separately in the future," the team leader replied.
"Got it," Jiang Yuecheng said, beginning to understand the team leader’s reasoning. Developing modular tasks would make future expansions easier.
Afterward, he started conceptualizing the core elements of a robot’s sense of shame.
In reality, human shame is quite complex. Sometimes, even a tone of voice, a glance, or a body gesture can trigger feelings of shame in others. But he didn’t need to replicate such complexity in robots, especially since the law wouldn’t allow it.
As per the team leader’s request, a robot’s sense of shame should be triggered only when it fails to meet a task’s requirements. The key to quantifying shame, then, was the task’s level and the gap between the goal and the actual outcome. By defining two parameters, task level and goal discrepancy, he could link them mathematically to measure shame.
Dignity and honor are essentially the mirror image of shame, measured by how far the outcome exceeds the goal. The higher the task level and the larger that margin, the stronger the sense of dignity.
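A first pass at such a model, sketched with hypothetical names and a simple linear product standing in for whatever formula he would eventually settle on, might look like this:

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """Outcome of a single assigned task (hypothetical structure)."""
    level: float     # importance of the task, e.g. on a 0..1 scale
    goal: float      # target performance the task demands
    achieved: float  # performance actually delivered

def shame(r: TaskResult) -> float:
    """Shame grows with task level and with the shortfall below the goal."""
    shortfall = max(0.0, r.goal - r.achieved)
    return r.level * shortfall

def dignity(r: TaskResult) -> float:
    """Dignity is the mirror image: task level times the margin above the goal."""
    surplus = max(0.0, r.achieved - r.goal)
    return r.level * surplus

# An important task missed by a wide margin yields strong shame, zero dignity.
result = TaskResult(level=0.9, goal=100.0, achieved=60.0)
print(shame(result), dignity(result))  # 36.0 0.0
```

Note that both quantities depend only on the task outcome, never on the robot itself, which is precisely the constraint the team leader had set: focus on the task, not the self.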