The meaning of life: What makes life worth living (in one sentence)

Who invented the question about the ‘meaning of life’?

Although we think of it as some sort of eternal question asked since the dawn of mankind, the first recorded use of the phrase ‘meaning of life’ actually occurred only in 1834. Who was responsible for inventing the phrase? And why did the question become so burning in the 19th century that Schopenhauer, Kierkegaard, Tolstoy and others all made it one of the central questions of their philosophies?

Also, a survey of 134 countries around the world shows that the richer the country, the more likely its citizens are to answer negatively to the question “Do you feel your life has an important purpose or meaning?”

Why is this so? Why do people in wealthier countries find it harder to identify the meaning of their lives? And where can we, members of modern secularized Western societies, find meaning?

Find answers to all these questions in this TEDxHelsinkiUniversity talk I recently gave:

Also, the talk ends with a one-sentence answer to the grand question about the meaning of life.

Education as a surrogate for war in the fight for a more compassionate future: Three ways to enhance human capacity for care and kindness

My grandfather was only 19 when the Soviet Union invaded Finland in 1939 and he was sent to the front line to defend the independence of his home country. He spent most of the next four years in the trenches. What he brought back with him was not only a strong sense of duty towards serving his home country, but also a deep sense of mutuality and care towards his fellow citizens. Living in the trenches with a mix of people from all sorts of social backgrounds gave him an understanding that we humans are all similar and all deserve to be treated with respect. This was a lesson he remembered well when he later became the CEO of Finland’s largest steel company. He was remembered as a boss who treated employees fairly and was willing to take care of their well-being far beyond what the legislation required.

We need the moral equivalent of war. This was what the American philosopher William James called for in a speech delivered at Stanford University in 1906. War had the power to rally people around a common cause and bring a sense of unity and mutual care. As a pacifist, James wanted to find a way to bring these qualities forth without the necessity of finding an enemy to kill. Instead, his home country went through three severe crises: World War I, the Great Depression, and World War II. While these brought devastation and millions of deaths, the period afterwards was indeed one of greater unity: both economically and politically, the United States has never been as committed to mutual welfare as in the decades after the Second World War.

Now, we are living through a period of record-breaking economic inequality combined with increasingly polarized political divisions. Nobody wants a war, so the question becomes: How can we develop a sense of unity and a stronger sense of compassion towards our fellow human beings in a time of peace?

John Dewey, William James’s successor as the leading American philosopher at the beginning of the 20th century, already knew the answer: to enact a societal-level change in values or human character, education is the key. School is the place where you can reach a whole generation and shape what they will become.

Unfortunately, the agenda of the school is nowadays too often shaped by short-term economic interests. In this narrow-minded agenda, it is often forgotten that one of the key tasks of the whole educational system is to grow children into citizens, into adults capable of administering a democratic state. The democratic crisis that we are currently seeing, not only in the US but in other countries as well, is partially due to this neglect in the curriculum.

In order to build citizens, we need to build their character. This truth was already recognized by Aristotle as well as by the founding fathers of the United States. Since WWII, however, we have lost touch with this tradition. Being a citizen has been reduced to having certain rights, instead of also requiring certain virtues.

Thus, to defend democracy, to build citizens, and to strengthen people’s capacity to experience compassion, three things should be made a mandatory part of the curriculum from pre-school to university (adjusted, of course, to the developmental level of the students):

1) Reading books and engaging with other forms of art

Philosopher Martha Nussbaum argues that there is nothing better than a good book for building our capacity to put ourselves in others’ shoes. For her, the value of teaching the humanities lies in the fact that through great art (a good book, movie, play and so forth) we learn to see the world from the point of view of others. My protected middle-class childhood couldn’t be further from the challenges that Baltimore’s inner-city youth must go through. But by watching the award-winning TV series The Wire, I can at least get a glimpse into that life. Even if this understanding is always partial, it expands our moral horizons and grows our capacity for compassion. And compassion for fellow citizens is a necessary requirement for a functioning democracy.

2) Designing meeting spaces across demographics

Given that neighborhoods and other social spaces have become increasingly segregated, we have less exposure to people from social backgrounds different from our own. This is one of the key reasons behind the current ‘empathy gap.’ To counter this development, deliberate encounters between various groups should be built into the curriculum. Our trust and compassion towards others is built in everyday encounters. Accordingly, just the chance to meet others, work on a common project, or engage in sports or arts together will remind us that beyond the surface differences, we are all humans after all. When stereotypes and prejudices have already been cemented, it is hard to push people to meet each other open-heartedly. Thus, the earlier we are able to make people connect, the better.

3) Exercising mindfulness and meditation practices

Based on a growing body of scientific evidence, we nowadays know that practicing mindfulness meditation is associated with various positive outcomes for practitioners’ health and well-being. However, it can also serve as a surprisingly powerful way of cultivating compassion. Mindfulness meditation training can increase participants’ subsequent prosocial behavior, even when delivered by a smartphone app. As regards children and youth, there has been a limited number of high-quality randomized controlled trials (considered the ‘gold standard’ in scientific research) on the topic, but the few existing studies are encouraging: brief forms of mindfulness practice can improve children’s social skills and school-related functioning. For example, a 12-week mindfulness-based Kindness Curriculum in preschool strengthened children’s social competence and social-emotional development while decreasing selfish behavior. The benefits of such programs seem to be especially pronounced for youth with academic or behavioral problems, i.e., the very people who most need support.

Beyond education, these same three practices are, of course, effective in any other context as well. As adults, as citizens, as employees and employers, we need to strengthen compassion. And exercising mindfulness, encountering people from different social backgrounds, and engaging with books and the arts can all help here.

But ultimately, we need even more.

“My religion is compassion.” This is what James Doty, professor of neurosurgery at Stanford University and founder of the Center for Compassion and Altruism Research and Education, declares at the end of his autobiographical book. His life experiences had taught him that what he wants to manifest most through his actions is “a world where people not only did no harm to one another but reached out to help one another.” And he is not alone. The Charter for Compassion was drafted in 2009 and has been signed by over two million people, including such well-known names as Archbishop Desmond Tutu, Nobel Peace Prize laureate Martti Ahtisaari, H.H. the Dalai Lama, and Muhammad Ali. The charter notes how “compassion lies at the heart of all religious, ethical and spiritual traditions” and calls on us to “restore compassion to the centre of morality and religion.” Regular religious practice, be it prayer, a mass or anything else, is a powerful way to remind us of key values and to strengthen the influence of the better angels of our nature.

Thus, what is ultimately needed is to make compassion into a religion. People would be better off if they were part of a community that helped them cultivate their capacity for compassion. Humankind would be better off if most of us were part of such communities. Building such communities and practices, both within existing religions and outside of them, thus holds the potential to help each of us better realize the inherent potential for compassion that we carry as part of our human nature. Building such a future is not only necessary but also (morally equivalent to) a battle worth fighting for.

This post was inspired by the speeches I heard and the conversations I had at the Compassion in the Age of Disruption summit at the University of Edinburgh, which I attended as one of the speakers on December 1st.

As a parent, what you need to know about whether smartphones are destroying the youth

Jean Twenge, a psychology professor at San Diego State University, caused an uproar with her recent article in The Atlantic, in which she argued that “it’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.” While parents and the media quickly caught on to the story, the scientific community has been more skeptical.

In particular, Sarah Cavanagh pointed out in a Psychology Today blog post that there are three problems with Twenge’s arguments, concluding that instead of there being a crisis, “the kids are gonna be ok.”

So who is right: Twenge, who claims that the kids are facing the “worst mental-health crisis in decades,” or Cavanagh, who assures us that there is nothing to worry about, that the kids are gonna be ok?

To answer that question, we need to proceed in two steps: 1) Is there a mental health crisis? 2) What is the most compelling explanation for the potential crisis?

First, let’s look at the statistics on depression. According to the National Survey on Drug Use and Health 2015, there has indeed been a rise in depression that started around 2012. Between 2004 and 2012, there were no big changes in youth depression: from 2006 to 2011, the percentage of youth aged 12 to 17 who had experienced a major depressive episode remained stuck between 7.9% and 8.2%. Then something began to happen, and the percentage has been rising steadily, reaching 12.5% in 2015. That translates into roughly a 50% increase, or about a million more young people suffering from depression.
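To make the arithmetic behind those figures explicit, here is a minimal back-of-the-envelope sketch in Python. Note that the population figure (roughly 25 million Americans aged 12 to 17) is my own rounded assumption for illustration, not a number from the survey:

```python
# Back-of-the-envelope check of the depression figures cited above.
# ASSUMPTION: the population size is a rounded illustrative figure.
ADOLESCENTS_12_TO_17 = 25_000_000  # approx. US population aged 12-17

rate_2011 = 0.082  # share with a major depressive episode, 2011
rate_2015 = 0.125  # share with a major depressive episode, 2015

relative_increase = (rate_2015 - rate_2011) / rate_2011
additional_cases = (rate_2015 - rate_2011) * ADOLESCENTS_12_TO_17

print(f"Relative increase: {relative_increase:.0%}")  # ~52%
print(f"Additional cases:  {additional_cases:,.0f}")  # ~1,075,000
```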

So here Sarah Cavanagh seems to be plainly wrong, and actually behaving quite irresponsibly. Completely ignoring the nationally representative statistics showing clear increases in depression, and the fact that suicide rates for teen girls have hit a 40-year high, she asks us to trust what she states is her “suspicion” that “the kids are gonna be ok.” She should be telling that to the parents of the three million kids currently struggling with depression, or to the parents desperately trying to prevent their suicidal kids from turning their thoughts into action.

So, unless there are other nationally representative statistics contradicting these findings, we can conclude that something is happening to today’s youth, and that their mental health is deteriorating faster than at any point in the last 14 years.

Next up: what is the cause of this decline in mental health?

Twenge, as noted, argues that smartphones might be one of the major culprits. They were introduced around the same time as the declining trend in mental health started. She also cites a bunch of correlational studies that demonstrate a connection between screen activity and declining mental health. Here’s Twenge:

“There’s not a single exception. All screen activities are linked to less happiness, and all nonscreen activities are linked to more happiness. Eighth-graders who spend 10 or more hours a week on social media are 56 percent more likely to say they’re unhappy than those who devote less time to social media. Admittedly, 10 hours a week is a lot. But those who spend six to nine hours a week on social media are still 47 percent more likely to say they are unhappy than those who use social media even less. The opposite is true of in-person interactions. Those who spend an above-average amount of time with their friends in person are 20 percent less likely to say they’re unhappy than those who hang out for a below-average amount of time.”

“Teens who spend three hours a day or more on electronic devices are 35 percent more likely to have a risk factor for suicide, such as making a suicide plan. (That’s much more than the risk related to, say, watching TV.)”

“Once again, the effect of screen activities is unmistakable: The more time teens spend looking at screens, the more likely they are to report symptoms of depression. Eighth-graders who are heavy users of social media increase their risk of depression by 27 percent, while those who play sports, go to religious services, or even do homework more than the average teen cut their risk significantly.”

So does this prove that smartphones are destroying the mental health of the youth? Not so fast! As Twenge herself notes, these are correlational data, so we cannot tell which causes which. Does spending time online increase depression? Or do depressed people spend more time online? Or is there a third factor causing both of these phenomena? Perhaps being in bad physical shape leads to both depression and spending more time online. With correlational data, we can’t tell which of these explanations is true.
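To see concretely why correlational data cannot settle this, here is a minimal simulation sketch (all numbers invented for illustration): a single hidden factor, say poor physical shape, drives both screen time and depression, and the two end up strongly correlated even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000

# Hidden third factor (hypothetical), e.g. poor physical shape.
confounder = rng.normal(size=n)

# Neither variable causes the other; both are driven by the confounder.
screen_time = 2.0 * confounder + rng.normal(size=n)
depression = 1.5 * confounder + rng.normal(size=n)

# Yet the two correlate strongly (r is roughly 0.74 in this setup).
r = np.corrcoef(screen_time, depression)[0, 1]
print(f"Correlation between screen time and depression: {r:.2f}")
```

Survey correlations of the kind Twenge cites could, in principle, be produced by exactly this sort of hidden factor, which is why the direction of causality cannot be read off the correlations themselves.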

This is the gist of Cavanagh’s critique. She points out the correlational nature of the data. She also suggests that Twenge is cherry-picking data, ignoring studies that link social media use with positive outcomes like resilience. Finally, she notes that various contextual and personality factors probably play a role: some ways of using smartphones might be healthier than others, and some types of people might be less susceptible to the negative effects of social media use.

These are all fair critiques, and as a researcher I completely agree with her that we need research investigating causality and the potential moderating factors.

What Cavanagh is ignoring here is the difference between science and the real world.

As researchers, we have time to wait for future research to make the direction of causality clearer and to identify the contextual, personality and other factors moderating the effects. As parents and policy-makers, we can’t just notice that depression and suicide rates are going up and decide to do nothing for the next ten years while we wait for the research results to come in.

As the parent of a teenager, one doesn’t have the privilege of waiting; one has to act now. Now that the child is a teenager; now that the child is showing signs of depression or suicidal thoughts; now that the child seems to be affected by a mobile phone addiction. The same is true for the policy-makers, school teachers, youth therapists and other professionals working with youth. They are facing depressed teenagers today, not in ten years.

In real life, one has to react to problems in the present moment. And for that, one needs to use the best evidence available at that moment. We know for a fact that depression rates are going up, so something is causing this change. It could be many separate factors (bleak economic prospects, increasing sleep deprivation, too much helicopter parenting) and it could be several factors working together.

As regards these potential factors, Twenge has presented quite a compelling case for why smartphone usage could be one of the major factors behind this trend. Of course, the case would be more compelling if she could offer more than two causal studies in support of it. But as a parent or a professional working with youth, I have to weigh Twenge’s account against how compelling the arguments currently offered for the other potential causes of depression are. And then I have to act.

So I agree with Cavanagh that we should not blindly believe that smartphones are the sole cause of youth depression. We should be careful in our conclusions. But Twenge has made a compelling case that, in our practical endeavors, one key factor we should be aware of and look into more carefully is how much, and in what ways, young people are using their smartphones.

In the best case, Twenge’s article will provoke proponents of competing accounts to write their own stories, after which we can evaluate which of the suggested factors seems the most plausible candidate for causing the increase in youth depression.

In any case, I believe that Twenge’s article has done the nation a service. It has highlighted one potential cause of youth depression. We don’t yet know if it is truly destroying a generation, but we need to prepare for that possibility too. More probably, when used moderately and in the right way, smartphones and social media can have neutral or even beneficial effects on youth development and mental health. But for a significant minority of the youth, they can become an addiction and a source of mental health problems. We don’t yet know how to separate healthy smartphone use from unhealthy use, but Twenge’s article has hopefully demonstrated that it is vitally important to figure that out: as researchers, as practitioners, and as parents.

Twenge used strong rhetoric in the article. But to get this potentially very important matter the attention it deserves, I believe that was what was needed.

So my advice for parents — based on Twenge’s article and what little I know about the matter from other sources — is this:

  1. Be aware that smartphone and social media usage could be an important factor influencing the mental health and lives of young people.
  2. Don’t fall into the trap of believing that smartphone usage is ‘all good’ or ‘all bad’. Kids can find tremendous support in the right social media groups, or they can be cyber-bullied. It is as much about what one does with one’s phone as it is about how much one uses it.
  3. Pay attention to how, and to what extent, your own children are using their smartphones and social media, and how changes in these seem to influence their well-being.
  4. Make sure that the kids have other content in their lives beyond smartphones, and try to build ‘smartphone-free’ periods into their everyday life.
  5. Follow the debate around the issue and remain open to readjusting your opinions and policies as new evidence comes in.

P.S. All this being said, there are also aspects of Twenge’s narrative that I don’t find too convincing. She is right when she speaks about the dramatic increases in suicide rates: “Although the rate increased for both sexes, three times as many 12-to-14-year-old girls killed themselves in 2015 as in 2007, compared with twice as many boys.” However, as noted for example by Antti Summala, the CDC statistics show that the current trend of increasing suicides started already around 2007, and in that sense they don’t support Twenge’s narrative of smartphones driving it. Something else was already pushing suicide rates up before the introduction of smartphones.

Transcending postmodernism: How to March for Science without naively supporting objective truths?

When the March for Science gathered hundreds of thousands of people in more than 600 cities across the world to march for “truth” and against “post-truth politics,” was it just a tribal gathering serving mainly to promote the marchers’ shared identity? Some like the Dallas Cowboys, others like science, and both tribes like to flash their identities publicly.

According to consultant David Ropeik, the “arrogant, smug, condescending” march was essentially a “giant intellectual middle finger at the people who, because of their values and experiences and personalities and tribal identities, hold views about climate change or vaccine safety or GMOs or evolution that conflict with what the evidence clearly says.” In banners proclaiming that ‘science is real’, ‘protesting for reality’, and ‘truth should shape politics’, he saw nostalgia for a “naïve objectivist view of the world” that has long since been abandoned by any serious thinker.

True that. Ever since Immanuel Kant’s Copernican Revolution in Western philosophy, it has been clear that we can never access reality as such. Our perception is always limited and biased, shaped by our culture, upbringing and idiosyncratic experiences, as well as by various biological mechanisms. As Ropeik notes, the brain is not “a machine dedicated to dispassionate thinking” but “principally a survival machine, and as social animals we survive best when our views match those in the tribe(s) with which we most closely identify.”

Given the essential epistemological shortcomings built into the human condition, there simply isn’t any fixed point upon which all truths could be built. Human life is an ongoing stream of experiencing from which we seem to be able to identify certain regularities. But elevating any such regularity into “an objective truth” is, in the best case, just naïve, and in the worst case is used to justify violence against other tribes. Millions of people throughout history have been massacred by perpetrators who believed that the truth was on their side. Science has been used to justify eugenics, racial discrimination, austerity and trickle-down economics, as well as the labeling of homosexuality as a disease.

This is what postmodernism got right. In unveiling how power structures permeate our society and discourses, in revealing the hidden assumptions in our language, and in deconstructing the particularities behind the grand narratives and self-evident truths of modernity, thinkers such as Lyotard, Foucault and Derrida (although the latter two might not have liked the label ‘postmodernist’) have done society a significant service. Ever since Kant and his Copernican Turn, the brightest minds in philosophy (think Hegel, Russell, Wittgenstein) have tried their best to avoid the inconvenient conclusion that there is nothing objective. Their failure stands as testimony to the truth that there are no essential truths. In a way, the postmodernists, Derrida, Foucault and their like, just took Kant’s thinking to the logical conclusion that he was unwilling to accept.

So: if epistemological relativism means that there are no objective truths or “true facts,” and if moral relativism means that there are no objective moral values or principles, then these relativisms are “truths” in as strong a sense as is possible in this disillusioned, postmodern time.

But ever since I first encountered postmodernist thinkers in my undergraduate years, one question has been burning in my mind: What then? What then, when we have acknowledged the essential particularity and culturally inherited nature of our beliefs and values? What then, when we have acknowledged that no tribe can claim access to any objective truths? What then, when we have acknowledged the important role that power structures, language, identities, cognitive biases and other factors play in our thinking and public discourses?

The necessity of decision-making and choosing is impossible to avoid. Even in the postmodern era, we need to make practical decisions both in our everyday life and in our groups, organizations, and societies. How are we to make these decisions? In the political sphere, when we have listened to all possible parties, how are we to evaluate the various arguments given by these parties? How to decide which arguments are good and which bad? How to choose whose arguments to take seriously and whose arguments to ignore? When push comes to shove, we need to make a choice. And in order to do that, we need both some values and some epistemological standards.

Stating that every argument is as good as any other is essentially giving up. If naïve modernism is a belief in objective truths that forgets all the factors making such objectivism impossible, then naïve postmodernism is a belief in subjective truths that completely forgets the necessity of making practical decisions, both individually and in groups and societies. And postmodernism can be as harmful as modernism. By refuting reason and universal values, postmodernism can come to privilege “inconsistency, irrationalism, zealous certainty and tribal authoritarianism,” as Helen Pluckrose recently noted. One symptom of such a return to tribalism and identity politics is the riots on US campuses against speakers who challenge campus orthodoxy. Jonathan Haidt warns us that “when tribal sentiments are activated within an academic community, some members start to believe that their noble collective ends justify almost any means, including the demonization of inconvenient research and researchers, false accusations, character assassination, and sometimes even violence.”

The impotence of seeing everything as yet another perspective, without any tools to evaluate or compare perspectives, is the elephant in the room of postmodernism. When confronted about it, proponents of postmodernism usually get very defensive and start muddying the waters. A typical tactic, to the point of being a cliché, is to remind the inquirer how ‘complex’ and ‘different’ the various postmodernist thinkers are, and to drop the names of some ten books the inquirer should read. Postmodernists are good at taking a conversation backwards, but they are terrible at taking it forwards, towards real-life solutions. The sharpness of their ability to critique and deconstruct hides an inability to construct and offer practical advice.

For a long time, I was on very thin ice with my ‘what then’ question. Slowly, and mainly through the writings of John Dewey and other pragmatists, an answer started to emerge. Dewey, like other pragmatists, takes fallibilism as his starting point. As Peirce, one of the founding fathers of pragmatism, defines it, “we cannot in any way reach perfect certitude nor exactitude. We never can be absolutely sure of anything.” Instead, our knowledge “swims, as it were, in a continuum of uncertainty and of indeterminacy.” However, unlike the postmodernists, pragmatists don’t wallow in this post-objectivism. Instead, they acknowledge the active nature of the human condition. As human beings, we can never escape our embeddedness within the world of experiencing into which we are thrown as actors. “Action … is the way in which human beings exist in the world,” as Hans Joas puts it.

Given the active nature of human living, the ultimate function of our beliefs and convictions is not to neutrally depict the world but to offer us guidance in our living. Our beliefs are maps we build in order to fruitfully navigate the constantly unfolding, experientially encountered world we seem to share with others. Increased knowledge is not about getting the correct “representation of reality in cognition” but is an expression of an “increase of the power to act in relation to an environment,” to quote Joas again. Human history has been a journey towards designing better maps for navigating the environment and towards inventing better ways to design such maps. On the latter front, scientific methods are humankind’s foremost achievement. All scientific methods have their shortcomings, and all scientific ‘knowledge’ comes with reservations, yet science is still our best answer for overcoming the particularities, biases, and self-servingness of individual viewpoints.

This gives us a standard to use when evaluating the arguments of various people and tribes: following the guidance of some arguments simply has a higher probability of advancing the human goods than following others. That’s why we should follow the former, while giving less weight to arguments that have a high probability of harming the human goods.

There is progress in bridge-building: modern bridges are simply able to span distances that were impossible for medieval bridges. In the same sense, there is progress in human societies: some societal arrangements are better able to advance the human goods of society members.

This of course brings us to the next question: what are the human goods? A pragmatist doesn’t take this for granted either. Instead, just as constant inquiry can lead us to a more reliable understanding of the world around us, the same inquiry can lead us to a more reliable understanding of the human goods. In other words, the human goods are something that can only be identified through inquiry. And for such inquiry we need to study both human psychology and our biological nature, without forgetting to overcome the biases inherent in conducting this inquiry within a single culture. Thus anthropological records, the study of the scriptures of various cultures, and a more general dialogue between cultures are all essential for building an understanding of the human goods. Accordingly, we can have more or less reliable accounts of the human goods and of the values we should strive for as individuals and as societies. And although this knowledge is always fallible and incomplete as well, the necessity of making practical decisions forces us to rely on the best currently available account of the human goods when making those decisions.

So this is the mission of science, then: to build more reliable maps of the world to better guide us towards the human goods, while at the same time aiming at a more reliable understanding of what those human goods are in the first place.

So let’s return to my ‘what then’ question: how are we to make personal and political decisions in the postmodern ‘post-truth’ era? The answer is not to ‘go back’ to modernity and a naïve belief in the power of science or ‘rationality’ to uncover objective truths. But neither is the answer a postmodernist retreat to tribal identities where every argument and perspective is as good as any other. Instead, while acknowledging the inherent shortcomings of all forms of inquiry, we should nevertheless acknowledge the ability of some forms of inquiry to give us more reliable tools for navigating towards having more of the human goods realized in our lives. And accordingly, in political decision-making, we should privilege the arguments that are the fruits of such inquiries.

So next time a ‘march for science’ is organized, I don’t want to march for ‘truth’ but for something like this:

“In support of using the best available evidence, which is usually attained through scientific methods, while recognizing the shortcomings of any warrant-building method and being open to revising one’s convictions in the light of new evidence, all in the service of advancing the human goods, while also constantly revising our understanding of those goods.”

Not easy to fit that into a banner, though.

For more on pragmatism and how it transcends postmodernism, here are a few more academic articles on the topic:

Martela, F. (2015). Pragmatism as an attitude. In U. Zackariasson (Ed.), Nordic Studies in Pragmatism 3: Action, Belief and Inquiry – Pragmatist Perspectives on Science, Society and Religion (pp. 187–207). Helsinki: Nordic Pragmatism Network.

Martela, F. (2015). Fallible inquiry with ethical ends-in-view: A pragmatist philosophy of science for organizational research. Organization Studies, 36(4), 537–563.

Martela, F. (2017). Moral Philosophers as Ethical Engineers: Limits of Moral Philosophy and a Pragmatist Alternative. Metaphilosophy, 48(1–2), 58–78.

Revealed By Science: The 4 Elements Of The Holy Grail That Jay-Z And Justin Timberlake Are Searching For

On the radio right now: “And baby, it’s amazing I’m in this maze with you. I just can’t crack your code.” Don’t worry, Mr. Timberlake, I am here to crack the code for you. You just need to do what MC Hammer did: become a bit more geeky!

You curse my name
In spite to put me to shame
But I still don’t know why
Why I love it so much?

We’ve been told by Mr. Jay (Z) that in the lyrics of the song Holy Grail, Mr. Timber (lake) is talking about his love/hate affair with fame. In the same song, Mr. Jay himself complains that he is “caught up in all these lights and cameras” and is ready to “f**k the fame.”

Both of them seem to be confused: How did they end up in this horrific maze of fame? And they still don’t know why they love it so much, even when it is sometimes so painful. Fortunately, the right answers are out there. They have just been hiding in the laboratories of mischievous (and less famous) scientists. So what can they tell Mr. Jay and Mr. Timber about sustainable happiness?

Consider this: a few psychologists I know from the University of Rochester (341 miles from Brooklyn) asked students graduating from college what they wanted to get out of life. Some of the students had cozy dreams about satisfying close relationships, personal growth and serving the community. Others were all bling-bling and wanted the infamous trio of money, fame, and image. And lo: one year later, it turned out that both groups had taken some successful steps towards their goals: the inner-growth people had experienced inner growth, while the fame people were a bit more famous. This seems to prove the theorem set forth by professor Eminem of 8 Mile Road University in his highly cited paper Be Careful What You Wish For:

So be careful what you wish for, cause you just might get it
And if you get it then you just might not know what to do with
Cause it might just come back on you ten-fold.

As professor Eminem argues, we should be very careful about our dreams. The truth is out there: not all goals are created equal. The research shows that achieving some goals produces well-being, while achieving other goals produces, well, ill-being.

What are then the sources of sustainable happiness? What are the goals that produce true happiness?

There are four of them:
1) Having a sense of freedom and autonomy in one’s life
2) Feeling competent at what one is doing
3) Having satisfying close relationships
4) Being able to contribute to the society

The key problem with too much fame is that while needs 2 and 4 might be satisfied, too much fame can completely undermine needs 1 and 3.

Let’s take Mr. Jay as an example:

1) Feeling Free: He can buy an island for his girlfriend as a birthday present, but at the same time he “can’t even take my daughter for a walk, see ’em by the corner store.” He has certain freedoms others can only dream about, but at the same time he has been deprived of many freedoms that are self-evident for ordinary people: being able to visit a corner store, or to walk around freely on the streets of Brooklyn, or in any other neighborhood on this planet.

2) Feeling Competent: Hats off! Mr. Jay is ambitious, talented, and disciplined. He is the “post-millennial embodiment of the American Dream”, who won the game of making money out of hip hop. As regards competence, he is way up there!

3) Feeling Related: Having a sympathetic wife and a lovely daughter is great. But given that both he and his wife are quite dedicated to their careers, they might not have as much quality time together as your average Joneses. In addition, Mr. Jay complains that he is surrounded by pigeons. I am not an ornithologist, but Mr. Jay seems to have some knowledge about the behavior of pigeons: “But soon as all the money blows, all the pigeons take flight.” Finding friends when everybody around you is a pigeon? Not cool.

4) Contributing: A bit mixed, really. Mr. Jay gives to charities, serves as a role model, and organizes cool things like the Made in America festival. Harry Belafonte (the Banana Boat Song guy), however, criticizes him for turning his back on social responsibility. He thinks that Mr. Jay could do so much more with his high-profile status and a net worth of $450 million. In that sense, Mr. Belafonte feels that Bruce Springsteen is more black than Mr. Jay.

The point being: fame itself doesn’t make anyone happy or unhappy. As regards happiness, fame helps only to the extent that it helps fulfill the four needs of sustainable happiness. And while having no money hurts, having too much money and fame can hurt too. It’s of course nice that if one “just want a Picasso in my casa, no, my castle,” one can buy it. But it is not nice when living a normal life becomes impossible:

“I feel like I’m cornered off enough is enough, I’m calling this off
Who the fuck I’m kidding though, I’m getting high, sitting low
Sliding by in that big body, curtains all in my window
This fame hurt but this chain works.”

Ok, now we know why Mr. Jay and Mr. Timber both love and hate fame at the same time. But where to go from here? What should they do to break loose and find that holy grail of sustainable happiness?

Mr. Jay asks us to look at “what that s**t did to Hammer”. So let’s look at what happened to MC Hammer!

For those born in the ’90s: MC Hammer was the guy who twenty years ago instructed us not to “Touch This,” leaving us wondering what exactly it is we can’t touch (and whether it is something we would like to touch in the first place). He was huge in 1990! And sure enough, Mr. Hammer went through the usual cycle: huge fame, huge money, huge mansion in Fremont, California. And then the backlash: bankruptcy, losing the mansion, falling out of fashion.

But what does Mr. Hammer do now?

It seems that he is living the good life with his wife and six kids while putting in some occasional missionary work for the local church. As for work, he is investing in and consulting for tech companies, calling himself a “super-geek.” And he is right: he is definitely less cool when talking about the user interfaces of search engines at the Web 2.0 Summit than when he rapped about being 2 Legit 2 Quit wearing Ray-Bans.

In a nutshell, Mr. Hammer in 2013 is less cool, less famous, but more happy.

Let’s break his life into the four building blocks of sustainable happiness to see how he has found his own holy grail of sustainable happiness:

1) Feeling free: Less fame means more freedom to walk the streets and enjoy the benefits of normal life that superstars are deprived of. Still, he is so well off that he can do most of the things he likes, like traveling or having a nice house.

2) Feeling competent: He can still do it if he wants, for example mashing it up with PSY at the American Music Awards. And he is getting more competent in the geeky stuff as well.

3) Feeling related: Having been together with his wife for over 25 years and having six children is certainly a good start for having satisfying close relationships in one’s life. In his work life, too, he seems to be surrounded by fellow geeks he loves to hang out with.

4) Contributing: His work at the church as well as the way he helps tech startups both seem to give him a strong sense of being able to contribute towards the society and other people.

That’s sustainable happiness, isn’t it! To get there, Mr. Hammer obviously needed to “Stop” before the new “Hammertime” started. He did that, found what is truly valuable in life, and is now living a more peaceful, less famous, but much happier life.

So don’t worry Mr. Jay and Mr. Timber, there is also hope for you. Just become a geek – less cool, more happy!