
Category: Robot Ethics

Thoughts on Homo Deus

Today I prompted ChatGPT as follows:

Pretend you are (1) a lawmaker, (2) a philosopher, (3) an AI engineer and (4) a book reviewer. Please provide your opinion on the book Homo Deus by Yuval Noah Harari in the form and style of a book review for your peers in a peer-reviewed journal.

It generated:

1) As a Lawmaker

Title: “From the Legislative Lens: A Review of Homo Deus by Yuval Noah Harari”

In “Homo Deus: A Brief History of Tomorrow,” Yuval Noah Harari paints a future in which human beings ascend to godhood, aided by our technological prowess. His book is filled with thought-provoking ideas that challenge us lawmakers to reconsider our fundamental perspectives on human rights, data privacy, and the balance of power.

Harari’s proposition of Dataism as the potential future religion is an exciting yet disconcerting idea, shedding light on the political complexities and ethical dilemmas that are starting to arise in our data-driven society. The concept prompts us to reflect on how we must adapt legislation to handle unprecedented issues, like data ownership, that may arise from such a transition.

His exploration of technology-driven social stratification is also crucial to our work. Harari’s warning of a ‘useless class,’ or those unable to keep pace with rapid technological advancement, raises serious concerns about how we can promote social equity and justice.

Overall, “Homo Deus” is a must-read for lawmakers navigating the unprecedented legal and ethical challenges of an increasingly data-driven world.

2) As a Philosopher

Title: “Humanity’s New Gods: A Philosophical Critique of Homo Deus”

In “Homo Deus: A Brief History of Tomorrow,” Yuval Noah Harari explores the implications of a post-human era. His work is dense with profound and intellectually challenging ideas, questioning humanity’s place in a future world defined by our own creations.

His hypotheses about the ‘human gods’ resonate with Nietzsche’s idea of the Übermensch, yet with a technological twist. Harari’s notions of Dataism extend beyond the scope of science into the realm of metaphysics, urging us to question the nature of consciousness, free will, and personal identity in a data-driven world.

However, Harari’s assumptions about the human quest for immortality, happiness, and divinity could have been analyzed further through different philosophical perspectives, such as existentialism or Buddhist philosophy, which embrace mortality, suffering, and impermanence.

“Homo Deus” serves as an urgent call to philosophers to engage with the philosophical implications of rapid technological and scientific advances, to ensure humanity’s wisdom evolves alongside its knowledge.

3) As an AI Engineer

Title: “Molding Our Tomorrow: An AI Engineer’s Examination of Homo Deus”

Yuval Noah Harari’s “Homo Deus: A Brief History of Tomorrow” offers a captivating yet alarming vision of a future where humans are no longer the most intelligent entities. As an AI engineer, I find Harari’s assumptions regarding the potential of artificial intelligence both exciting and contentious.

His prediction of human obsolescence due to advancements in AI and biotechnology is unsettling, yet valid. It underscores the importance of developing ethical AI systems that augment rather than replace human abilities, and ensuring that AI benefits are accessible to all.

Harari’s Dataism hypothesis aligns well with the increasing influence of big data and AI in our society. However, it also underscores the importance of privacy, transparency, and explainability in AI systems, all critical factors for establishing public trust in our work.

While “Homo Deus” does not delve deep into the intricacies of AI development, it provides a valuable outsider’s perspective that reminds us, AI engineers, of the larger social and ethical implications of our work.

4) As a Book Reviewer

Title: “Foreseeing the Future: A Review of Homo Deus by Yuval Noah Harari”

Yuval Noah Harari’s “Homo Deus: A Brief History of Tomorrow” masterfully intertwines history, science, and philosophy to offer a speculative look at the future of humanity. Harari’s writing style is engaging, accessible, and sprinkled with humor, which makes for a pleasant read despite the book’s weighty subject matter.

Harari’s exploration of our potential future – particularly the rise of Dataism and the advent of ‘godlike’ humans – is a narrative full of twists and thought-provoking insights. The idea of a ‘useless class’ emerging due to technological advancements is particularly jarring and underlines the urgency for societal adaptations.

While Harari’s predictions can at times feel overly deterministic, they undeniably provide valuable fodder for discussion and contemplation. His visionary insights demand our attention, urging us to critically assess the direction in which our society is heading.

In “Homo Deus,” Harari invites us not only to foresee the future but to actively participate in shaping it, making the book an indispensable read for anyone interested in the human saga and our role in its unfolding.

Follow-Up Questions and Answers

I went further, see here, and once again I was amazed at how valuable ChatGPT is for me when I am trying to understand something. I can talk to it, ask questions, let it generate ideas and concepts based on suggestions, and all the while it never once needs a new drink.

Item on Zora and other robots in healthcare on CampusTV Utrecht

Following a large trial with the Zora robot (essentially a NAO with some extra programming by a small Belgian company), CampusTV of Hogeschool Utrecht ran an item on robots in healthcare.

I was invited as an expert to comment on robots in healthcare.



See Campustalk 07 Winter 2015-2016 https://youtu.be/qd8txYpq9GM (the action starts at 3:30). By the way, the story of the collecting expert (at the end) is also fun.


Robot Man: Noel Sharkey

I read a news item on the Dutch news site nu.nl (here) about the ethics of letting robots take care of people, especially kids and elderly people. The news item was based on this article in ScienceDaily. Basically, it is a warning by ‘Top robotics expert Professor Noel Sharkey’. I looked him up and he appears to be a man worth getting in contact with. He has, for example, called for a code of conduct for the use of robots in warfare (here).

Noel Sharkey

According to his profile at the Guardian (for which he writes):

Noel Sharkey is a writer, broadcaster, and academic. He is professor of AI and Robotics and professor of public engagement at the University of Sheffield and currently holds a senior media fellowship from the Engineering and Physical Science Research Council. Currently his main interest is in ethical issues surrounding the application of emerging technologies.

I wholeheartedly agree with his views so far. He has a good grip on the current capabilities of machine vision and AI, neither of which I would trust when it comes to making important decisions about human life. At least when it comes to applications of speech and gesture recognition, with which I have had a lot of experience, they simply make too many errors, they make unpredictable errors, and they have lousy error recovery and error handling strategies. So far, I have only seen evidence that these observations generalize to just about any application of machine vision, at least where the important stuff is concerned.

It reminds me of an anecdote Arend Harteveld (may he rest in peace, see here) once told me: some engineers once built a neural network to automatically spot tanks in pictures of various environments. As usual with such NNs, it was trained on a set of pictures containing negative examples (no tank in the picture) and positive examples (a tank in the picture). After training, the NN was tested on a separate set of pictures to see how it would perform. And by golly, it did a perfect job. Even if nothing but the barrel of the tank’s gun stuck out of the bushes, it would spot it. And if there wasn’t a tank in the picture, the NN never made a mistake. I bet the generals were enthusiastic. A while later it occurred to someone else that there was a pattern to the pictures: the pictures with the tanks had all been shot on a fairly sunny day (both in the training and testing sets), and the pictures without tanks had been taken on a fairly dreary day. The NN was not spotting tanks, it was just looking at the sky…
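That failure is easy to reproduce. Below is a minimal, hypothetical sketch in Python (using numpy and scikit-learn; the image size, brightness values, and every other detail are invented and have nothing to do with the original experiment): the label is made to correlate with overall brightness, so a simple classifier scores almost perfectly on a test set that shares the same bias and then falls apart once the lighting no longer predicts the label.

```python
# Toy demonstration of the 'tank detector' failure: the training data confounds
# the label with overall image brightness (sunny vs. dreary), so the classifier
# learns the sky instead of the tank. All numbers here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, tank, sunny):
    """Return n fake 8x8 'images' as flat vectors.

    The 'tank' is a small bright patch; 'sunny' raises the background level.
    """
    base = 0.6 if sunny else 0.2
    imgs = rng.normal(base, 0.05, size=(n, 8, 8))
    if tank:
        imgs[:, 3:5, 3:5] += 0.3  # the object we actually care about
    return imgs.reshape(n, -1)

# Biased training and test sets: every tank picture is sunny, every
# no-tank picture is dreary (the hidden pattern in the anecdote).
X_train = np.vstack([make_images(200, True, True), make_images(200, False, False)])
y_train = np.array([1] * 200 + [0] * 200)
X_test = np.vstack([make_images(100, True, True), make_images(100, False, False)])
y_test = np.array([1] * 100 + [0] * 100)

# Unbiased set: lighting no longer predicts the label.
X_fair = np.vstack([make_images(100, True, False), make_images(100, False, True)])
y_fair = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on biased test set:  ", clf.score(X_test, y_test))   # close to 1.0
print("accuracy on unbiased test set:", clf.score(X_fair, y_fair))   # far below that
```

The point is exactly the one in the anecdote: a perfect score on held-out data proves nothing if the held-out data contains the same hidden correlation as the training data.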

Links: University of Sheffield, Wikipedia
