This story was originally published on HackerNoon at:
https://hackernoon.com/empathy-in-ai-evaluating-large-language-models-for-emotional-understanding.
A discussion of how LLM-based AIs vary dramatically in their ability both to express empathy and to identify it.
This story was written by @anywhichway.
This post is a follow-up to the HackerNoon article [Can Machines Really Understand Your Feelings? Evaluating Large Language Models for Empathy]. In the previous article, I had two major LLMs respond to a scenario designed to elicit empathy in a human, under varying system prompt/training conditions. In this article, I reveal which LLMs behaved in which manner, offer my own opinion, and include some observations.