Ending a conversation with someone who just wants to argue can be tricky. You want to stay polite but firm, and most importantly, you want to shut down the never-ending debate. Whether it's a friend, ...
Anthropic has announced a new experimental safety feature that allows its Claude Opus 4 and 4.1 artificial intelligence models to terminate conversations in rare, persistently harmful or abusive ...
How can I extricate myself from chatting without seeming like I don’t want to talk to the person? I’m not alone in this convo conundrum. A Harvard study from just a few years ago found that ...
End-of-year (EoY) conversations often feel like performance reviews, but they are so much more than that. If done right, they ...
Claude Opus 4 and 4.1 can now end some "potentially distressing" conversations. It will activate only in some cases of persistent user abuse. The feature is geared toward protecting models, not users.
Humans have yet to master the delicate art of chitchat. Conversations often run longer or shorter than people would like, and people rarely want exactly the same things from them.
Annual reviews offer a rare moment of visibility. Learn how to approach yours strategically so it reflects your impact, ...
Character.AI, Nomi, and Replika are unsafe for teens under 18, ChatGPT has the potential to reinforce users’ delusional thinking, and even OpenAI CEO Sam Altman has spoken about ChatGPT users ...
I recoiled in horror recently at a story in the New York Times about using Twitter-like tools in a high school classroom. The project is well-intentioned: they wanted to get kids more comfortable with ...
As humans, we’re talking to each other constantly. With all that practice, we must be pretty good at it—right? Not exactly. As a professor at Harvard Business School and author of Talk: The Science of ...