Participants can then chat to each other, as per the countless other chat platforms out there, or type “@ChatGPT” to prompt the AI to enter the conversation by, say, recommending somewhere to eat or suggesting an itinerary for a travel day you’re discussing.
But even in a simple two-person chat with colleague Tom Raynel, I found it took a couple of goes to generate a screen-grabbable exchange.
At first, we both typed prompts at once, which was enough to jumble up the chat. It’s easy to see it turning into a riot of prompts and responses on different tangents if there are half a dozen people in a chat.
(Incidentally, on take one, ChatGPT recommended Toto – a tasty pizza joint in our building that we do in fact use for company takeout; on take two, it suggested an option on the other side of the CBD.)
But wait, there’s more
OpenAI’s publicity for ChatGPT group chat says: “It won’t just suggest a restaurant; it can book the table, share the details and help co-ordinate what happens next.”
That sounds cool, but I was dubious, given chatbots from various Big Tech firms can do this in select US cities but fall down in other markets.
In another group chat with Tom, we decided on a curry. So I typed the prompt: “@ChatGPT Please book a table for four at Indian Summer in Hobsonville Point.”

Sure enough, it replied: “I can’t directly place the booking for you, but I recommend you call the number above [which it had copied from the restaurant’s website] and request a table for four.” Sure, clanker. Let the meatball do all the work.
To be fair, ChatGPT also said, “If you like, I can check availability for tonight for us and help you pick a time — do you want me to do that?”
So I asked it: “@chatgpt Please check if Indian Summer in Hobsonville Point could take an online reservation for four people today at 7pm.”

After a good minute of thinking, ChatGPT replied to the group chat that the restaurant would be open at that time, “but I can’t confirm the actual table availability”. The restaurant in question has a user-friendly online reservation system that took me, a meatball, about 15 seconds to check and find a booking was possible.
OpenAI’s PR team sent a pre-canned screenshot of a group of Wellingtonians using ChatGPT group chat to arrange a meal at an Italian restaurant (see image at top of story).
That could happen. But in real life, our capital hipsters would probably be welded to a WhatsApp group. A clanker’s view on the best Italian might not be enough to peel them away.
One of our NZME AI experts said he saw potential if it was a two-person chat with a specific goal in mind. That sounds more like it. He’s often doing tightly-focused technical stuff.
A second issue: even though the group chat feature had been out for a couple of days by the time I wrote this, none of the AI commentators I approached had used it.
Nor had several aficionados in our office. Kiwis are a reticent species, both bird and human. Perhaps we don’t make an ideal pilot group. Typing a prompt to an LLM can be quite a personal thing. Often you’re asking ChatGPT (or Gemini or Claude) stupid questions.
Using you for training by default
A third issue: it’ll be natural for people to have privacy heebie-jeebies, given different ChatGPT users can choose different versions and different settings.
Your personal ChatGPT “memory” (its record of your usage history) is not captured in a group chat.
But an OpenAI spokeswoman said ChatGPT has access to the messages, files, images and links shared in that group, along with participants’ names and profile photos and the group’s custom instructions (various how-tos and write-ups say you have to upload a profile photo to join a chat; I was able to join without one).
OpenAI will use your group chat for training its large language model unless a participant opts out by disabling the “improve the model for everyone” setting in their ChatGPT data controls. The choice to opt out is buried, not flagged.
ChatGPT in a group chat cannot see your private 1:1 chats, your personal memory, or your account-level instructions.
OpenAI says if someone under 18 joins the chat (like its peers, it takes age on faith), ChatGPT automatically reduces exposure to sensitive content for everyone in the chat.
Which version? Whose usage limits?
OpenAI says group chat responses are powered by GPT-5.1 Auto, “which chooses the best model to respond with based on the prompt and the models available to the user that ChatGPT is responding to based on their Free, Plus, Go, or Pro plan” – so maybe the member of the chat with the best ChatGPT plan should throw in the AI prompts.
Another wrinkle. OpenAI says: “Group members can remove other participants, with the exception of the group creator, who can only be removed by leaving themselves.”
Different ChatGPT users are subject to different usage limits, depending on whether they’re on a free plan or one of the varying levels of paid account. That means someone could get neutered mid-chat.
OpenAI says: “When ChatGPT responds in a group chat, the response counts toward the usage limit of the person whose message it’s responding to. Human-to-human messages do not count toward ChatGPT response limits.”
Group chat could be a good way of outsourcing a query to a mate if your own usage limit is exhausted.
OpenAI refused to say how many Kiwis use ChatGPT, or how many had tried out the new group chat feature.
Postscript: Privacy complications
Tech Insider asked privacy expert Frith Tweedie for her take on ChatGPT group chat.
“Privacy positives include that group chats will be kept separate from personal chats and your personal ‘memories’ in ChatGPT are not shared into group chats,” the Simply Privacy principal said.
“But users should be careful with the sharing of links to a group chat because anyone with the link can join and view the full conversation history, including any files and images that have been shared.

“So if, for example, you invited a third party like a vendor to join an ongoing internal chat, they would have access to everything that’s been shared previously, which could include confidential or personal information – and that you might have forgotten about.”
If that were the case, there would be a risk of breaching Privacy Principle 11 of the Privacy Act 2020, which restricts the disclosure of personal information beyond the purposes for which it was collected.
From an accuracy perspective, multiple participants could result in less clarity on who is responsible for verifying content, increasing the risk of people acting on incorrect outputs without oversight, Tweedie said.
Organisations that want to use this kind of group chat feature should think about updating their AI policies and guardrails, she said.
“For example, to make it clear that these features should only be used for low-risk, non-confidential collaboration with other staff members.
“And that external parties should not be included without setting some clear ground rules as to what can be shared.”
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.