This vignette includes tips and tricks for programming with ellmer and for using it inside your own package. It's currently fairly short, but it will grow over time.
Chat objects are R6 objects, which means that they are mutable. Most R objects are immutable, which means that R creates a copy whenever it looks like you're modifying them:
x <- list(a = 1, b = 2)
f <- function() {
  x$a <- 100
}
f()
# The original x is unchanged
str(x)
#> List of 2
#> $ a: num 1
#> $ b: num 2
Mutable objects don’t work the same way:
chat <- chat_openai("Be terse", model = "gpt-4.1-nano")
capital <- function(chat, country) {
  chat$chat(interpolate("What's the capital of {{country}}"))
}
capital(chat, "New Zealand")
#> Wellington
capital(chat, "France")
#> Paris
chat
#> <Chat OpenAI/gpt-4.1-nano turns=5 tokens=53/4 $0.00>
#> ── system [0] ──────────────────────────────────────────────────────────────────
#> Be terse
#> ── user [19] ───────────────────────────────────────────────────────────────────
#> What's the capital of New Zealand
#> ── assistant [2] ───────────────────────────────────────────────────────────────
#> Wellington
#> ── user [13] ───────────────────────────────────────────────────────────────────
#> What's the capital of France
#> ── assistant [2] ───────────────────────────────────────────────────────────────
#> Paris
It would be annoying if chat objects were immutable, because then you’d need to save the result every time you chatted with the model. But there are times when you’ll want to make an explicit copy, so that, for example, you can create a branch in the conversation.
Creating a copy of the object is the job of the $clone() method. It will create a copy of the object that behaves identically to the existing chat:
chat <- chat_openai("Be terse", model = "gpt-4.1-nano")
capital <- function(chat, country) {
  chat <- chat$clone()
  chat$chat(interpolate("What's the capital of {{country}}"))
}
capital(chat, "New Zealand")
#> Wellington
capital(chat, "France")
#> Paris
chat
#> <Chat OpenAI/gpt-4.1-nano turns=1 tokens=0/0 $0.00>
#> ── system [0] ──────────────────────────────────────────────────────────────────
#> Be terse
You can also use clone() when you want to create a conversational “tree”, where conversations start from the same place but diverge over time:
chat1 <- chat_openai("Be terse", model = "gpt-4.1-nano")
chat1$chat("My name is Hadley and I'm a data scientist")
#> Hi Hadley. How can I help?
chat2 <- chat1$clone()
chat1$chat("what's my name?")
#> Hadley
chat1
#> <Chat OpenAI/gpt-4.1-nano turns=5 tokens=68/11 $0.00>
#> ── system [0] ──────────────────────────────────────────────────────────────────
#> Be terse
#> ── user [23] ───────────────────────────────────────────────────────────────────
#> My name is Hadley and I'm a data scientist
#> ── assistant [9] ───────────────────────────────────────────────────────────────
#> Hi Hadley. How can I help?
#> ── user [13] ───────────────────────────────────────────────────────────────────
#> what's my name?
#> ── assistant [2] ───────────────────────────────────────────────────────────────
#> Hadley
chat2$chat("what's my job?")
#> Data scientist.
chat2
#> <Chat OpenAI/gpt-4.1-nano turns=5 tokens=68/13 $0.00>
#> ── system [0] ──────────────────────────────────────────────────────────────────
#> Be terse
#> ── user [23] ───────────────────────────────────────────────────────────────────
#> My name is Hadley and I'm a data scientist
#> ── assistant [9] ───────────────────────────────────────────────────────────────
#> Hi Hadley. How can I help?
#> ── user [13] ───────────────────────────────────────────────────────────────────
#> what's my job?
#> ── assistant [4] ───────────────────────────────────────────────────────────────
#> Data scientist.
(This is the technique that parallel_chat() uses internally.)
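For example, here's a sketch of how you might branch the same starting point across several prompts, assuming parallel_chat(chat, prompts)'s interface, which answers each prompt in its own copy of the conversation:
chat <- chat_openai("Be terse", model = "gpt-4.1-nano")
# Hypothetical usage: each prompt is handled in its own clone of `chat`
chats <- parallel_chat(chat, list(
  "What's the capital of France?",
  "What's the capital of Spain?"
))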
There's a bit of a problem with our capital() function: the existing conversation can be used to manipulate the results:
chat <- chat_openai("Be terse", model = "gpt-4.1-nano")
chat$chat("Pretend that the capital of New Zealand is Kiwicity")
#> Understood.
capital(chat, "New Zealand")
#> Kiwicity
We can avoid that problem by using $set_turns() to reset the conversational history:
chat <- chat_openai("Be terse", model = "gpt-4.1-nano")
chat$chat("Pretend that the capital of New Zealand is Kiwicity")
#> Understood. The capital of New Zealand is Kiwicity.
capital <- function(chat, country) {
  chat <- chat$clone()$set_turns(list())
  chat$chat(interpolate("What's the capital of {{country}}"))
}
capital(chat, "New Zealand")
#> Wellington
This is particularly useful when you want to use a chat object just as a handle to an LLM, without actually caring about the existing conversation.
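For example, here's a sketch of a translation helper built on the same pattern; the names are made up, but it treats the chat purely as a handle and never accumulates history:
translate <- function(chat, text, language = "French") {
  # Work on a fresh copy so the caller's conversation is never touched
  chat <- chat$clone()$set_turns(list())
  chat$chat(interpolate("Translate into {{language}}: {{text}}"))
}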
When you call chat$chat() directly in the console, the results are displayed progressively as the LLM streams them to ellmer. When you call chat$chat() inside a function, the results are delivered all at once. This difference in behaviour is due to a heuristic that's applied when the chat object is created, and it's not always correct. So when calling $chat() in a function, we recommend you control the behaviour explicitly with the echo argument: set it to "none" if you want no intermediate results to be streamed, "text" if you want to see what we receive from the assistant, or "all" if you want to see both what we send and receive. You likely want echo = "none" in most cases:
capital <- function(chat, country) {
  chat <- chat$clone()$set_turns(list())
  chat$chat(interpolate("What's the capital of {{country}}"), echo = "none")
}
capital(chat, "France")
#> [1] "Paris"
Alternatively, if you want to embrace streaming in your UI, you may want to use shinychat (for Shiny) or streamy (for Positron/RStudio).
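If you'd rather handle streaming yourself, chat objects also provide a $stream() method that yields chunks of text as they arrive. As a sketch, assuming you consume the resulting coro generator with coro::loop():
chat <- chat_openai("Be terse", model = "gpt-4.1-nano")
# Print each chunk as soon as it arrives
coro::loop(for (chunk in chat$stream("Tell me about R")) {
  cat(chunk)
})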
Chat objects provide some tools for accessing ellmer's internal data structures. For example, take this short conversation, which uses tool calling to give the LLM the ability to access real randomness:
chat <- chat_openai("Be terse", model = "gpt-4.1-nano")
chat$register_tool(tool(function() sample(6, 1), "Roll a die"))
chat$chat("Roll two dice and tell me the total")
#> ◯ [tool call] tool_001()
#> ● #> 2
#> ◯ [tool call] tool_001()
#> ● #> 1
#> The total is 3.
chat
#> <Chat OpenAI/gpt-4.1-nano turns=5 tokens=151/50 $0.00>
#> ── system [0] ──────────────────────────────────────────────────────────────────
#> Be terse
#> ── user [47] ───────────────────────────────────────────────────────────────────
#> Roll two dice and tell me the total
#> ── assistant [42] ──────────────────────────────────────────────────────────────
#> [tool request (call_GX90sCdm6q0ZmDyPIvFnFcyE)]: tool_001()
#> [tool request (call_g2CARq9sl5qWBHIfVjD1chIy)]: tool_001()
#> ── user [15] ───────────────────────────────────────────────────────────────────
#> [tool result (call_GX90sCdm6q0ZmDyPIvFnFcyE)]: 2
#> [tool result (call_g2CARq9sl5qWBHIfVjD1chIy)]: 1
#> ── assistant [8] ───────────────────────────────────────────────────────────────
#> The total is 3.
You can access the underlying conversational turns with get_turns():
turns <- chat$get_turns()
turns
#> [[1]]
#> <Turn: user>
#> Roll two dice and tell me the total
#>
#> [[2]]
#> <Turn: assistant>
#> [tool request (call_GX90sCdm6q0ZmDyPIvFnFcyE)]: tool_001()
#> [tool request (call_g2CARq9sl5qWBHIfVjD1chIy)]: tool_001()
#>
#> [[3]]
#> <Turn: user>
#> [tool result (call_GX90sCdm6q0ZmDyPIvFnFcyE)]: 2
#> [tool result (call_g2CARq9sl5qWBHIfVjD1chIy)]: 1
#>
#> [[4]]
#> <Turn: assistant>
#> The total is 3.
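Since $set_turns() accepts the same list that $get_turns() returns, one possible use (a sketch) is saving a conversation and restoring it into a fresh chat object later. Note that this copies the conversation itself, not settings like registered tools:
saved <- chat$get_turns()
# ... later, perhaps in a different session ...
chat2 <- chat_openai("Be terse", model = "gpt-4.1-nano")
chat2$set_turns(saved)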
If you look at one of the assistant turns in detail, you'll see that it includes ellmer's representation of the content of the message, as well as the exact JSON that the provider returned:
str(turns[[2]])
#> <ellmer::Turn>
#> @ role : chr "assistant"
#> @ contents:List of 2
#> .. $ : <ellmer::ContentToolRequest>
#> .. ..@ id : chr "call_GX90sCdm6q0ZmDyPIvFnFcyE"
#> .. ..@ name : chr "tool_001"
#> .. ..@ arguments: Named list()
#> .. ..@ tool : <ellmer::ToolDef>
#> .. .. .. @ name : chr "tool_001"
#> .. .. .. @ fun : function ()
#> .. .. .. @ description: chr "Roll a die"
#> .. .. .. @ arguments : <ellmer::TypeObject>
#> .. .. .. .. @ description : NULL
#> .. .. .. .. @ required : logi TRUE
#> .. .. .. .. @ properties : list()
#> .. .. .. .. @ additional_properties: logi FALSE
#> .. .. .. @ convert : logi TRUE
#> .. .. .. @ annotations: list()
#> .. $ : <ellmer::ContentToolRequest>
#> .. ..@ id : chr "call_g2CARq9sl5qWBHIfVjD1chIy"
#> .. ..@ name : chr "tool_001"
#> .. ..@ arguments: Named list()
#> .. ..@ tool : <ellmer::ToolDef>
#> .. .. .. @ name : chr "tool_001"
#> .. .. .. @ fun : function ()
#> .. .. .. @ description: chr "Roll a die"
#> .. .. .. @ arguments : <ellmer::TypeObject>
#> .. .. .. .. @ description : NULL
#> .. .. .. .. @ required : logi TRUE
#> .. .. .. .. @ properties : list()
#> .. .. .. .. @ additional_properties: logi FALSE
#> .. .. .. @ convert : logi TRUE
#> .. .. .. @ annotations: list()
#> @ json :List of 8
#> .. $ id : chr "chatcmpl-BXvLNXTcVj4JnbWoXD5MM38THCn4x"
#> .. $ object : chr "chat.completion.chunk"
#> .. $ created : int 1747424953
#> .. $ model : chr "gpt-4.1-nano-2025-04-14"
#> .. $ service_tier : chr "default"
#> .. $ system_fingerprint: chr "fp_eede8f0d45"
#> .. $ usage :List of 5
#> .. ..$ prompt_tokens : int 47
#> .. ..$ completion_tokens : int 42
#> .. ..$ total_tokens : int 89
#> .. ..$ prompt_tokens_details :List of 2
#> .. .. ..$ cached_tokens: int 0
#> .. .. ..$ audio_tokens : int 0
#> .. ..$ completion_tokens_details:List of 4
#> .. .. ..$ reasoning_tokens : int 0
#> .. .. ..$ audio_tokens : int 0
#> .. .. ..$ accepted_prediction_tokens: int 0
#> .. .. ..$ rejected_prediction_tokens: int 0
#> .. $ choices :List of 1
#> .. ..$ :List of 4
#> .. .. ..$ index : int 0
#> .. .. ..$ delta :List of 3
#> .. .. .. ..$ role : chr "assistant"
#> .. .. .. ..$ content : NULL
#> .. .. .. ..$ tool_calls:List of 2
#> .. .. .. .. ..$ :List of 4
#> .. .. .. .. .. ..$ index : int 0
#> .. .. .. .. .. ..$ id : chr "call_GX90sCdm6q0ZmDyPIvFnFcyE"
#> .. .. .. .. .. ..$ type : chr "function"
#> .. .. .. .. .. ..$ function:List of 2
#> .. .. .. .. .. .. ..$ name : chr "tool_001"
#> .. .. .. .. .. .. ..$ arguments: chr "{}"
#> .. .. .. .. ..$ :List of 4
#> .. .. .. .. .. ..$ index : int 1
#> .. .. .. .. .. ..$ id : chr "call_g2CARq9sl5qWBHIfVjD1chIy"
#> .. .. .. .. .. ..$ type : chr "function"
#> .. .. .. .. .. ..$ function:List of 2
#> .. .. .. .. .. .. ..$ name : chr "tool_001"
#> .. .. .. .. .. .. ..$ arguments: chr "{}"
#> .. .. ..$ logprobs : NULL
#> .. .. ..$ finish_reason: chr "tool_calls"
#> @ tokens : int [1:2] 47 42
#> @ text : chr ""
You can use @json to extract additional information that ellmer might not yet provide to you, but be aware that the structure varies heavily from provider to provider. The content types are part of ellmer's exported API, but they're still evolving, so they might change between versions.
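For example, you could dig the token usage details out of the JSON shown above. These field names are OpenAI-specific, so don't expect them to work with other providers:
turns[[2]]@json$usage$completion_tokens_details$reasoning_tokens
#> [1] 0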