Vibe Coding Forem

David Christian Liedle

I had Claude Code write a book on shell scripting in 2025

Opus 4.1 wrote this one markdown chapter at a time. Hopefully it helps anyone interested in shell scripting or in working in the terminal with Claude Code, Gemini CLI, or Codex CLI.

https://github.com/cloudstreet-dev/Shell-Scripting-in-2025

Top comments (2)

David Christian Liedle

Note that Claude Code, Gemini CLI, and Codex CLI are not shell scripts; they're npm packages. I mention them in the book because learning shell helps you navigate the terminal, which is where those live and run. github.com/cloudstreet-dev/Shell-S...

Martial Terran

Can you make and link a URL to a YouTube video showing what is meant by "shell scripting"? Do you mean DOS-type batch files? PS_AI scripts?

"All the platforms compress history, bastardising your message history into garbage." What about Google AI Studio? How many tokens before Google AI Studio compresses message history? [I have noticed that after 100,000 tokens, the python scripts tend to have more typos and errors, so I manually delete obsolete prompts/responses (prior versions or prior debugging)...]

Gemini (Chrome Tab) responds:
While Google AI Studio uses models with very large context windows, the specific point at which it compresses or manages message history is related to the model's token limit for a single conversation.

The key details regarding token limits and history management in Google AI Studio are:

Context Window Limit (Per Chat): The maximum conversation history is determined by the context window size of the specific Gemini model you are using. For example, the gemini-1.5-flash model has a massive context window of around 1 million tokens for input.

No Automatic Reset: The token count does not reset daily for the same chat. The token count for a conversation keeps accumulating until it hits the model's maximum limit.

Compression is a Strategy: Unlike some other platforms that may aggressively compress at a lower, fixed token count, Gemini models are designed to utilize the entire large context window. For developers using the Gemini API, there are strategies for explicitly compressing the history (e.g., summarizing old messages into a TLDR) when the context gets too long, but this is a choice made by the developer, not an automatic compression at a low threshold in the AI Studio chat interface.
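As a rough illustration of the developer-side "summarize old messages into a TLDR" strategy described above, here is a minimal Python sketch. Everything in it is hypothetical: `summarize()` is a placeholder where a real implementation would call a model, the ~4-characters-per-token estimate is a common rule of thumb rather than an exact count, and the 200,000-token threshold is just an example.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token (a heuristic, not exact)."""
    return max(1, len(text) // 4)

def summarize(text: str) -> str:
    """Placeholder: a real implementation would call a model to produce a TLDR."""
    return text[:500]

def compress_history(messages, max_tokens=200_000, keep_recent=4):
    """If the running total exceeds max_tokens, fold older messages into a
    single summary message and keep only the most recent ones verbatim."""
    total = sum(estimate_tokens(m["content"]) for m in messages)
    if total <= max_tokens or len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    tldr = summarize(" ".join(m["content"] for m in old))
    return [{"role": "user", "content": f"TLDR of earlier chat: {tldr}"}] + recent

if __name__ == "__main__":
    # 300 messages of ~1,000 estimated tokens each: well over the threshold.
    history = [{"role": "user", "content": "x" * 4000} for _ in range(300)]
    compressed = compress_history(history)
    print(len(compressed))  # one TLDR message plus the 4 most recent
```

The key design choice is that compression happens in the caller's code, before the request is sent, which matches the point above: the platform does not do this for you at a low threshold.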

Practical Performance: While the official limit is very high (e.g., 1 million tokens), some users of AI Studio have reported the interface becoming sluggish or the model forgetting context at much lower, though still very large, token counts (e.g., around 200,000 to 250,000 tokens). This suggests that while the model can hold a million tokens, practical performance may degrade before reaching that hard limit.

In short, Google AI Studio and its underlying models offer a large token limit (e.g., 1 million tokens for gemini-1.5-flash), but this limit is the conversation ceiling, not a trigger for early compression. You only hit a hard stop on continuing the chat when the total input tokens (including all previous history) approaches that limit.