Introduction to LLMs
Understand how AI and Large Language Models (LLMs) power MyPip.
🧠 What is an LLM?
LLM stands for Large Language Model. MyPip uses Anthropic’s Claude 3.5 Sonnet and Claude Sonnet 4 to generate app logic, code, and UI from your natural-language prompts. These models are similar to OpenAI’s ChatGPT and Google’s Gemini: they are designed to understand and generate human-like text, including programming code.
LLMs don’t “know” things the way a human or a search engine does; they are essentially very sophisticated autocomplete systems. Having seen billions of text examples during training, they predict what comes next in a sentence or code snippet.
In MyPip, LLMs help convert your app idea into structured UI layouts, component code, and logic by recognizing patterns in app design and mobile architecture.
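For example, a prompt like “Add a mood rating screen with emoji buttons” might be turned into a component along these lines. This is a hypothetical sketch of the kind of React Native (TypeScript) code an LLM can generate, not MyPip’s exact output:

```tsx
// Hypothetical sketch of LLM-generated output; names and structure
// are illustrative, not MyPip's actual code.
import React, { useState } from 'react';
import { Text, TouchableOpacity, View } from 'react-native';

const MOODS = ['😞', '😐', '🙂', '😄'];

export function MoodRatingScreen() {
  const [selected, setSelected] = useState<number | null>(null);

  return (
    <View>
      <Text>How are you feeling today?</Text>
      {MOODS.map((emoji, index) => (
        <TouchableOpacity key={emoji} onPress={() => setSelected(index)}>
          <Text>{selected === index ? `[${emoji}]` : emoji}</Text>
        </TouchableOpacity>
      ))}
    </View>
  );
}
```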
✅ Important: MyPip never uses your project data to train the AI models.
✍️ What’s a Prompt?
A prompt is the message or idea you send to the AI.
Examples of prompts in MyPip:
“I want an iOS app for mood tracking with a journal, emoji rating, and calendar view.”
“Build a mobile task manager with folders, reminders, and a dark mode.”
The clearer and more detailed your prompt, the better the AI’s output. Learn more in the Prompting Guide.
🧩 What is Context?
Context is all the information the AI currently knows while generating your app. This includes:
Your prompt
Prior responses
UI components already built
Any project memory shared with the AI
Each LLM has a context window: the maximum number of tokens (small chunks of text or code; see What Are Tokens? below) it can process at once. MyPip uses models with large context windows, but long sessions can still cause the AI to lose track of earlier inputs.
You can summarize or reset context to keep performance optimal. See Resetting AI Context for instructions.
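Conceptually, context is just the running list of messages that is re-sent to the model with each request. Here is a minimal sketch of that shape (the role/content structure mirrors common chat-style LLM APIs; MyPip’s internal format may differ):

```ts
// Minimal sketch of conversation context as a running message list.
// The role/content shape mirrors common chat-style LLM APIs;
// MyPip's internal representation may differ.
type Role = 'user' | 'assistant';

interface Message {
  role: Role;
  content: string;
}

const context: Message[] = [
  { role: 'user', content: 'I want an iOS app for mood tracking.' },
  { role: 'assistant', content: '…generated screen code…' },
  { role: 'user', content: 'Add a calendar view.' },
];

// Each request re-sends this whole list, so per-turn token cost grows
// with the conversation until the context window fills up.
```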
🔢 What Are Tokens?
LLMs process text in tokens—small chunks of words or code.
Examples of tokens:
“Hello world!” = 3 tokens (Hello, world, !)
<Button title="Click me" /> = ~6 tokens
Tokens matter because:
Every prompt and response uses tokens
Token limits affect output length and memory
Pricing (if applicable) is often based on token usage
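Exact counts require running the model’s tokenizer, but a common rule of thumb is roughly 4 characters of English text per token. Here is a quick estimator sketch (the 4-characters-per-token ratio is an approximation, not MyPip’s billing logic):

```ts
// Rough token estimate using the common "~4 characters per token"
// rule of thumb. Real tokenizers (and any billing based on them)
// will differ somewhat.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens('Hello world!'));                 // ~3
console.log(estimateTokens('<Button title="Click me" />'));  // ~7
```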
📊 Approximate Token Costs in MyPip
| Task | Estimated Token Usage |
| --- | --- |
| Simple screen (1–2 components) | 100–200 tokens |
| Multi-screen layout | 500–800 tokens |
| App with auth & logic | 1,000–2,000+ tokens |
| Full exportable mobile app | 5,000–8,000+ tokens |
Token usage depends on your prompt size and how many components/screens are generated.
💸 Tokens & Cost Efficiency
If you're on a paid plan, MyPip may charge based on the number of tokens processed in each run.
To reduce token usage:
Use short, clear prompts
Avoid repetitive edits
Reset the AI’s context when switching ideas
Use summary mode instead of full regeneration
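The last two tips boil down to the same idea: shrink the message list before it is re-sent. Below is a hypothetical sketch, repeating the Message shape from the context sketch above so the snippet stands alone (MyPip’s summary mode is not necessarily implemented this way):

```ts
// Hypothetical sketch: collapse old turns into a short summary once
// the history grows, so each request re-sends far fewer tokens.
// MyPip's actual summary mode may work differently.
type Role = 'user' | 'assistant';
interface Message { role: Role; content: string; }

function compactContext(history: Message[], maxRecent = 10): Message[] {
  if (history.length <= maxRecent) return history;

  const older = history.slice(0, history.length - maxRecent);
  const recent = history.slice(-maxRecent);

  // In practice the summary text would itself be produced by the LLM;
  // here we only note what was dropped.
  const summary: Message = {
    role: 'assistant',
    content: `Summary of ${older.length} earlier messages: …`,
  };
  return [summary, ...recent];
}
```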
⚠️ Accuracy and Limitations
While LLMs are powerful, they do have limits:
Outdated training data: The model may not know the latest versions of mobile frameworks (e.g., it may still generate React Native 0.72 patterns when 0.74 is current).
Hallucinations: The AI might confidently generate incorrect logic, missing props, or broken imports.
Non-determinism: Even the same prompt can return different results each time.
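Non-determinism is easy to see first-hand: sampling the same prompt twice usually produces different text whenever the temperature is above zero. Here is a sketch using Anthropic’s TypeScript SDK (the model ID and parameters are illustrative, and MyPip makes such calls server-side for you; this is only to demonstrate the behavior):

```ts
// Sketch: the same prompt sampled twice usually yields different text,
// because generation is stochastic at temperature > 0. The model ID is
// illustrative; MyPip runs such calls server-side for you.
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function sampleTwice(prompt: string): Promise<void> {
  for (let run = 1; run <= 2; run++) {
    const message = await client.messages.create({
      model: 'claude-3-5-sonnet-20240620',
      max_tokens: 256,
      temperature: 1, // higher temperature = more variation between runs
      messages: [{ role: 'user', content: prompt }],
    });
    const block = message.content[0];
    console.log(`Run ${run}:`, block.type === 'text' ? block.text : block);
  }
}

sampleTwice('Write a one-line tagline for a mood-tracking app.').catch(console.error);
```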
🔍 Always preview and test your generated app before exporting or publishing.