Code review for a pull request prompt
Produces a constructive, prioritized review report from a diff.
Battle-tested prompts that speed up daily developer tasks: writing tests, refactoring and debugging.
Produces unit tests covering happy path, edge cases and error conditions for a function or class.
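For illustration only, a sketch of what such a test suite might look like, assuming a hypothetical `parsePrice` function and Jest-style helpers (`describe`, `it`, `expect`):

```typescript
// Hypothetical function under test: parses "12.50" into cents.
function parsePrice(input: string): number {
  const value = Number(input);
  if (!Number.isFinite(value) || value < 0) {
    throw new Error(`Invalid price: ${input}`);
  }
  return Math.round(value * 100);
}

describe("parsePrice", () => {
  it("handles the happy path", () => {
    expect(parsePrice("12.50")).toBe(1250);
  });

  it("handles the zero edge case", () => {
    expect(parsePrice("0")).toBe(0);
  });

  it("rejects invalid input", () => {
    expect(() => parsePrice("abc")).toThrow("Invalid price");
  });
});
```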
Refactors a messy code block without changing its behavior, improving naming and structure.
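A purely hypothetical before/after sketch of that kind of behavior-preserving cleanup:

```typescript
// Before: unclear names and a manual loop.
function f(a: number[]): number {
  let x = 0;
  for (let i = 0; i < a.length; i++) { if (a[i] > 0) { x = x + a[i]; } }
  return x;
}

// After: same behavior, clearer name and idiomatic structure.
function sumOfPositives(values: number[]): number {
  return values.filter((v) => v > 0).reduce((sum, v) => sum + v, 0);
}
```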
Identifies common vulnerabilities (OWASP Top 10 first) and proposes fixes.
Turns algorithmic pseudocode into idiomatic, testable code in a target language.
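For example, pseudocode along the lines of "keep the even numbers, then sum them" might come back as something like this (TypeScript used here only as an example target language):

```typescript
// Sums the even numbers in a list; a direct, testable
// translation of the pseudocode "filter evens, then sum".
function sumOfEvens(numbers: number[]): number {
  return numbers
    .filter((n) => n % 2 === 0)
    .reduce((total, n) => total + n, 0);
}
```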
Maps failure points in a piece of code and proposes a clear resilience strategy.
Teaches an algorithm with visual intuition, a concrete example and a small reference implementation.
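As one possible output, the small reference implementation for binary search could look roughly like this (a sorted ascending array is assumed):

```typescript
// Binary search over a sorted (ascending) array.
// Returns the index of target, or -1 if it is not present.
function binarySearch(sorted: number[], target: number): number {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1;
}

// Example: binarySearch([1, 3, 5, 8, 13], 8) === 3
```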
Adds tight, readable TypeScript types to untyped or loosely typed code without changing runtime behavior.
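A small sketch of what that can look like, using a hypothetical `formatUser` helper; the runtime logic stays untouched:

```typescript
// Before: loosely typed, everything is "any".
// function formatUser(user: any): any { ... }

// After: the same runtime logic, with precise types.
type User = {
  id: number;
  name: string;
  email?: string; // optional, matching how the code already treats it
};

function formatUser(user: User): string {
  return user.email ? `${user.name} <${user.email}>` : user.name;
}
```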
Drafts a README so a new developer or user can grasp the project in under 5 minutes.
Produces a small, secure multi-stage Dockerfile with a short usage guide.
Improves a slow SQL query with index, rewrite and schema suggestions.
Designs CRUD and workflow endpoints for a resource with validation and error codes.
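For a hypothetical `Task` resource, the endpoint outline such a prompt might produce could look roughly like this (paths and status codes are illustrative, not prescriptive):

```typescript
// Illustrative endpoint map for a hypothetical "Task" resource.
// Validation failures return 400, missing resources 404.
const taskEndpoints = [
  { method: "POST",   path: "/tasks",     success: 201, errors: [400] },      // create (validates title)
  { method: "GET",    path: "/tasks",     success: 200, errors: [] },         // list with pagination
  { method: "GET",    path: "/tasks/:id", success: 200, errors: [404] },      // read
  { method: "PATCH",  path: "/tasks/:id", success: 200, errors: [400, 404] }, // partial update
  { method: "DELETE", path: "/tasks/:id", success: 204, errors: [404] },      // delete
  { method: "POST",   path: "/tasks/:id/complete", success: 200, errors: [404, 409] }, // workflow transition
] as const;
```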
Generates a regex for a described goal, explains it piece by piece and tests it against samples.
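For instance, a request for "an ISO-style date such as 2024-07-15" might yield something along these lines, with the breakdown as comments and a few sample checks:

```typescript
// Matches ISO-style dates such as "2024-07-15".
// ^\d{4}                   four-digit year at the start
// -(0[1-9]|1[0-2])         month 01-12
// -(0[1-9]|[12]\d|3[01])$  day 01-31 at the end
const isoDate = /^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$/;

console.log(isoDate.test("2024-07-15")); // true
console.log(isoDate.test("2024-13-01")); // false (month out of range)
console.log(isoDate.test("15/07/2024")); // false (wrong format)
```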
Suggests 2–3 design patterns for a problem, weighs the pros and cons of each, and justifies the recommended one.
Audits a component against WCAG 2.2 and makes it screen-reader friendly.
Produces a fast, cached and secure CI/CD config with concise usage notes.
Writes a Conventional Commits-compliant, concise commit message.
Balances the test pyramid, picks the right level per risk and outlines an automation roadmap.
Extracts a timeline from messy logs and produces root-cause hypotheses.
Plans a schema change in production in incremental, safe steps.
Analyzes slow or resource-heavy code, forms hypotheses and proposes measurable improvements.
Stress-tests a feature from three angles and surfaces easy-to-miss boundary cases.
Analyzes an error step by step, explains the root cause and suggests a minimal fix.
Copy the prompt, fill any placeholders with your context, then run it in your chosen AI model. Improve results by adding 1–2 constraints (audience, tone, output format).
Provide clear context. Specify the output format (bullets, table, JSON). Avoid sharing unnecessary sensitive information.
The same prompt will not behave identically in every model; each one varies in style and quality. Try it across a few models and keep the one that works best for you.
These prompts are not professional advice; the content is for general purposes only. Consult a qualified professional for important decisions.