
Prompt Engineering in Practice

  • catalinapaun
  • 2 days ago
  • 3 min read

Author: Radu Bobe - Digital Identity Consultant


Continuing the Innovation Circle series, our fourth session brought colleagues together around one of the most practical and immediately applicable skills in the AI ecosystem: prompt engineering.


From "Please help" to production-ready output 

The goal was to demonstrate that the difference between a mediocre output and a production-ready one lies, most of the time, in the quality of the prompt. 

The first myth to fall was that prompt engineering is some kind of native talent or creative magic. It's a learnable, structured skill. An AI model doesn't guess intention; it amplifies input. Poor input, poor output, regardless of how advanced the model is.  


The RCCF Framework: structure that makes the difference 

At the core of the session was a framework for building effective prompts: RCCF — Role, Context, Command, Format.

Each component has a precise purpose. Role defines who the assistant is in that context ("Act as a senior backend engineer who has seen every bad pattern"). Context provides the real situation: tech stack, constraints, traffic volume. Command specifies exactly what needs to be done, step by step. Format describes what the output should look like. 
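The four components can be sketched as a small reusable template. This is a minimal illustration, not the session's actual material: the helper name, the task and every parameter value below are invented for the example.

```python
def build_rccf_prompt(role: str, context: str, command: str, fmt: str) -> str:
    """Assemble the four RCCF components into a single prompt string."""
    return "\n\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Command: {command}",
        f"Format: {fmt}",
    ])

# Hypothetical values, loosely echoing the CSV-and-email task from the session.
prompt = build_rccf_prompt(
    role="Act as a senior backend engineer who has seen every bad pattern.",
    context="Python 3.12 service, CSV uploads up to 100 MB, strict error-handling policy.",
    command="1. Parse the CSV. 2. Validate each row. 3. Email a summary of failures.",
    fmt="A single Python module with type hints, docstrings and explicit error handling.",
)
print(prompt)
```

Keeping the components as separate named parameters makes it obvious when one of them is missing — a prompt with an empty Context or Format is usually the one that produces generic output.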

To illustrate the framework, Radu Bobe presented a before-and-after comparison that made it concrete: the same CSV processing and email task went from 40 lines of generic code with no error handling to a production-ready solution on the first iteration — saving 2-4 hours of refactoring in exchange for 90 seconds spent writing a better prompt.


Eight technical principles, demonstrated in practice 

The session continued with eight principles that transform the way we interact with AI on a daily basis. 

The first — and most fundamental — is clarity and direction. Claude doesn't guess. Numbered steps, explicit constraints and business context dramatically reduce hallucination and increase output precision.

The second principle, WHY before WHAT, generated significant interest in the room. Explaining the reason behind an instruction turns a command into understanding. Claude can handle edge cases correctly, prioritize in conflicting situations and make contextual decisions — rather than blindly following rules. 
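As a hypothetical illustration of this principle (not an example shown in the session), compare a bare rule with one that carries its reason:

```python
# WHY before WHAT: the same instruction, without and with its reason.
# Both strings are invented for illustration.
command_only = "Retry failed requests three times."

with_reason = (
    "Retry failed requests three times, because the upstream API drops "
    "about 1% of calls under load. Never retry on 4xx responses: those "
    "signal a client-side bug that retrying cannot fix."
)
```

The second version lets the model generalize: it can decide on its own that a 404 should not be retried, because it understands what the retries are for.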

The remaining principles covered:

  • numbered steps, for predictable output and easy debugging;
  • few-shot examples, where 3-5 concrete examples outperform any verbal explanation for format control;
  • role setting in the system prompt, for consistent tone and behavior across an entire session;
  • Extended Thinking, activated selectively for complex problems;
  • agentic systems with structured state, using git discipline and explicit break points;
  • long context with parallel tool calls — a technique that can deliver a 3-10x speedup compared to serial approaches.
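The speedup from parallel tool calls comes from the fact that independent, I/O-bound calls can run concurrently instead of one after another. The following is a simulation of that effect, not the session's demo — the `fetch` function and its sleep time stand in for real tool calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(source: str) -> str:
    """Simulated tool call that blocks on I/O for 0.1 s."""
    time.sleep(0.1)
    return f"result from {source}"

sources = ["docs", "tests", "changelog", "issues"]

# Serial: four calls back to back, roughly 0.4 s total.
start = time.perf_counter()
serial = [fetch(s) for s in sources]
serial_time = time.perf_counter() - start

# Parallel: the four calls overlap, roughly 0.1 s total.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(fetch, sources))
parallel_time = time.perf_counter() - start

assert serial == parallel  # same results, just faster
```

The ratio depends on how many calls are independent and how long each one blocks, which is why the observed speedup is a range rather than a constant.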


The anti-patterns that break everything 

A dedicated section on anti-patterns clarified what consistently doesn't work: excessive politeness, negative formulations instead of affirmative ones, the single mega-prompt trying to do everything at once, and vague instructions like "make it better." Each anti-pattern came paired with the correct alternative, giving colleagues something immediately actionable to take back to their desks. 


Skills in Claude Code: automation and consistency 

The second part of the session introduced Skills — a mechanism through which Claude Code can be "trained" once for a specific workflow, with the benefits carrying over to every subsequent session. A skill is a folder containing a mandatory SKILL.md file, executable scripts, and reference documentation. The Progressive Disclosure principle ensures that only relevant information is loaded into context at any given time, minimizing token usage and keeping responses focused. 
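A skill folder might look like the sketch below. The skill itself (`csv-report`) is invented for illustration; the layout assumes the standard `SKILL.md` frontmatter with `name` and `description` fields:

```text
csv-report/
├── SKILL.md          # mandatory entry point, read first
├── scripts/
│   └── validate.py   # executable helper invoked by the skill
└── reference.md      # detailed docs, loaded only when needed

# SKILL.md (hypothetical contents)
---
name: csv-report
description: Validate and summarize the team's weekly CSV exports
---
When asked to process a weekly CSV export:
1. Run scripts/validate.py on the input file.
2. Summarize row counts and validation errors in a table.
```

Only the short frontmatter is always visible; the body and the reference material are pulled in on demand, which is exactly the Progressive Disclosure behavior described above.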

Alongside Skills, the session presented CLAUDE.md: the project configuration file read automatically at the start of every session, defining team conventions, technical context and actions that require explicit confirmation before execution.
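A minimal CLAUDE.md covering those three areas might look like this — every project detail below is a made-up example, not a real configuration from the session:

```text
# CLAUDE.md (hypothetical example)

## Technical context
- Python 3.12 backend, FastAPI, PostgreSQL

## Team conventions
- Type hints on all new functions
- New endpoints need unit tests before merge

## Ask for explicit confirmation before
- Running database migrations
- Deleting files or force-pushing to shared branches
```

Because the file is read at every session, conventions written once apply consistently without being repeated in each prompt.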


Clear takeaways, applicable from tomorrow 

The session closed with eight key conclusions, from RCCF and "show, don't tell" through examples, to parallel tool usage and combining CLAUDE.md with Skills for a complete team-level AI stack. 

Everyone works with AI daily, but few invest 90 seconds in writing a genuinely good prompt. This session made a compelling case that it's worth it. 


Stay tuned for the next Innovation Circle sessions — our agentic journey continues!
