Prompt Engineering: Part 2 – Best Practices for Software Developers in Digital Industries
Prompt engineering can significantly streamline software development processes, improving code comprehension, documentation, code review, code writing, and test case creation. However, effective use of Large Language Models (LLMs) like Microsoft Copilot and Code Sentinel hinges on clear, structured, and responsible prompts that follow regulatory and ethical constraints. Below are detailed steps, explanations, and practical use cases to help software developers master these skills. For a general guide and best practices on Prompt Engineering, the reader is encouraged to review Part 1. (Note: While newer LLMs may handle vague prompts better than earlier models, precision and explicit constraints remain critical for reducing hallucinations and ensuring secure, compliant outputs.)
Key Components of Effective Prompts for Developers
When interacting with AI assistants, developer prompts should typically include four key components (a short programmatic sketch of this structure follows the list below):
- Persona: Define clearly the role you want the AI to assume (e.g., “senior back-end engineer”).
- Context: Provide necessary background information (e.g., current project details, specific tech stack, or business context).
- Task: Clearly state what you want the AI to accomplish (e.g., generate tests, document methods, or review code).
- Format: Specify the desired output format (e.g., code snippets, bullet points, structured tables, and/or inline comments).
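For developers who assemble prompts programmatically (for example, when calling an LLM API from a script or an internal tool), the four components can be captured in a small helper. The sketch below is purely illustrative; the PromptParts interface and buildPrompt function are hypothetical names, and the structure, not the specific code, is the point.

```typescript
// Hypothetical helper showing how the Persona/Context/Task/Format structure
// can be assembled into a single prompt string before sending it to an LLM API.
interface PromptParts {
  persona: string;
  context: string;
  task: string;
  format: string;
}

function buildPrompt({ persona, context, task, format }: PromptParts): string {
  return [
    `Persona: ${persona}`,
    `Context: ${context}`,
    `Task: ${task}`,
    `Format: ${format}`,
  ].join("\n");
}

// Example usage, loosely mirroring the code-review example later in this article.
const reviewPrompt = buildPrompt({
  persona: "You are a seasoned software architect experienced in microservices.",
  context: "Review the provided Java Spring Boot authentication class. [Include class]",
  task: "Identify security risks, performance bottlenecks, and code smells; suggest fixes.",
  format: "A table with columns: Issue, Severity, Explanation, Suggested Improvement.",
});
console.log(reviewPrompt);
```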
Before looking at a few examples, it is important to note that prompts can be multimodal, meaning that they could include text, images, videos, and/or audio inputs. In this document, to keep it short, we will focus on text-only prompts. A Part 3 with multimodal prompts could be offered in the future if feedback from developers shows interest in this topic.
Examples for Improving Developer Productivity by Prompt Engineering
1. Code Comprehension
Developers often encounter unfamiliar legacy code or third-party libraries. Effective prompts can help identify those libraries and speed up comprehension.
Example Prompt:
Persona: You are an experienced software engineer specialized in embedded systems and real-time interrupt handling.
Context: I am analyzing the following C++ snippet used in an embedded controller system. [Include snippet here]
Task: Analyze and explain the interrupt-handling logic step by step; next, walk through a what-if scenario such as ‘what if multiple interrupts occur in close succession;’ finally, consider common ISR pitfalls (nesting, masking, priority levels) if relevant.
Format: Use bullet points for clarity.
Reasoning for Effectiveness:
- Clearly setting the Persona ensures the AI’s response aligns with the needed technical depth.
- Explicit Context narrows the AI’s focus on relevant code logic.
- A specific Task generates targeted insights without extraneous information.
- Bullet-point formatting and step-by-step reasoning enhance readability and rapid comprehension.
2. Code Documentation
Effective prompts for documentation improve readability and long-term maintainability of software.
Example Prompt:
Persona: You are a technical writer skilled at documenting Python APIs clearly and succinctly.
Context: Document the following Python function that calculates thermal expansion for a given material. [Include function here]
Task: Write clear, concise, and detailed docstrings, describing parameters, return values, and any exceptions raised.
Format: Provide the answer in Google-style Python docstrings.
Reasoning for Effectiveness:
- Using a technical-writer persona ensures concise, professional documentation.
- Clearly stating the documentation style (Google-style) ensures consistency across the codebase.
- Including specifics about parameters, returns, and exceptions promotes thorough documentation practices.
3. Code Review & Refactoring
AI can help developers quickly identify issues and suggest improvements, thus enhancing code quality and consistency.
Example Prompt:
Persona: You are a seasoned software architect with extensive experience in microservices architecture.
Context: Review the provided Java Spring Boot microservice class, responsible for user authentication. [Include class]
Task: Identify potential security risks, performance bottlenecks, or code smells. Suggest practical improvements or refactoring strategies and explain your reasoning step-by-step.
Format: Provide your response as a structured table with the following columns: Issue, Severity, Explanation, and Suggested Improvement.
Reasoning for Effectiveness:
- Establishing the reviewer’s seniority helps ensure insightful feedback.
- A clearly stated security and performance review scope ensures actionable suggestions.
- A structured table format makes the review easy to read and actionable.
4. Writing New Code
When using AI to generate new code, precision in prompts is essential for accurate outputs aligned with coding standards and styles; a rough sketch of the kind of component this prompt targets follows the example and its reasoning below.
Example Prompt:
Persona: You are a senior front-end developer with expertise in React, TypeScript, and scalable component architecture for front-end applications.
Context: I am developing a responsive, mobile-first component to display real-time sensor data. The data is retrieved from an API endpoint that returns JSON. The component must be reusable and update in real time.
Task: Create a reusable React component in TypeScript that fetches data from an API endpoint every 10 seconds, implements clean-up logic to prevent memory leaks, handles loading, error, and stale-data scenarios, accepts the API endpoint and refresh interval as props for flexibility, and can be integrated into a larger application (Bonus: make the component exportable and unit-testable so that edge cases can be covered).
Format: Provide the full component code with brief inline comments explaining key logic. List all open-source software that is referenced. Optionally, suggest a unit-test stub using React Testing Library.
Reasoning for Effectiveness:
- Specifying the front-end focus ensures optimal code suited to UI development.
- Mentioning precise requirements (real-time, reusable, responsive) guides the AI to generate highly practical solutions.
- Including inline comments promotes immediate readability and maintainability.
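To make the expected output of this prompt more concrete, here is a minimal sketch of the kind of component it targets. The SensorPanel and SensorReading names, the prop shape, and the endpoint handling are illustrative assumptions rather than a definitive implementation.

```typescript
import React, { useEffect, useState } from "react";

// Illustrative data shape; the real schema depends on your sensor API.
interface SensorReading {
  id: string;
  value: number;
  timestamp: string;
}

interface SensorPanelProps {
  endpoint: string;   // API endpoint passed in as a prop for reusability
  refreshMs?: number; // refresh interval, defaulting to 10 seconds
}

export function SensorPanel({ endpoint, refreshMs = 10000 }: SensorPanelProps) {
  const [data, setData] = useState<SensorReading[] | null>(null);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    let cancelled = false; // guards against state updates after unmount

    const fetchData = async () => {
      try {
        const response = await fetch(endpoint);
        if (!response.ok) throw new Error(`HTTP ${response.status}`);
        const json: SensorReading[] = await response.json();
        if (!cancelled) {
          setData(json);
          setError(null);
        }
      } catch (e) {
        if (!cancelled) setError((e as Error).message);
      }
    };

    fetchData();                                     // initial load
    const timer = setInterval(fetchData, refreshMs); // periodic refresh

    return () => {
      cancelled = true;     // clean-up: ignore in-flight responses after unmount
      clearInterval(timer); // clean-up: stop polling to prevent memory leaks
    };
  }, [endpoint, refreshMs]);

  if (error) return <p role="alert">Failed to load sensor data: {error}</p>;
  if (!data) return <p>Loading…</p>;
  return (
    <ul>
      {data.map((reading) => (
        <li key={reading.id}>
          {reading.value} at {reading.timestamp}
        </li>
      ))}
    </ul>
  );
}
```

The cancelled flag and the clearInterval call in the effect’s clean-up function correspond to the “clean-up logic to prevent memory leaks” requirement in the prompt.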
5. Test Case Generation
AI can efficiently create comprehensive and effective test cases, improving software robustness and reliability. Also, consider prompting the LLM to generate negative test cases and edge cases; a sketch of the kind of tests this prompt targets follows the example below.
Example Prompt:
Persona: You are an experienced QA automation engineer familiar with Jest and React Testing Library.
Context: I have a React hook (useSensorData) that manages sensor states and retrieves data periodically. [Include hook details]
Task: Generate unit test cases to thoroughly test the hook, focusing on proper state handling, error catching, and data retrieval. Include negative tests for improved robustness.
Format: Jest tests with descriptive test and expect statements.
Reasoning for Effectiveness:
- Assigning the persona of a QA automation engineer produces comprehensive, effective test cases.
- Explicit test focus (state, error handling, data fetching) ensures coverage of critical functionalities.
- Specifying Jest clarifies tool preferences and output expectations, ensuring immediate usability.
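As a rough illustration of the output such a prompt aims for, here is a sketch of Jest tests for a hypothetical useSensorData hook, using renderHook and waitFor from React Testing Library. The hook’s interface (it takes an endpoint and returns data and error fields) is an assumption made for this example.

```typescript
import { renderHook, waitFor } from "@testing-library/react";
import { useSensorData } from "./useSensorData"; // hypothetical hook under test

// Replace the real fetch with a mock so the tests run without a live API.
const mockFetch = jest.fn();
global.fetch = mockFetch as unknown as typeof fetch;

describe("useSensorData", () => {
  afterEach(() => {
    mockFetch.mockReset();
  });

  test("exposes data after a successful fetch", async () => {
    mockFetch.mockResolvedValueOnce({
      ok: true,
      json: async () => [{ id: "t1", value: 21.5 }],
    });

    const { result } = renderHook(() => useSensorData("/api/sensors"));

    // waitFor retries the assertion until the asynchronous state update lands.
    await waitFor(() => expect(result.current.data).toHaveLength(1));
    expect(result.current.error).toBeNull();
  });

  test("reports an error when the API call fails (negative test)", async () => {
    mockFetch.mockRejectedValueOnce(new Error("network down"));

    const { result } = renderHook(() => useSensorData("/api/sensors"));

    await waitFor(() => expect(result.current.error).not.toBeNull());
    expect(result.current.data).toBeNull();
  });
});
```

Mocking global.fetch keeps the tests independent of a live API, and the second test is the kind of negative case the prompt explicitly asks for.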
An Example of Ineffective vs. Effective Prompt Specific to Software Development
Here is a concise example tailored specifically to the software development process in Digital Industries, showing an ineffective prompt and its more effective counterpart for ChatGPT-4o.
Ineffective Prompt to ChatGPT-4o:
Help me write better tests.
ChatGPT-4o Response:
Writing better tests generally involves good test coverage, clearly defined requirements, and thorough documentation.
Effective Prompt to ChatGPT-4o:
Suggest three best practices for writing automated unit tests in embedded software development for IoT sensors in Digital Industries. Briefly explain the benefit of each.
ChatGPT-4o Response:
- Mock External Interfaces: Allows testing of software logic without hardware dependency, speeding up development cycles.
- Boundary and Edge-Case Testing: Improves reliability by ensuring software handles unexpected sensor data gracefully.
- Continuous Integration (CI) Integration: Automates early detection of regressions, increasing software quality and reducing deployment risk.
As you can see, the benefits of this improved prompt are now tied explicitly to software testing processes, ensuring practical value and applicability to developers. The prompt specifies the software type (embedded software), the use case (IoT sensors), and the industry context (Digital Industries); the explicit request for three best practices yields a clear, structured, actionable answer and guides the LLM on the desired format and length of the response. Note that I deliberately asked for a short response so that the example would fit in this document. In your case, you could get a more detailed response if you provide a role (e.g., quality assurance professional) and ask the LLM to frame its response around structured thinking, explain its reasoning step-by-step, and/or use few-shot prompting as explained in Part 1 of this document.
Regulatory Guardrails for Developers
In addition to obtaining the most accurate responses and reducing hallucinations, effective prompt engineering should include assessing privacy implications, examining potential biases, maintaining data protection standards, and upholding principles of fairness and transparency. In that vein, software developers should follow certain company-defined guardrails within their prompts to ensure that what is created can be protected, avoids potential legal risks, safeguards the company’s reputation, and does not harm stakeholders. Here are a few regulatory-risk examples associated with the use of prompt engineering in AI tools that developers need to be aware of:
- The use of AI-generated source code can create legal risks if the AI tool copied that code from a third party: Do not provide the AI tool with input prompts that would suggest the generation of third-party code as output.
- There is a risk of losing copyright protection in your software if you blindly use prompt engineering to generate code without any guardrails: Use all standard code-checking processes to check for the inclusion of open-source software.
- Avoid using prompts that would result in generation of large blocks of code for any application where copyright protection is desired: Where possible, modify AI-generated code to add human-produced original content.
- Do not prompt an LLM to automatically translate code into a different programming language: It may be difficult to obtain copyright protection for any translated code that is generated by an LLM.
- Use the human-in-the-loop concept for all safety-critical applications: Where AI-generated code is used in safety-critical systems, ensure that review-by-human and validation-by-human occur before deployment.
Common Pitfalls for Developers and How to Avoid Them
A common pitfall in the use of LLMs to generate code is the risk of small inaccuracies (e.g., a missing minus sign) and hallucinations (i.e., generated outputs that are nonsensical or contain entirely fabricated code despite appearing plausible). The best way to mitigate these issues is to carefully review and test the generated code. Here are a few other pitfalls along with suggested mitigation strategies:
- Vague Prompts: Avoid general requests like “Improve this code.” Instead, precisely state what’s needed: performance optimization, readability, bug fixes, etc.
- Overloaded Tasks: Don’t combine multiple complex tasks in one prompt (e.g., refactoring and documenting). Split into separate requests for clarity and accuracy.
- Security Flaws and Missing Context: AI-generated code can inadvertently introduce serious security flaws. Check for subtle bugs that could be exploited. Also, always provide essential contextual information (frameworks, code examples, project-specific constraints) to avoid irrelevant or generic responses.
- Ignoring Code Standards: Specify coding standards or constraints (e.g., Microsoft, Siemens) to maintain codebase consistency and compliance.
- Underdefined Prompts: Prompts such as “write a sorting function” can yield unhelpful results because there are many sorting algorithms suited to different cases (e.g., bubble sort, merge sort, heap sort, quick sort).
- Ignoring Code Dependencies/Version Control: An LLM might return code that uses outdated libraries or deprecated APIs. When prompting for code that interacts with third-party APIs, explicitly state the target API version and check the generated code against the vendor’s current documentation.
- Not Defining I/O Constraints or API Version: An LLM may make its own assumptions about input formats and error handling when asked to write new code. Specify possible input data types, I/O behavior, and expected exceptions (see the sketch after this list).
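One way to address the last pitfall is to paste an explicit, typed contract into the prompt so the model does not have to guess input types, output shape, or error behavior. The SensorSample and meanTemperature names below are hypothetical, shown only to illustrate the level of detail worth specifying.

```typescript
// Hypothetical typed contract to include in a prompt, making input types,
// output shape, and error behavior explicit instead of leaving them to the LLM.
export interface SensorSample {
  timestamp: string; // ISO 8601
  celsius: number;   // may be NaN when the sensor misreads
}

// Prompt constraint: "Implement exactly this signature; throw a RangeError
// when there are no valid (non-NaN) readings." A matching implementation:
export function meanTemperature(samples: SensorSample[]): number {
  const valid = samples.filter((s) => !Number.isNaN(s.celsius));
  if (valid.length === 0) {
    throw new RangeError("no valid sensor readings");
  }
  return valid.reduce((sum, s) => sum + s.celsius, 0) / valid.length;
}
```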
Recommended Resources for Further Mastery:
- Microsoft Prompt Engineering: https://microsoft.github.io/prompt-engineering/
- Microsoft Copilot Official Documentation: https://docs.microsoft.com/en-us/copilot/
- OpenAI Cookbook for Developers: https://github.com/openai/openai-cookbook
- ChatGPT Prompt Engineering for Developers: https://learn.deeplearning.ai/chatgpt-prompt-eng/
- Anthropic Claude Prompt Engineering: https://github.com/anthropics/prompt-eng-interactive-tutorial
- Reflexive Prompt Engineering: https://arxiv.org/abs/2504.16204
- Autonomous code development system (AutoGPT): https://github.com/Significant-Gravitas/AutoGPT
- Prompt engineering workbench (PromptLayer): https://www.promptlayer.com/
Summary:
The following summary can be used as a quick reference for what was discussed in this article:
- Clearly define Persona, Context, Task, and Format in your prompts.
- Use specialized personas matching your technical needs (QA engineer, technical writer, etc.).
- Provide explicit and complete context and constraints to get precise, compliant responses.
- Break down complex tasks into smaller, manageable prompts.
- Enforce consistency through structured formatting and clear expectations.
Mastering these techniques, and avoiding the corresponding technical and regulatory pitfalls, empowers software developers to significantly enhance productivity, reduce cycle times, and elevate code quality and compliance by effectively leveraging AI tools like Microsoft Copilot in everyday software development tasks. To support continuous improvement, please contact me directly if you find a more effective alternative to the provided examples. (Note: This document was prepared with the help of AI.)


