How to Boost Your Software Development Workflow with ChatGPT-4: 4 Practical Use Cases

As a software engineer, efficiency and speed are of the essence. In my journey to optimize my workflow, I've found ChatGPT-4 to be an invaluable tool. In this blog post, I'll share some of the ways I have incorporated ChatGPT into my day-to-day role, along with the prompts I use and examples of the output.

Disclaimer

  • Version: For the purposes of this post, I am using ChatGPT-4 and not its previous iterations.
  • Licensing & Work Rules: Always consider licensing and work-related rules before posting or sharing any source code. Make sure you are allowed to use, modify, and share the material.

Generating Pull Request Descriptions From Git Diffs

Writing PR descriptions is boring and repetitive... most of the time. PRs always feel self-explanatory and it's very easy to get lazy. It's also very annoying to deal with your own laziness months or years later, when you're trying to understand the history and cursing yourself for not writing more. Well, it's now significantly easier to remove some of this work (in most cases) by leveraging an LLM like ChatGPT.

How I Do This

I typically follow this workflow:

  1. git diff > diff.txt
  2. Copy the parts I want (you quickly learn GPT likes to waffle on about extremely trivial changes), as shown in the sketch after this list
  3. Write the prompt
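
If you find yourself doing this often, steps 1 and 2 can be scripted. Here is a minimal sketch, assuming the diff is saved as diff.txt; the INTERESTING paths are purely illustrative examples, not part of my actual workflow:

from pathlib import Path

# Illustrative list of paths worth describing; everything else gets dropped.
INTERESTING = ("src/_pytest/doctest.py", "src/_pytest/fixtures.py")

diff = Path("diff.txt").read_text()

# A git diff starts each file's section with "diff --git", so split on that marker
# and keep only the sections whose header line mentions a path we care about.
sections = ["diff --git" + chunk for chunk in diff.split("diff --git")[1:]]
kept = [s for s in sections if any(path in s.splitlines()[0] for path in INTERESTING)]

prompt = "Generate me a PR description based on the following diff:\n\n" + "".join(kept)
print(prompt)  # paste into ChatGPT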

An Example

Generate me a PR description based on the following diff:

The diff was taken from this pytest pull request

And ChatGPT produced this response:

Summary

This PR includes substantial refactoring and type hinting improvements in pytest's doctest and fixtures modules. The changes are primarily focused on simplifying code, improving type hinting, and removing duplicated or unnecessary logic.

Files Changed

src/_pytest/doctest.py: 24 additions & 38 deletions
src/_pytest/fixtures.py: 36 additions & 34 deletions
src/_pytest/python.py: 2 additions & 3 deletions
testing/python/fixtures.py: 8 additions

Key Changes

Doctest Improvements

Simplified the setup() method in DoctestItem class.
Explicit type hinting is now available for various function parameters.
Removed the _setup_fixtures method as it was redundant after the refactoring.

Fixture Enhancements

Improved type hinting for function parameters.
The function getfixtureinfo() has been optimized to handle None values for function parameters.
Changed the data type for parametrize_argnames from List[str] to Set[str] for better performance.
Added deduplicate_names() function to remove duplicate names while preserving order.

Miscellaneous

Updated some parts of the code to be more Pythonic and straightforward.

Testing

Tests were added to confirm the functionality of deduplicating names in fixtures.

Backwards Compatibility

The changes are mostly internal and shouldn't affect the public API, maintaining backward compatibility.

Additional Notes

This PR is a part of an ongoing effort to improve pytest's code quality and maintainability.

Please review and provide any feedback for further improvements.

This produces a much richer description than I would write for such a small PR, and it took almost no effort. You can clearly see that there are some parts you wouldn't want to include, such as listing the files changed, and possibly the info on tests. It can also sometimes offer its own ideas on future work, which you will obviously ignore most of the time. Another gotcha to look out for is very small PRs: GPT loves to pad them out with crap such as "Added import for re module". You will either need to cut these out or adjust your prompt.

A free bonus tip that works half the time is to ask for the response in a copy-pastable markdown format. Ideal for GitHub, GitLab, etc.

Turning Classes Into Fixtures and Mock Data

I used to love testing, but over time the magic wore off and it became inane and boring, just like every other task you complete over and over and over... One of the worst parts for me is creating fixtures for integration, system and end-to-end tests. While there are some nice libraries that let you quickly write code to auto-generate dummy data (such as faker), sometimes you just need something simple and you need it now.
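
For context, the faker route looks something like the sketch below; every field name and value range here is an arbitrary choice of mine, not something from a real model:

from faker import Faker

fake = Faker()

# Generate a handful of dummy records with plausible-looking values.
records = [
    {
        "title": fake.sentence(nb_words=4),
        "id": fake.uuid4(),
        "keywords": fake.words(nb=3),
        "word_count": fake.random_int(min=500, max=3000),
    }
    for _ in range(4)
]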

How I Do This

Very simple workflow: give ChatGPT a class or model definition, and ask for your desired format back.

Another Example

The prompt:

I want you to generate me 4 json objects that I can use in a json file, made up of dummy data based on the following class definition:

@dataclass
class Article:
    title: str
    id: UUID
    keywords: List[str]
    word_count: int

And the response we get:

[
    {
        "title": "Exploring the Deep Ocean",
        "id": "58ebd71f-a440-4f89-8df3-7692e38804de",
        "keywords": ["ocean", "exploration", "marine biology"],
        "word_count": 1500
    },
    {
        "title": "The History of Coffee",
        "id": "96178b90-8baf-47d7-9882-164690c9de6a",
        "keywords": ["coffee", "history", "beverage"],
        "word_count": 1800
    },
    {
        "title": "Understanding Quantum Mechanics",
        "id": "d2d33cfc-2d8c-4a2d-a8a7-19e04a3d8b15",
        "keywords": ["quantum mechanics", "physics", "science"],
        "word_count": 2100
    },
    {
        "title": "The Impact of Climate Change",
        "id": "5ce40f15-5a92-41c3-935e-e9293d2f580a",
        "keywords": ["climate change", "environment", "global warming"],
        "word_count": 2300
    }
]

This is rapid in comparison to the pre-LLM days of writing something quick and dirty. The data is coherent, easy to change, and it understood what a UUID was. We could generate more or fewer items depending on our needs.
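
To show where this ends up, here is a minimal sketch that loads the generated data into Article instances as a pytest fixture; the articles.json filename and the fixture name are assumptions for illustration:

import json
from dataclasses import dataclass
from pathlib import Path
from typing import List
from uuid import UUID

import pytest


@dataclass
class Article:
    title: str
    id: UUID
    keywords: List[str]
    word_count: int


@pytest.fixture
def articles() -> List[Article]:
    # Assumes the generated JSON was saved as articles.json next to this test file.
    raw = json.loads(Path(__file__).with_name("articles.json").read_text())
    return [
        Article(
            title=item["title"],
            id=UUID(item["id"]),
            keywords=item["keywords"],
            word_count=item["word_count"],
        )
        for item in raw
    ]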

Generating Simple Code and Tests

While some people will tell you that dev work is dead and we can all be replaced with AI, I am not so sure. I have seen some terrible output from GPT-4 when you ask for something that hasn't been solved before, or something that requires real thought. Simple code, complete with tests? It's excellent at that, with guidance.

Example

First we give a very prescriptive prompt:

I need to write a type hinted python function that takes a str as its argument. The function will then find the substring of this form event_<year>_<number> where number is an int and so is year. It will then return the substring, but if no match is found it will raise an IngestError detailing the input and how no match was found. Generate this.

And we get back a nice response to take snippets from.

import re
from typing import Union

class IngestError(Exception):
    pass

def find_event_substring(s: str) -> str:
    pattern = r'event_\d{4}_\d+'
    match = re.search(pattern, s)
    if match:
        return match.group(0)
    else:
        raise IngestError(f"Input '{s}' does not contain a valid 'event_<year>_<number>' substring.")

# Test the function
try:
    result = find_event_substring("Here is a event_2021_45 in the string.")
    print("Match found:", result)
except IngestError as e:
    print(e)

try:
    result = find_event_substring("No event substring here.")
    print("Match found:", result)
except IngestError as e:
    print(e)

Obviously we don't want the crap test at the bottom; we handle our own imports and define our own exceptions, so we are likely to just grab the function and do some renaming.

In reality I would also ask ChatGPT to regenerate this with a docstring, but as this is only an example, I won't today.

Now we want some simple unit tests, so we ask for them.

generate me some pytest unit tests for the above function

And we get them back. Here's the response, minus the waffle about installing pytest and creating test files:

import pytest
from my_module import find_event_substring, IngestError

# Test with valid event string
def test_valid_event_string():
    input_str = "Here is a event_2021_45 in the string."
    assert find_event_substring(input_str) == "event_2021_45"

# Test with multiple valid event strings
def test_multiple_valid_event_strings():
    input_str = "Here are multiple event_2021_45 event_2022_33."
    assert find_event_substring(input_str) == "event_2021_45"

# Test with invalid year
def test_invalid_year():
    input_str = "Invalid year event_21_45."
    with pytest.raises(IngestError) as e_info:
        find_event_substring(input_str)
    assert str(e_info.value) == "Input 'Invalid year event_21_45.' does not contain a valid 'event_<year>_<number>' substring."

# Test with missing number
def test_missing_number():
    input_str = "Missing number event_2021_."
    with pytest.raises(IngestError) as e_info:
        find_event_substring(input_str)
    assert str(e_info.value) == "Input 'Missing number event_2021_.' does not contain a valid 'event_<year>_<number>' substring."

# Test with no event string
def test_no_event_string():
    input_str = "No event string here."
    with pytest.raises(IngestError) as e_info:
        find_event_substring(input_str)
    assert str(e_info.value) == "Input 'No event string here.' does not contain a valid 'event_<year>_<number>' substring."

Again this is a decent response. I think the key takeaway from this section is that when you want ChatGPT to write code, you need to be as specific as possible. For example, we could easily have improved the test format, got docstrings automatically the first time, and stuck to our naming conventions if we had simply given an example and asked for the style to stay consistent with it.
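
For instance, the style example you hand over (and the shape you would nudge the generated tests towards) might look something like this sketch, which assumes the same my_module import used above:

import re

import pytest
from my_module import find_event_substring, IngestError


@pytest.mark.parametrize(
    "input_str, expected",
    [
        ("Here is a event_2021_45 in the string.", "event_2021_45"),
        ("Here are multiple event_2021_45 event_2022_33.", "event_2021_45"),
    ],
)
def test_find_event_substring_valid(input_str: str, expected: str) -> None:
    """Valid inputs return the first event_<year>_<number> substring found."""
    assert find_event_substring(input_str) == expected


@pytest.mark.parametrize(
    "input_str",
    [
        "Invalid year event_21_45.",
        "Missing number event_2021_.",
        "No event string here.",
    ],
)
def test_find_event_substring_invalid(input_str: str) -> None:
    """Inputs without a valid substring raise IngestError mentioning the input."""
    with pytest.raises(IngestError, match=re.escape(input_str)):
        find_event_substring(input_str)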

Giving Insightful Code Review to Junior Developers

One part of the job that some people love and some people loathe is helping the more junior developers. There is a trade-off between giving them lots of help and detail, and getting on with your own workload. Luckily you can offload a lot of initial help and review to GPT.

A Contrived Example

Consider you are reviewing some work by a junior and come across the following code:

class Inventory:
    def check_stock(self, item_id):
        print(f"Checking stock for item {item_id}")
        return True

class Payment:
    def process_payment(self, user_id, amount):
        print(f"Processed payment of {amount} for user {user_id}")

class Shipping:
    def ship_item(self, item_id, address):
        print(f"Shipping item {item_id} to {address}")

# Client code
inventory = Inventory()
payment = Payment()
shipping = Shipping()

if inventory.check_stock("ABC123"):
    payment.process_payment("user1", 100)
    shipping.ship_item("ABC123", "123 Main St")

Obviously this would not all live in one place, but it is a contrived example after all. As an experienced developer you see the problem, but you are currently too busy to give an in-depth review, and it might not be as effective to simply request changes with a one-sentence comment like "Look at the facade pattern".

Once again, here comes ChatGPT to the rescue.

The prompt:

A junior dev has written the below code, I am reviewing it and think it could benefit from facade pattern, generate me a review comment that gives a simple explanation of the facade pattern, and how it could be used to simplify the below code with a place_order function or method as part of a larger facade:

And we get back:

The code you've written certainly works, but it requires the client code to interact with multiple subsystems (Inventory, Payment, Shipping) directly to complete a single operation: placing an order. This can make the code harder to manage and extend in the future.

Consider using the Facade Pattern to simplify this. The Facade Pattern provides a unified interface to a set of interfaces in a subsystem, making it easier to interact with the subsystem as a whole. Essentially, it wraps several complex subsystems behind a single, simplified API. This makes it easier to maintain, easier to read, and reduces the dependencies between the subsystems and the client code.

For instance, you could create an OrderFacade class that includes a place_order function, like this:

class OrderFacade:
    def __init__(self):
        self.inventory = Inventory()
        self.payment = Payment()
        self.shipping = Shipping()

    def place_order(self, item_id, user_id, amount, address):
        if self.inventory.check_stock(item_id):
            self.payment.process_payment(user_id, amount)
            self.shipping.ship_item(item_id, address)

Then, your client code could be simplified to:

order_facade = OrderFacade()
order_facade.place_order("ABC123", "user1", 100, "123 Main St")

By doing this, you encapsulate the complexities and dependencies in a single class, making it easier to manage.

Then we can skim the response, make sure it's correct, edit it slightly and post it. Obviously we don't want to secretly bypass ourselves when we inevitably get follow-up questions, but by telling ChatGPT what we want explained, the context of the review, and the solution we are aiming towards, we can help a lot more in a much shorter time.

Concluding Thoughts

Lots of people like to talk about how good GPT is at coding and overestimate its abilities. Although we have shown it can be very effective with strict guidance, I believe the greatest strengths of ChatGPT lie in every other aspect of software engineering.

There are many more ways in which I can, and do, leverage the power of ChatGPT, such as turning User Stories into Gherkin syntax scenarios, interpreting test failure assertion errors, and pretty-printing JSON that has errors, but this post would never end. However, if you feel you have some truly novel uses, please email me (link below).

If you enjoy this content, consider supporting me below:

Buy me a coffee

If you wish to contact me you can email me at sham@miserablemillennial.com