This content originally appeared on DEV Community and was authored by Spencer Tower
Recently, I've become interested in exploring the use of AI as a collaborator in the coding process. How can it expedite development, and what are the potential benefits and drawbacks of this approach?
After watching the video Building Hacker News with AI, in which you describe a UI and an AI builds it for you, I decided to try implementing a similar approach, creating a Hacker News UI clone as a way to get more experience using AI as an assistant.
The video uses Rendition, an AI assistant that translates text into Figma designs. I chose to create my clone from scratch, using ChatGPT (the free plan, based on the June 2023 version of GPT-4), Next.js, React, TypeScript, and Tailwind CSS.
My goal was to experiment with how different prompts, varying in detail, yield different results. I'll focus on the Hacker News header to test these prompts.
Ideally, a prompt would yield an "80/20" outcome: the AI generates around 80 percent of the code, and I only need to do the remaining 20 percent of editing to make it match the website exactly.
Each prompt was input in a separate chat session with the idea that responses would be generated independently of one another, without building on prior context.
The rest of this post is a simple account of my experience tinkering with prompts to get ChatGPT to generate the most accurate header component for the Hacker News front page.
Prompts and Results:
Initially, I reviewed OpenAI's docs for prompt engineering suggestions. They recommend detailed prompts with clear delimiters between sections, leaving as little as possible for the AI to interpret. (I also asked ChatGPT directly what kind of prompt would be most effective given the technologies I planned to use, just to see if its answer matched the docs. Unsurprisingly, ChatGPT's suggestion aligned closely with the prompt format laid out in the docs.)
While this clearly sounded like the most sensible approach, I wanted to see how vague I could be with prompts and still get a solid foundation for a component. Here are a few of the prompts I tested:
Prompt 1: Following the Video's Example
I decided to use the exact prompt from the video to see how ChatGPT's output with React and TypeScript differed from Rendition's. I thought the results would be interesting given that this prompt is quite different from the approach suggested in the OpenAI documentation: it uses more abbreviated wording than the docs recommend and is written as a single paragraph:
"The component will be in a nextjs app using typescript and tailwinds css. Here is the description:
hacker news top bar, a single row of (left: logo, title, navlinks) and (right: login link). bold orange background. black text. links content: Hacker News new | past | comments | ask | show | jobs | submit"
Result:
This yielded decent results, but the logo image is broken and has a label to its right. Unlike Rendition in the video, ChatGPT did not pick up on the pipes as delimiters between the links, but overall the output is still quite accurate.
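For context, the pipe-delimited nav the prompt describes can be modeled as plain data. Here is a minimal TypeScript sketch of that structure; the names `NAV_LINKS` and `renderNav` are my own for illustration, not taken from ChatGPT's output:

```typescript
// The nav links listed in the prompt, as plain data.
const NAV_LINKS = ["new", "past", "comments", "ask", "show", "jobs", "submit"];

// Join the links with " | " so the pipes act as visual delimiters,
// as they do on the real Hacker News header.
function renderNav(links: string[]): string {
  return links.join(" | ");
}

console.log(renderNav(NAV_LINKS));
// new | past | comments | ask | show | jobs | submit
```

In the actual component, each entry would become an anchor element, with the pipes rendered as separators between them rather than baked into the link text.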
Prompt 2: Testing the Waters with Vague Instructions
I tried an exceptionally vague prompt to see what it would come up with on its own with minimal input:
"generate code for the hackernews header in nextjs, react, typescript, and tailwinds"
Result:
This was surprisingly accurate given that no detail was provided beyond "hackernews" and "header". However, it missed the text color, the logo styling is off, and the spacing between links is wrong. It also added a search bar. Still, this shows how little information ChatGPT can go off of and still get the foundations right.
Prompt 3: Generating Code from a Website Link
Providing only a link to the website:
"generate code for the header of this website: https://news.ycombinator.com/ use nextjs, react, typescript, and tailwinds css"
Result:
Again, mostly accurate: the image is broken, the spacing is a little off, and there is a search bar, but the text styling is right this time. Not bad, considering it only had a link to go on.
Prompt 4: Using a Screenshot as a Prompt
Uploading a screenshot with no further description of the component:
"generate code for the header of this website for nextjs, react, typescript, and tailwinds css"
Result:
I had the highest hopes for this one and was hoping for an identical clone, so initially I was disappointed with the result. The colors in the header are off, from the logo to the text, and it still insisted on adding a search bar. On second thought, though, considering this was generated from a screenshot, it is pretty impressive that it captured so much detail.
Prompt 5: Detailed Description for More Accurate Results
Based on the examples provided by the docs and by ChatGPT itself, I implemented the following approach, which is more along the lines of the video's prompt but with a little more detail:
"I need to create the header component for a Hacker News clone using Next.js, TypeScript, and Tailwind CSS. I will only be creating the UI.
Visual Design:
- small ycombinator "y" logo - white border, with white "Y"
- title "Hacker News", bold black, on left side of header
- links: to right of title - new, past, comments, ask, show, jobs, submit
- orange background
- each link separated by a |
- 'login' link, right side of header "
Result:
Pretty accurate. With more detail in the initial prompt, it likely could have ironed out the styling for the logo, and minimal tweaking in Tailwind will fix the rest. More detail is clearly the way to go.
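To make the bullet-point spec concrete, here is a sketch of the header markup it describes, built as a template string so the structure is easy to inspect. In the actual Next.js component this would be JSX rather than a string, and the Tailwind classes here (`bg-orange-500`, `border-white`, and so on) are my own guesses at Hacker News's styling, not exact values from the site or from ChatGPT's output:

```typescript
// Links from the spec, in order.
const navLinks = ["new", "past", "comments", "ask", "show", "jobs", "submit"];

// Build the header markup described in the prompt: logo, title, and
// pipe-separated nav links on the left; a login link on the right;
// orange background with black text throughout.
function headerHtml(): string {
  const links = navLinks
    .map((l) => `<a href="/${l}" class="text-black">${l}</a>`)
    .join(" | ");
  return `
<header class="flex items-center justify-between bg-orange-500 px-2 py-1">
  <div class="flex items-center gap-2">
    <span class="border border-white px-1 font-bold text-white">Y</span>
    <span class="font-bold text-black">Hacker News</span>
    <nav class="text-sm">${links}</nav>
  </div>
  <a href="/login" class="text-black">login</a>
</header>`.trim();
}
```

Converting this to JSX is mechanical: each string interpolation becomes a `map` over elements, and the class strings carry over unchanged.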
Reflection
I was surprised by how little information ChatGPT needed to generate a UI component for a given website, and it was interesting to see the subtle variations between prompt approaches. Clearly the best way to go is with more detail, leaving as little as possible for the AI to interpret. Clear delimiters and segments in the prompt not only make things easier for the AI but also make the prompt easier for other developers to read and reuse.
I think continuing to shoot for an 80/20 return is adequate, as it would likely become more time-consuming to engineer a perfect prompt (which probably doesn't exist anyway) than to generate a solid foundation and intervene with a few code edits here and there.
I can see that leveraging AI as a collaborator can significantly expedite the process compared to writing components by hand, while giving the developer more time to focus on the overall architecture and implementation of a project.
So after a little prompt experimentation, I've got the foundations of a Hacker News header and a few code edits to make.
A few questions to consider going forward:
- Did ChatGPT's accuracy have anything to do with Hacker News being a widely used site for clone projects?
- Would a similar quality of results be generated with a more obscure site?
- Could another AI be used effectively to make code edits?
Spencer Tower | Sciencx (2024-06-20T02:30:25+00:00) Exploring AI-Assisted UI Development: Lessons from Creating a Hacker News Clone. Retrieved from https://www.scien.cx/2024/06/20/exploring-ai-assisted-ui-development-lessons-from-creating-a-hacker-news-clone/