Deep Research Yourself
Use ChatGPT's new Deep Research feature to Deep Research yourself and get an honest view of what companies see when they think about hiring you. I did it and shared the results.
I'm an experienced full stack software engineer with a passion for UI.
I also love AI, especially making it do useful things.
import { ed } from 'England'
ed.improveWith('Natalia', 'Gandalf');
function EdSpencer() {
  return (
    <Engineer specialties={["Full Stack", "AI", "UX"]}>
      <Languages
        expert={["TypeScript", "JavaScript", "HTML", "CSS"]}
        conversant={["Python", "C++"]}
      />
      <Technologies
        expert={["React", "Node", "Next.js", "Tailwind CSS"]}
        conversant={["GraphQL", "PostgreSQL", "Terraform", "Docker"]}
      />
      <Experience
        areas={["Cyber Security", "Frameworks"]}
        management={true}
      />
      <Embraces cicd={true} iac={true} />
    </Engineer>
  );
}
Stuff I've been working on lately
Evals are to LLM calls what unit tests are to functions. They're absolutely essential for ensuring that your LLM prompts work the way you think they do. This article walks through a real-world example from bragdoc.ai, an in-production open-source application that uses carefully crafted LLM prompts to extract structured work achievement data, and shows how to design and build Evals that test your prompts accurately and efficiently.
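To make the unit-test analogy concrete, here's a minimal sketch of what an eval can look like. This is not bragdoc.ai's actual eval harness: extractAchievements is a hypothetical stand-in for an LLM-backed extraction prompt (stubbed here so the example runs on its own), and the recall score is just one of many ways you might grade the output.

interface EvalCase {
  // Raw input text (e.g. commit messages) plus the achievements we expect
  // the prompt to extract from it.
  input: string;
  expected: string[];
}

// Hypothetical stand-in for the real extractor. In a real app this would
// render your prompt and call your LLM provider; here it's a trivial stub
// so the sketch is self-contained.
async function extractAchievements(input: string): Promise<string[]> {
  return input.split(";").map((s) => s.trim());
}

const cases: EvalCase[] = [
  {
    input: "Shipped the new billing page; Fixed flaky CI on main",
    expected: ["Shipped the new billing page", "Fixed flaky CI on main"],
  },
];

// Unlike a unit test, an eval usually produces a score rather than a hard
// pass/fail, because LLM output is non-deterministic and rarely matches
// the expected answer exactly.
async function runEvals(): Promise<void> {
  for (const { input, expected } of cases) {
    const actual = await extractAchievements(input);
    const hits = expected.filter((e) =>
      actual.some((a) => a.toLowerCase().includes(e.toLowerCase()))
    );
    console.log(`recall=${(hits.length / expected.length).toFixed(2)} for: ${input}`);
  }
}

runEvals();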
In this article we do a deep dive into how I'm using mdx-prompt to extract work achievements from git commit messages in the Bragdoc app. We walk through the half-dozen data formats involved in turning a string prompt into well-structured data, and how to test each part of the process in isolation.
LLMs use strings. React generates strings. We know React. Let's use React to render prompts. This article introduces mdx-prompt, a library that lets you write your prompts in JSX and render them to strings. It's great for React applications and React developers.
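To show the idea (this is not mdx-prompt's actual API, just plain React rendered with react-dom's renderToStaticMarkup, with a made-up component and prompt wording):

import * as React from "react";
import { renderToStaticMarkup } from "react-dom/server";

// A prompt is just a React component tree: it can take props, be composed
// from smaller pieces, and be unit tested like any other component.
function ExtractAchievementsPrompt({ commits }: { commits: string[] }) {
  return (
    <>
      <p>You are an assistant that extracts work achievements from git commits.</p>
      <p>Return one achievement per line.</p>
      <ul>
        {commits.map((c) => (
          <li key={c}>{c}</li>
        ))}
      </ul>
    </>
  );
}

// Render the JSX tree to a plain string before sending it to the LLM.
const prompt = renderToStaticMarkup(
  <ExtractAchievementsPrompt commits={["Shipped billing page", "Fixed flaky CI"]} />
);

console.log(prompt);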
In this article I break down how I built the core of bragdoc.ai in about 3 weeks over the Christmas/New Year break. The secret? AI tooling. We'll go into detail on how I used Windsurf, the Vercel AI Chat Template, Tailwind UI, and a bunch of other tools to get a product off the ground quickly.
TypeScript, AI, React and Next.js
mdx-prompt lets you use JSX to write LLM prompts, giving you the familiarity of React and the power of composability, templating logic, testing and reuse that come from JSX & MDX.
Read Announcement Post
bragdoc.ai is a SaaS application that uses a blend of AI and traditional SaaS technologies to help professionals keep track of all the great work they've done, and automatically generate high-quality, evidence-based weekly summaries for their boss or performance review documentation to support their next promotion. It's also completely open source. I blog about it often.
Find out more
InformAI is a tool that allows AI to access and understand the information in your React components. With InformAI, it's easy to build AI copilots that can see the same screen as the user.
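As a rough sketch of that pattern (conceptual only, not InformAI's real API; every name below is hypothetical), a component can publish a short description of what it's currently showing, and the copilot's prompt can be assembled from those descriptions:

import * as React from "react";
import { createContext, useContext, useEffect } from "react";

type SurfaceInfo = { name: string; description: string };

// Shared registry of what's currently on screen. A real implementation would
// live in a provider; this module-level map keeps the sketch short.
const surfaces = new Map<string, SurfaceInfo>();

const AiContext = createContext({
  publish: (info: SurfaceInfo) => {
    surfaces.set(info.name, info);
  },
});

function OrdersTable({ orders }: { orders: { id: string; total: number }[] }) {
  const { publish } = useContext(AiContext);

  // Describe the current state of this component whenever it changes.
  useEffect(() => {
    publish({ name: "OrdersTable", description: `Showing ${orders.length} orders` });
  }, [orders, publish]);

  return <div>{orders.length} orders</div>;
}

// When the user asks the copilot a question, fold the published descriptions
// into the prompt as context, so the model sees what the user sees.
function buildScreenContext(): string {
  return [...surfaces.values()]
    .map((s) => `${s.name}: ${s.description}`)
    .join("\n");
}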
Read Announcement Post
react-auto-intl uses AI to automatically internationalize and translate your React and Next.js applications. It can reduce days of tedious work to minutes.
Find out more
NarratorAI excels at generating pieces of content like "What to Read Next" summaries, blog tag intros, and search result summaries. It's a tool that helps you create this kind of AI-powered content for your blog or other content site.
Read Announcement Post
ReadNext is a tool that creates AI-powered content recommendations for your blog or other content site. It uses a combination of natural language processing and machine learning to suggest related content.
Read Announcement Post
A collection of examples for working with React Server Components, covering promises, streaming, server actions, form processing, and AI integration. It uses ReadNext for related example recommendations.
View the Examples
I've been writing JavaScript, HTML & CSS for 20 years. Along the way I've gotten good at UX, AI, IaC and CI/CD. My go-tos are TypeScript, React and Prisma, CI/CD'd to the cloud.
I've deployed large-scale applications to AWS and Google Cloud, using Terraform, Kubernetes, and Docker to build scalable architectures from the ground up with CI/CD and IaC.
I'm fluent with GitHub and GitLab CI/CD pipelines, and know how to get my code into production quickly and with high quality.
I've been programming with JavaScript since long before it was cool to do so. These days I'm a TypeScript fanboy, but I still love JavaScript. I'm also a big fan of React, Next.js, and Tailwind CSS.
I've also written my fair share of Python, mostly using it for AI projects. I've used C++ for some embedded systems work, and hate Java. Well, dislike anyway.