Build a React Application for AI-Powered Image Generation Using OpenAI DALL-E API
In the dynamic landscape of technology, where innovation continually shapes the boundaries of what’s possible, artificial intelligence (AI) never ceases to captivate our imagination.
AI refers to the simulation of human intelligence processes by computer systems. These processes include tasks such as learning, reasoning, problem-solving, perception, language understanding, and decision-making.
Today, individuals and companies have trained AI models that perform certain tasks as well as — or better than — humans, often in real time. Among the myriad applications of AI, one particularly intriguing area is AI-powered image generation.
What You’re Building
This guide explains how to build a React application that seamlessly integrates with the OpenAI DALL-E API via a Node.js backend and generates captivating images based on textual prompts.
Prerequisites
To follow along with this project, you should have:
- Fundamental understanding of HTML, CSS, and JavaScript
- Basic knowledge of React and Node.js
- Node.js and npm (Node Package Manager) or yarn installed on your computer
What’s OpenAI DALL-E API?
The OpenAI API is a cloud-based platform that grants developers access to OpenAI's pre-trained AI models, such as DALL-E and GPT-3 (we used GPT-3 to build a ChatGPT clone with the code in this Git repository). It allows developers to add AI features such as summarization, translation, image generation, and image modification to their programs without developing and training their own models.
To use OpenAI API, create an account using your Google account or email on the OpenAI website and obtain an API key. To generate an API key, click Personal at the top-right corner of the website, then select View API keys.
Click the Create new secret key button and save the key somewhere safe. You will use it in this application to interact with OpenAI's DALL-E API.
Setting Up the Development Environment
You can create a React application from scratch and develop your own interface, or you can grab our Git starter template by following these steps:
- Visit this project’s GitHub repository.
- Select Use this template > Create a new repository to copy the starter code into a repository within your GitHub account (check the box to include all branches).
- Pull the repository to your local computer and switch to the starter-files branch using the command git checkout starter-files.
- Install the necessary dependencies by running the command npm install.
Once the installation is complete, you can launch the project on your local computer with npm run start. This makes the project available at http://localhost:3000/.
Understanding the Project Files
In this project, we’ve added all the necessary dependencies for your React application. Here’s an overview of what’s been installed:
- file-saver: This utility library simplifies downloading the generated images. It's wired to the download button, ensuring a smooth user experience.
- uuid: This library assigns a unique identification to each image. This prevents any chance of images sharing the same default file name, maintaining order and clarity.
- react-icons: Integrated into the project, this library effortlessly incorporates icons, enhancing the visual appeal of your application.
At the core of your React application lies the src folder. This is where the application's JavaScript source code — the code Webpack bundles — is housed. Let's look at the files and folders in the src folder:
- assets: Within this directory, you’ll find the images and loader gif that are utilized throughout the project.
- data: This folder contains an index.js file that exports an array of 30 prompts used to generate diverse, random images. Feel free to edit it (a sketch of its shape appears after this list).
- index.css: This is where the styles used in this project are stored.
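To give you an idea of the shape of the data file, here is a minimal sketch of data/index.js. The prompt strings below are placeholders rather than the exact prompts shipped in the starter template; the only thing the rest of the code relies on is the named randomPrompts export:

// data/index.js — illustrative only; the starter files ship their own prompts
export const randomPrompts = [
  'An oil painting of a lighthouse during a thunderstorm',
  'A low-poly render of a fox sitting in a snowy forest',
  'A watercolor illustration of a rainy city street at night',
  // …the starter files include 30 such prompts
];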
Understanding the Utils Folder
Inside this folder, the index.js file defines two reusable functions. The first function randomizes the selection of prompts describing various images that can be generated.
import { randomPrompts } from '../data';

export const getRandomPrompt = () => {
  const randomIndex = Math.floor(Math.random() * randomPrompts.length);
  const randomPrompt = randomPrompts[randomIndex];
  return randomPrompt;
};
The second function handles the download of the generated images by leveraging the file-saver dependency. Both functions are created to offer modularity and efficiency, and they can be conveniently imported into components when required.
import FileSaver from 'file-saver';
import { v4 as uuidv4 } from 'uuid';

export async function downloadImage(photo) {
  const _id = uuidv4();
  FileSaver.saveAs(photo, `download-${_id}.jpg`);
}
In the code above, the uuid dependency gives each generated image file a unique ID, so they don’t have the same file name.
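For example, a downloaded file ends up with a name along these lines (the UUID shown is just an illustration):

download-9f1c2a34-7b5e-4c21-a0d3-6f8e2b1c9d47.jpg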
Understanding the Components
These are small blocks of code separated to make your code easier to maintain and understand. For this project, three components were created: Header.jsx, Footer.jsx, and Form.jsx. The central component is the Form component, where the input is received and passed to the App.jsx file, with the generateImage function attached as an onClick event to the Generate Image button.
In the Form component, a state is created to store and update the prompt. Additionally, a feature lets you click a random icon to generate a random prompt. This is made possible by the handleRandomPrompt function, which uses the getRandomPrompt function you've already seen. When you click the icon, it fetches a random prompt and updates the state with it:
const handleRandomPrompt = () => {
  const randomPrompt = getRandomPrompt();
  setPrompt(randomPrompt);
};
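For orientation, here is a minimal sketch of what Form.jsx could look like based on the behavior described above. The class-free markup, the placeholder text, and the FaDice icon are assumptions for illustration — the starter template's actual component may differ:

// Form.jsx — a minimal sketch; markup and icon choice are illustrative
import { useState } from 'react';
import { FaDice } from 'react-icons/fa'; // any icon from react-icons works here
import { getRandomPrompt } from '../utils';

const Form = ({ generateImage }) => {
  // State that stores the prompt typed (or randomly picked) by the user
  const [prompt, setPrompt] = useState('');

  // Pick one of the prompts from the data folder and put it in the input
  const handleRandomPrompt = () => {
    const randomPrompt = getRandomPrompt();
    setPrompt(randomPrompt);
  };

  return (
    <form onSubmit={(e) => e.preventDefault()}>
      <input
        type="text"
        value={prompt}
        placeholder="Describe the image you want to generate"
        onChange={(e) => setPrompt(e.target.value)}
      />
      <button type="button" onClick={handleRandomPrompt} aria-label="Random prompt">
        <FaDice />
      </button>
      <button type="button" onClick={() => generateImage(prompt, setPrompt)}>
        Generate Image
      </button>
    </form>
  );
};

export default Form;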
Understanding the App.jsx File
This is where most of the code resides. All the components are brought together here. There’s also a designated area to display the generated image. If no image has been generated yet, a placeholder image (Preview image) is displayed.
Inside this file, two states are managed:
- isGenerating: This keeps track of whether an image is currently being generated. By default, it's set to false.
- generatedImage: This state stores information about the image that has been generated.
Additionally, the downloadImage utility function is imported, allowing you to trigger the download of the generated image when you click the Download button:
<button
  className="btn"
  onClick={() => downloadImage(generatedImage.photo)}
>
  Download
</button>
Now that you understand the starter files and have set up your project, let's start handling the logic of this application.
Generating Images With OpenAI’s DALL-E API
To harness the capabilities of OpenAI’s DALL-E API, you’ll use Node.js to establish a server. Within this server, you’ll create a POST route. This route will be responsible for receiving the prompt text sent from your React application and then utilizing it to generate an image.
To get started, install the necessary dependencies in your project directory by running the following command:
npm i express cors openai
Additionally, install the following dependencies as dev dependencies. These tools will assist in setting up your Node.js server:
npm i -D dotenv nodemon
The installed dependencies are explained as follows:
- express: This library helps create a server in Node.js.
- cors: This middleware enables Cross-Origin Resource Sharing (CORS) so that your React app, served from a different origin during development, can make requests to this server.
- openai: This dependency grants you access to OpenAI’s DALL-E API.
- dotenv: dotenv assists in managing environment variables.
- nodemon: nodemon is a development tool that monitors changes in your files and automatically restarts the server.
Once the installations are successful, create a server.js file at the root of your project. This is where all your server code will be stored.
In the server.js file, import the libraries you just installed and instantiate them:
// Import the necessary libraries
const express = require('express');
const cors = require('cors');
require('dotenv').config();
const OpenAI = require('openai');
// Create an instance of the Express application
const app = express();
// Enable Cross-Origin Resource Sharing (CORS)
app.use(cors());
// Configure Express to parse JSON data and set a data limit
app.use(express.json({ limit: '50mb' }));
// Create an instance of the OpenAI class and provide your API key
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Define a function to start the server
const startServer = async () => {
  app.listen(8080, () => console.log('Server started on port 8080'));
};

// Call the startServer function to begin listening on the specified port
startServer();
In the code above, you import the necessary libraries and establish an instance of the Express application using const app = express(). Afterward, you enable CORS. Next, Express is configured to parse incoming JSON data, with a payload size limit of 50mb.
Following this, an instance of the OpenAI class is created using your OpenAI API key. Create a .env file in your project's root and add your API key under the OPENAI_API_KEY variable. Finally, you define an asynchronous startServer function and call it to set the server in motion.
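For reference, the .env file needs only a single line; the value shown here is a placeholder for the secret key you generated earlier:

OPENAI_API_KEY=sk-your-secret-key-here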
Now you have configured your server.js file. Let’s create a POST route that you can use in your React application to interact with this server:
app.post('/api', async (req, res) => {
  try {
    const { prompt } = req.body;

    const response = await openai.images.generate({
      prompt,
      n: 1,
      size: '1024x1024',
      response_format: 'b64_json',
    });

    const image = response.data[0].b64_json;
    res.status(200).json({ photo: image });
  } catch (error) {
    console.error(error);
    res.status(500).json({ message: 'Image generation failed' });
  }
});
In this code, the route is set to /api, and it's designed to handle incoming POST requests. Inside the route's callback function, you receive the data sent from your React app through req.body — specifically the prompt value.
Subsequently, the OpenAI library's images.generate method is invoked. This method takes the provided prompt and generates an image in response. The n parameter determines the number of images to generate (here, just one), size specifies the dimensions of the image, and response_format indicates the format in which the response should be returned (b64_json in this case).
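To make the extraction step concrete, the object returned by images.generate with response_format: 'b64_json' looks roughly like this (the Base64 string is truncated):

{
  created: 1700000000, // Unix timestamp of when the image was generated
  data: [
    { b64_json: 'iVBORw0KGgoAAAANSUhEUgAA…' }, // the image encoded as Base64
  ],
}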
After generating the image, you extract the image data from the response and store it in the image variable. Then, you send a JSON response back to the React app with the generated image data, setting the HTTP status to 200 (indicating success) using res.status(200).json({ photo: image }).
If an error occurs during this process, the code within the catch block runs, logging the error to the console for debugging and responding with a 500 status so the request doesn't hang.
Now the server is ready! Let's specify the commands used to run the server and the frontend in the scripts object of the package.json file:
"scripts": {
"dev:frontend": "react-scripts start",
"dev:backend": "nodemon server.js",
"build": "react-scripts build",
},
Now, when you run npm run dev:backend, your server starts on http://localhost:8080/, and when you run npm run dev:frontend, your React application starts on http://localhost:3000/. Make sure both are running in separate terminals.
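With the backend running, you can sanity-check the new route before touching the frontend. One quick way is a curl request from another terminal (the prompt text is just an example); if everything is wired up correctly, the response is a JSON object whose photo field contains a long Base64 string:

curl -X POST http://localhost:8080/api \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A watercolor painting of a mountain lake at sunrise"}'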
Make HTTP Requests From React to Node.js Server
In the App.jsx file, you will create a generateImage function that is triggered when the Generate Image button is clicked in the Form.jsx component. This function accepts two parameters passed up from the Form.jsx component: prompt and setPrompt.
In the generateImage function, make an HTTP POST request to the Node.js server:
const generateImage = async (prompt, setPrompt) => {
  if (prompt) {
    try {
      setIsGenerating(true);

      const response = await fetch('http://localhost:8080/api', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          prompt,
        }),
      });

      const data = await response.json();

      setGeneratedImage({
        photo: `data:image/jpeg;base64,${data.photo}`,
        altText: prompt,
      });
    } catch (err) {
      alert(err);
    } finally {
      setPrompt('');
      setIsGenerating(false);
    }
  } else {
    alert('Please provide a prompt');
  }
};
In the code above, you first check whether the prompt parameter has a value, then set the isGenerating state to true since the operation is starting. This makes the loader appear on the screen, because the App.jsx file contains this code controlling the loader display:
{isGenerating && (
  <div className="loader-comp">
    <img src={Loader} alt="" className="loader-img" />
  </div>
)}
Next, use the fetch() method to make a POST request to the server at http://localhost:8080/api — this is why you installed cors, since the React app and the API live on different origins during development. The prompt is sent as the request body. Then, extract the response returned from the Node.js server and store it in the generatedImage state.
Once the generatedImage state has a value, the image is displayed:
{generatedImage.photo ? (
  <img
    src={generatedImage.photo}
    alt={generatedImage.altText}
    className="imgg ai-img"
  />
) : (
  <img
    src={preview}
    alt="preview"
    className="imgg preview-img"
  />
)}
This is how your complete App.jsx file will look:
import { Form, Footer, Header } from './components';
import preview from './assets/preview.png';
import Loader from './assets/loader-3.gif';
import { downloadImage } from './utils';
import { useState } from 'react';

const App = () => {
  const [isGenerating, setIsGenerating] = useState(false);
  const [generatedImage, setGeneratedImage] = useState({
    photo: null,
    altText: null,
  });

  const generateImage = async (prompt, setPrompt) => {
    if (prompt) {
      try {
        setIsGenerating(true);

        const response = await fetch('http://localhost:8080/api', {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({
            prompt,
          }),
        });

        const data = await response.json();

        setGeneratedImage({
          photo: `data:image/jpeg;base64,${data.photo}`,
          altText: prompt,
        });
      } catch (err) {
        alert(err);
      } finally {
        setPrompt('');
        setIsGenerating(false);
      }
    } else {
      alert('Please provide a prompt');
    }
  };

  return (
    <div className="container">
      <Header />
      <main className="flex-container">
        <Form generateImage={generateImage} />
        <div className="image-container">
          {generatedImage.photo ? (
            <img
              src={generatedImage.photo}
              alt={generatedImage.altText}
              className="imgg ai-img"
            />
          ) : (
            <img
              src={preview}
              alt="preview"
              className="imgg preview-img"
            />
          )}
          {isGenerating && (
            <div className="loader-comp">
              <img src={Loader} alt="" className="loader-img" />
            </div>
          )}
          <button
            className="btn"
            onClick={() => downloadImage(generatedImage.photo)}
          >
            Download
          </button>
        </div>
      </main>
      <Footer />
    </div>
  );
};

export default App;
Deploy Your Full-Stack Application to Kinsta
So far, you have successfully built a React application that interacts with Node.js, which makes it a full-stack application. Let’s now deploy this application to Kinsta.
First, configure the server to serve the static files generated during the React application's build process. This is achieved by importing the path module and serving the build folder as a static directory:
const path = require('path');
app.use(express.static(path.resolve(__dirname, './build')));
When you execute the command npm run build && npm run dev:backend, your full-stack React application loads at http://localhost:8080/. This is because the React application is compiled into static files within the build folder, which your Node.js server then serves as a static directory. Consequently, when you run the Node server, the whole application is accessible from a single port.
Before pushing your code to your chosen Git provider (Bitbucket, GitHub, or GitLab), remember to modify the HTTP request URL in your App.jsx file. Change http://localhost:8080/api to /api, since in production the React app and the API are served from the same origin.
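In practice, only the URL argument of the existing fetch() call changes — everything else stays the same:

const response = await fetch('/api', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    prompt,
  }),
});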
Finally, in your package.json file, add a script command for the Node.js server that will be used for deployment:
"scripts": {
// …
"start": "node server.js",
},
Next, push your code to your preferred Git provider and deploy your repository to Kinsta by following these steps:
- Log in to your Kinsta account on the MyKinsta dashboard.
- Select Applications on the left sidebar and click the Add Application button.
- In the modal that appears, choose the repository you want to deploy. If you have multiple branches, select the desired branch and give your application a name.
- Select one of the available data center locations.
- Add OPENAI_API_KEY as an environment variable. Kinsta sets up a Dockerfile automatically for you.
- Finally, in the Start command field, add npm run build && npm run start. Kinsta will install your app's dependencies from package.json, then build and deploy your application.
Summary
In this guide, you’ve learned how to harness the power of OpenAI’s DALL-E API for image generation. You have also learned how to work with React and Node.js to build a basic full-stack application.
The possibilities are endless with AI, as new models are introduced daily, and you can create amazing projects that can be deployed to Kinsta’s Application Hosting.
What model would you love to explore, and what project would you like us to write about next? Share in the comments below.