
From Coder to Creator: How AI Will Revolutionize My Workflow in 2024!

Introduction

In the ever-evolving landscape of software development, the journey from coder to creator has taken on new dimensions with the advent of artificial intelligence (AI). As a developer immersed in this rapidly changing field, I’ve witnessed firsthand the transformative power of AI in revolutionizing traditional coding workflows. In 2024, I find myself on the cusp of a paradigm shift, embracing AI as a pivotal tool to streamline my development process and unlock new realms of creativity.

The traditional approach to coding has long been characterized by hours of manual labor, meticulously crafting lines of code to bring ideas to life. While this method has yielded remarkable innovations, it’s also rife with challenges: time constraints, repetitive tasks, and the constant battle against burnout. However, with the rise of AI-driven automation, a new era of possibility has emerged—one where developers can transition from mere coders to true creators, empowered by the capabilities of intelligent machines.

In this blog post, I invite you to join me on a journey into the heart of this transformation. We’ll explore how AI is reshaping the landscape of software development, revolutionizing workflows, and redefining the role of the modern developer. From automated code generation to AI-powered documentation and marketing, we’ll delve into the myriad ways in which AI is becoming an indispensable ally in the quest for innovation.

So, fasten your seatbelts and prepare to embark on a journey into the future of coding. Together, we’ll uncover the untapped potential of AI and discover how it’s paving the way for a new breed of creators in 2024 and beyond.

The Current State of Coding

Traditional Coding Landscape:

In the realm of software development, the foundation has long rested on the meticulous craftsmanship of code. Developers, the architects of digital innovation, have traditionally embarked on a journey of manual coding, sculpting intricate lines of logic to breathe life into their ideas.

Challenges Inherent in Manual Coding:

While this artisanal approach has led to remarkable technological feats, it is not without its challenges. The time-intensive nature of manual coding poses a significant hurdle. Developers often grapple with repetitive tasks, such as the creation of boilerplate code and exhaustive documentation. The linear progression of coding, while effective, can inadvertently stifle creativity, limiting developers to the constraints of their knowledge and experience.

The Conundrum of Productivity and Creativity:

In the contemporary landscape, where speed and innovation are paramount, the traditional coding paradigm faces a conundrum. The time and effort invested in manual coding can become a hindrance to the demand for increased productivity. The dichotomy between meeting tight deadlines and fostering creativity becomes more apparent as developers navigate through complex coding processes. This prompts a critical examination of the sustainability of the conventional approach.

Listen To Some Hacker Music While You Code

Follow me on Spotify I make Tech Trap music

Introducing AI Prototyping

Defining AI Prototyping:

Enter AI prototyping, the harbinger of a new era in software development. AI prototyping harnesses the power of artificial intelligence to streamline and enhance the development process. At its core, AI prototyping automates various aspects of software creation, from generating code snippets to crafting comprehensive documentation and even assisting in marketing endeavors.
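
To make that concrete, here is a minimal sketch of AI-assisted code generation using the openai Node.js client; the generateBoilerplate helper, prompt, and model choice are illustrative assumptions rather than a prescribed workflow:

require('dotenv').config()
const { Configuration, OpenAIApi } = require('openai')

const openai = new OpenAIApi(new Configuration({
  apiKey: process.env.OPENAI_API_KEY
}))

// Hypothetical helper: ask the model to draft boilerplate so the
// developer can spend time on design instead of typing
async function generateBoilerplate (description) {
  const completion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: `Write an Express.js route handler that ${description}.` }]
  })
  return completion.data.choices[0].message.content
}

generateBoilerplate('validates a JSON body and returns 201').then(console.log)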

The Multifaceted Role of AI Agents:

AI agents, equipped with machine learning algorithms and natural language processing capabilities, serve as invaluable companions in the developer’s toolkit. They possess the ability to analyze vast datasets, decipher complex patterns, and generate tailored solutions to meet specific requirements. Through continuous learning and adaptation, these agents evolve alongside developers, becoming indispensable collaborators in the pursuit of innovation.

Unveiling the Benefits of AI Prototyping:

The adoption of AI prototyping heralds a plethora of benefits for developers. Firstly, it significantly enhances efficiency by automating routine tasks, thereby freeing up valuable time and resources. Moreover, AI-driven solutions boast unparalleled accuracy and reliability, minimizing the occurrence of errors and enhancing the overall quality of the development process. Furthermore, AI prototyping empowers developers to explore new realms of creativity, unencumbered by the constraints of manual labor. By seamlessly integrating AI into their workflows, developers can unleash their full potential as creators, envisioning and realizing groundbreaking innovations with unprecedented ease.

My Journey with AI in 2024

Embracing AI Integration:

In my journey as a developer in 2024, integrating AI into my workflow has been nothing short of transformative. Recognizing the potential of AI prototyping to revolutionize traditional coding practices, I embarked on a journey of exploration and experimentation. Through careful evaluation of available AI tools and platforms, I identified solutions that seamlessly complemented my development process, paving the way for enhanced productivity and innovation.

Leveraging AI Tools and Platforms:

From automated code generation to AI-driven documentation and marketing assistance, I embraced a diverse array of AI tools and platforms. These intelligent companions proved instrumental in streamlining various facets of my development workflow, allowing me to allocate more time and energy toward strategic problem-solving and creative imagination. By leveraging AI to tackle repetitive tasks and streamline workflows, I unlocked newfound levels of efficiency and agility in my development endeavors.

Navigating Successes and Challenges:

Throughout my journey with AI in 2024, I encountered a spectrum of successes and challenges. While the integration of AI yielded tangible benefits in terms of productivity and innovation, it also presented its fair share of obstacles. Adapting to new technologies and methodologies required patience and perseverance, as I navigated the intricacies of AI-driven development. However, through continuous experimentation and iteration, I overcame these challenges and emerged with a deeper appreciation for the transformative power of AI in software development.

The Impact on Creativity and Innovation

Empowering Creativity through AI Integration:

The infusion of AI into my development journey has been a revelation, sparking a renaissance of creativity and innovation. By leveraging AI to automate mundane tasks, I’ve been liberated to channel my creative energies into ideation and exploration. Freed from the constraints of repetitive coding and documentation, I’ve discovered newfound avenues for experimentation and invention. This symbiotic relationship between human ingenuity and AI-driven automation has fostered a dynamic ecosystem of creativity, where bold ideas flourish and innovation thrives.

Cultivating a Culture of Innovation:

At the organizational level, AI integration has catalyzed a cultural shift towards innovation and collaboration. By democratizing access to advanced AI tools and fostering a culture of experimentation, organizations can empower their teams to push the boundaries of what’s possible. Through cross-functional collaboration and iterative prototyping, developers can harness the power of AI to transform abstract concepts into tangible realities. This collaborative ethos fuels a virtuous cycle of innovation, where every idea is nurtured, refined, and ultimately brought to fruition with unprecedented speed and precision.

Realizing Visionary Ideas with AI Prototyping:

AI prototyping stands as a beacon of possibility, enabling developers to transform visionary concepts into tangible solutions with unparalleled efficiency. Through rapid prototyping and iterative refinement, developers can explore a multitude of possibilities, pushing the boundaries of innovation with each iteration. Whether it’s developing groundbreaking software applications or pioneering disruptive technologies, AI prototyping catalyzes the realization of bold ideas and drives meaningful change. With AI as a trusted ally in the creative process, the only limit to innovation is the boundless imagination of the developer.

Overcoming Skepticism and Challenges

Addressing Common Concerns:

The integration of AI into the development landscape is not without its skeptics and naysayers. Common concerns, ranging from the perceived complexity of AI implementation to apprehensions about job displacement, warrant careful consideration. However, my experience has shown that understanding and addressing these concerns head-on is pivotal. Through transparent communication and education, the potential of AI becomes more apparent, demystifying misconceptions and fostering a more inclusive and informed approach.

Strategies for Maximizing AI Benefits:

Navigating the challenges posed by AI integration requires a strategic mindset. From selecting the right AI tools to fostering a culture of continuous learning, implementing robust strategies is essential. Encouraging collaboration between AI systems and human developers, rather than viewing them as adversaries, enhances the collective potential. Establishing clear guidelines for AI implementation, ensuring data privacy, and prioritizing ethical considerations are foundational elements in maximizing the benefits while mitigating challenges.

Guidance for Developers Delving into AI:

For developers contemplating the leap into AI-driven workflows, guidance is paramount. Offering training programs, mentorship initiatives, and access to resources facilitates a smoother transition. Acknowledging the learning curve and providing support structures can empower developers to embrace the possibilities presented by AI, transforming skepticism into enthusiasm and challenges into opportunities for growth.

Looking Ahead: The Future of AI in Software Development

Evolving Trends and Advancements:

As we gaze into the future, the trajectory of AI in software development promises to be nothing short of revolutionary. Emerging trends such as federated learning, generative adversarial networks (GANs), and reinforcement learning are poised to redefine the boundaries of what’s possible. These advancements, coupled with the democratization of AI tools and platforms, herald a new era of innovation and disruption in the software development landscape.

Envisioning AI’s Continued Impact:

Looking ahead, the influence of AI in software development will continue to permeate every facet of the development lifecycle. From accelerating prototyping and enhancing user experiences to optimizing performance and facilitating autonomous decision-making, AI will play an increasingly pivotal role. Moreover, the symbiotic relationship between human creativity and AI-driven automation will drive unprecedented levels of innovation, empowering developers to realize their boldest visions.

Call to Embrace AI-Driven Innovation:

As we stand on the cusp of this transformative journey, the call to embrace AI-driven innovation grows louder. By embracing AI as a catalyst for creativity and a tool for empowerment, developers can unlock boundless possibilities and chart a course toward a future limited only by imagination. Together, let us embrace the transformative power of AI and embark on a journey of innovation, collaboration, and limitless potential.

Conclusion

In conclusion, the transition from coder to creator in 2024 is not merely a shift in perspective but a seismic evolution in the software development paradigm. Through the integration of AI prototyping, developers are empowered to transcend the constraints of traditional coding, unleashing their full creative potential and redefining the boundaries of innovation. As we navigate this exhilarating journey together, let us embrace the transformative power of AI and embark on a path toward a future where creativity knows no bounds.

Thank you for joining me on this exploration of how AI will revolutionize my workflow in 2024. Here’s to a future fueled by innovation, collaboration, and the limitless possibilities of AI.

Follow Me On Social Media

Follow Me On Youtube!

Follow my YouTube account

Get Your Next Domain Cheap & Support The Channel

I use Namecheap for all of my domains! Whenever I need a cheap solution for a proof-of-concept project I grab a domain name for as little as $1! When you sign up and buy your first domain with Namecheap I get a commission, it’s a great way to get a quality service and support this platform!

Get Your Next Domain Cheap
CLICK HERE

Become A Sponsor

Open-source work is free to use but it is not free to develop. If you enjoy my content and would like to see more please consider becoming a sponsor on Github or Patreon! Not only do you support me but you are funding tech programs for at risk youth in Louisville, Kentucky.

Join The Newsletter

By joining the newsletter, you get first access to all of my blogs, events, and other brand-related content delivered directly to your inbox. It’s 100% free and you can opt out at any time!

Check The Shop

You can also consider visiting the official #CodeLife shop! I have my own clothing/accessory line for techies as well as courses designed by me covering a range of software engineering topics.


Exploring the Difference Between Artificial Life and Biological Life in AI Ethics

Introduction

AI ethics is a complex and thought-provoking field that raises important questions about the impact and boundaries of artificial intelligence. One particular topic of interest is the distinction between artificial life and biological life. In this blog post, we will delve into this subject, exploring the characteristics that set artificial life apart from biological life and the ethical considerations that arise from this distinction.

Defining Artificial Life and Biological Life

Artificial life refers to life forms that are created or simulated by artificial intelligence or other human-made systems. These life forms are often based on programmed instructions or algorithms, allowing them to exhibit behavior and adapt to their environment. On the other hand, biological life refers to life forms that exist naturally and evolve through biological processes such as reproduction and genetic inheritance.

Origins and Replication

Artificial life is brought into existence through human intervention or design. It is the result of intentional programming or simulation. In contrast, biological life originates through natural processes, with life forms arising from the reproduction of existing organisms. Biological life has a long history of evolution and adaptation, while artificial life is born from deliberate human action.

Autonomy and Consciousness

Autonomy and consciousness are fundamental aspects of life that differentiate artificial life from biological life. While artificial life exhibits a certain level of autonomy and decision-making capabilities based on programmed instructions or learning algorithms, current AI systems have limitations that prevent them from achieving true autonomy or consciousness.

Artificial life possesses a degree of autonomy that allows it to process information, make decisions, and exhibit behavior based on its programming. For example, autonomous vehicles use AI algorithms to navigate roads, interpret traffic signals, and respond to various driving situations. However, their autonomy is bounded by the predefined rules and algorithms set by human programmers.

Consciousness, on the other hand, refers to subjective awareness and self-perception. It involves the ability to have experiences, emotions, and a sense of self. While artificial life may simulate certain aspects of consciousness, such as recognizing objects or engaging in natural language processing, it falls short of possessing true subjective consciousness. For example, a ChatGPT Discord bot is not going to be conscious.

Biological life, in contrast, exhibits a remarkable degree of autonomy and consciousness. Living organisms have intricate nervous systems that allow for complex cognitive processes and subjective experiences. Humans, for instance, have self-awareness and the capacity for introspective thought. We possess a rich inner world of thoughts, emotions, and sensations that contribute to our conscious experience.

The distinction between artificial life and biological life becomes apparent when we consider the limitations of AI systems in achieving true autonomy and consciousness. AI algorithms are designed to process information and make decisions based on predetermined rules or patterns, but they lack the deep and multi-faceted awareness inherent in biological organisms.

As AI research progresses, there are ongoing debates about the possibility of creating artificial life forms with consciousness. Some argue that as AI becomes more sophisticated and capable of complex learning and adaptation, it may eventually achieve a level of consciousness comparable to biological beings. Others contend that consciousness is not solely a product of computation but involves biological, embodied, and experiential aspects that cannot be replicated by purely artificial means.

Exploring the boundaries of autonomy and consciousness in artificial life raises profound philosophical and ethical questions. It prompts us to consider the nature of consciousness, the implications of creating conscious artificial entities, and the ethical responsibilities associated with their development and treatment.

Adaptation and Evolution

Adaptation and evolution are key processes that distinguish biological life from artificial life. While artificial life can be programmed to adapt to changing environments and display evolutionary-like behavior, its adaptation is limited to the scope of its programming and lacks the dynamic and unpredictable nature of biological evolution.

Artificial life is designed to adapt to specific conditions or challenges within the parameters set by its creators. For example, machine learning algorithms can optimize their performance based on feedback and data, allowing AI systems to improve their accuracy and efficiency over time. Genetic algorithms, another example of artificial life, employ a process of mutation and selection to optimize solutions to specific problems.
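
As a toy illustration of that mutate-and-select loop, here is a small sketch; the fitness function, step size, and parameters are arbitrary choices for the example, not drawn from any real system:

// Toy genetic algorithm: evolve a population of numbers toward a target value
function evolve (target, generations = 100, populationSize = 20) {
  let population = Array.from({ length: populationSize }, () => Math.random() * 100)
  const fitness = (x) => -Math.abs(target - x) // closer to the target = fitter

  for (let g = 0; g < generations; g++) {
    // Selection: keep the fitter half of the population
    population.sort((a, b) => fitness(b) - fitness(a))
    const survivors = population.slice(0, populationSize / 2)
    // Mutation: each survivor produces one slightly perturbed offspring
    const offspring = survivors.map(x => x + (Math.random() - 0.5))
    population = survivors.concat(offspring)
  }
  return population[0]
}

console.log(evolve(42)) // converges toward 42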

However, the adaptation of artificial life is constrained by the preconceived rules, algorithms, and objectives imposed upon it. It lacks the inherent capacity for open-ended exploration and innovation that characterizes biological evolution. Artificial life forms cannot undergo the kind of continuous and unpredictable changes that lead to the emergence of new species or the development of novel traits observed in the natural world.

In contrast, biological life forms undergo adaptation and evolution through a combination of genetic variation, natural selection, and environmental factors. Through genetic recombination and mutation, organisms develop unique genetic traits that provide advantages or disadvantages in their specific environments. These variations are then subjected to the process of natural selection, where individuals with beneficial traits are more likely to survive and reproduce, passing those advantageous traits to future generations.

The dynamic nature of biological evolution allows for the emergence of new species and the continuous diversification of life forms. The interplay between genetic variation, natural selection, and environmental pressures leads to the incredible biodiversity and complexity observed in the natural world.

The distinction between the adaptation and evolution of artificial life and biological life has ethical implications. While artificial life can be designed to serve specific purposes or solve particular problems, the unintended consequences and potential limitations of these design choices must be carefully considered. Artificial life can never fully replicate the intricate and nuanced processes of biological evolution, which have shaped ecosystems over millions of years.

Moreover, the responsible development and deployment of artificial life require careful ethical considerations. The intentional creation and manipulation of life forms raise questions about the potential impacts on existing ecosystems, the risks of unintended consequences, and the balance between human agency and the preservation of natural systems.

By studying and understanding the mechanisms of biological adaptation and evolution, we can gain insights into the complexity and resilience of natural life. This understanding can inform the responsible design and implementation of artificial life, ensuring that it aligns with ethical principles and avoids undue harm or disruption.


Ethical Considerations

The distinction between artificial life and biological life carries significant ethical implications. The creation and treatment of artificial life raise profound questions of responsibility and moral agency. Consider the following examples:

  1. Rights and Protections: Should artificial life forms be granted rights and protections? As we create intelligent and autonomous artificial entities, it becomes crucial to examine whether they should be afforded certain rights and ethical considerations. This includes questions about their well-being, freedom from harm, and the potential for exploitation or abuse.
  2. Consequences and Implications: What are the potential consequences and implications of creating artificial life? Introducing artificial life forms into society can have far-reaching effects. We need to assess the impact on existing ecosystems, potential disruptions to the balance of nature, and the ethical implications of creating life with specific functions or purposes. Additionally, considering the potential misuse or unintended consequences of artificial life raises concerns about the responsible development and use of AI.

These ethical considerations urge us to reflect on the ethical dimensions of our actions as creators and the long-term implications of introducing artificial life forms into our world.

Philosophical and Metaphysical Perspectives

Beyond the realm of ethics, the distinction between artificial life and biological life leads us into philosophical and metaphysical territories. Philosophical viewpoints on the nature of life and consciousness, such as materialism, dualism, and panpsychism, offer different perspectives on the possibility of artificial life and its relationship to biological life. Exploring these perspectives encourages deep reflection on the fundamental nature of life itself.

Conclusion

Understanding the difference between artificial life and biological life is crucial for navigating the ethical implications of artificial intelligence. As we continue to push the boundaries of AI, it is important to consider the limitations and unique qualities of artificial life compared to the intricate tapestry of biological life. By engaging in thoughtful discourse and ethical reflection, we can shape a future where AI is developed responsibly and in alignment with our values.

We hope this blog post has shed light on the distinction between artificial life and biological life and the ethical considerations that arise from it. We encourage you to continue exploring this fascinating topic and to share your thoughts on the matter.

Thank you for reading!



5 Ways ChatGPT Can Augment Your Engineering Team

Introduction

As technology advances, so do the challenges faced by engineering teams. The complexities of modern software development can be overwhelming, from the ever-growing list of programming languages and frameworks to the increasing need for automation and speed. This is where ChatGPT comes in. ChatGPT is an AI language model that can help your engineering team in a number of ways, from speeding up research and development to improving code quality and documentation. In this blog post, we’ll explore the benefits of using ChatGPT in your engineering team and how it can help you build better software.

  1. Research and Development

One of the biggest challenges faced by engineering teams is staying up-to-date with the latest technologies and techniques. With so much information available online, it can be difficult to know where to start. ChatGPT can help by providing relevant information and insights based on your team’s queries. Whether you need help finding a solution to a specific problem or want to know more about a particular technology, ChatGPT can quickly generate accurate and informative responses.

  2. Code Reviews

Code reviews are a critical part of the software development process, but they can be time-consuming and often require a lot of manual effort. ChatGPT can help by automating parts of the code review process, providing suggestions for code improvements based on best practices and industry standards. This can help your team save time and focus on other tasks, while still ensuring that your code meets the highest standards of quality (a sketch of what this could look like follows this list).

  3. Documentation

Documentation is another critical aspect of software development, but it can be a tedious and time-consuming task. ChatGPT can help by automatically generating documentation based on your codebase and other relevant information. This can save your team a lot of time and effort, while still ensuring that your documentation is accurate and up-to-date.

  4. Testing and Quality Assurance

Testing and quality assurance are essential parts of the software development process, but they can also be complex and time-consuming tasks. ChatGPT can help by providing suggestions for testing strategies and identifying potential issues based on your team’s input. This can help your team identify and address issues more quickly, improving the quality and reliability of your software.

  5. Productivity

ChatGPT can also help improve the productivity of your engineering team by providing assistance with tasks such as project management and communication. By integrating ChatGPT with other tools and platforms, such as project management tools and chatbots, your team can stay organized and focused on their tasks. This can help reduce distractions and improve efficiency, allowing your team to deliver higher-quality software more quickly.
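
As promised under Code Reviews above, here is a minimal sketch of how a team might wire ChatGPT into a review step using the openai Node.js client; the reviewDiff helper and the system prompt are illustrative assumptions:

require('dotenv').config()
const { Configuration, OpenAIApi } = require('openai')

const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }))

// Hypothetical helper: send a diff to the model and get review notes back
async function reviewDiff (diff) {
  const completion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a senior engineer reviewing a pull request. Point out bugs, style issues, and missing tests.' },
      { role: 'user', content: diff }
    ]
  })
  return completion.data.choices[0].message.content
}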

From speeding up research and development to improving code quality and documentation, ChatGPT can help your team become more efficient and effective. However, getting your team acclimated to ChatGPT can be a challenge. In the rest of this post, we’ll explore some tips and best practices for introducing ChatGPT to your engineering team and ensuring a smooth transition.

  1. Provide Training and Education

Before introducing ChatGPT to your team, it’s important to provide adequate training and education. This can include tutorials on how to use ChatGPT, best practices for integrating it into your workflow, and examples of how it can benefit your team. By providing the right resources and information, you can help your team feel more confident and prepared to use ChatGPT effectively.

  2. Identify Key Use Cases

It’s important to identify key use cases for ChatGPT within your team’s workflow. This can help your team understand how ChatGPT can benefit them and where it fits into their daily tasks. For example, you might identify use cases such as automating code reviews or generating documentation. By identifying these key use cases, you can help your team see the value of ChatGPT and how it can help them work more efficiently.

  3. Start Small

When introducing ChatGPT to your team, it’s important to start small. This can mean introducing it on a small project or task, or only using it for specific use cases. By starting small, you can help your team become more familiar with ChatGPT and build confidence in its abilities. As they become more comfortable with ChatGPT, you can gradually increase its use and expand its functionality.

  4. Encourage Feedback

As your team begins to use ChatGPT, it’s important to encourage feedback. This can include feedback on its effectiveness, ease of use, and any issues or challenges that arise. By soliciting feedback from your team, you can identify areas for improvement and make adjustments as needed. Additionally, by involving your team in the process, you can help build a sense of ownership and investment in ChatGPT’s success.

  5. Integrate ChatGPT with Existing Tools and Processes

To ensure a smooth transition, it’s important to integrate ChatGPT with your team’s existing tools and processes. This can mean integrating it with project management tools, communication platforms, or other software tools that your team uses regularly. By integrating ChatGPT with these tools, you can help ensure that it becomes a seamless part of your team’s workflow.

  6. Provide Support and Resources

Finally, it’s important to provide ongoing support and resources as your team begins to use ChatGPT. This can include documentation, tutorials, and a dedicated point of contact for any issues or questions that arise. By providing the right resources and support, you can help ensure that your team is able to use ChatGPT effectively and get the most out of its capabilities.


Conclusion

In conclusion, ChatGPT can be an invaluable asset to your engineering team. Whether you need help with research and development, code reviews, documentation, testing and quality assurance, or productivity, ChatGPT can provide accurate and informative insights and suggestions. By working alongside ChatGPT, your team can become more efficient and effective, allowing you to build better software more quickly and with less effort. So if you’re looking for ways to improve your engineering team’s productivity and quality, consider integrating ChatGPT into your workflow today.

Getting your engineering team acclimated to ChatGPT can be a challenge, but it’s worth the effort. By providing training and education, identifying key use cases, starting small, encouraging feedback, integrating ChatGPT with existing tools and processes, and providing ongoing support and resources, you can help ensure a smooth transition and maximize the benefits of ChatGPT for your team. So if you’re considering introducing ChatGPT to your engineering team, follow these best practices and watch as your team becomes more efficient and effective than ever before.



Create A ChatGPT Discord Bot With Slash Commands

Adding ChatGPT and OpenAI To Your Discord Server

I love writing Discord bots; my favorite one was the Discord Twitter Bot. Now that the ChatGPT API is available to developers, I decided to create an OpenAI ChatGPT Discord bot in Node.js and share the source code with you. This bot adds a slash command to your server called /generate-prompt that takes in a prompt string, and the bot returns a result using the Chat Completion API.

MastaGPT Bot in action

The Source Code

This is a dockerized Node.js application that uses GitHub Actions to deploy the Docker container to Docker Hub and GitHub Container Registry. The index.js file loads an instance of OpenAI and Discord.js, loads the slash commands from a commands directory, and registers them with Discord. It then listens for interactions (i.e., a user invoking the slash command) and calls the generate method, which uses the gpt-3.5-turbo OpenAI language model to generate a response and reply to that message in Discord.


Package.json

Below is an example of how you might want your package.json file to look.

{
  "name": "discord-gpt-bot",
  "version": "1.1.0",
  "description": "Add ChatGPT to your Discord server. Responds with a ChatGPT generated text when @.",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/mastashake08/discord-gpt-bot.git"
  },
  "keywords": [
    "OpenAI",
    "ChatGPT",
    "gpt",
    "discord",
    "bots",
    "discord"
  ],
  "author": "Mastashake08",
  "license": "GPL3",
  "bugs": {
    "url": "https://github.com/mastashake08/discord-gpt-bot/issues"
  },
  "homepage": "https://github.com/mastashake08/discord-gpt-bot#readme",
  "dependencies": {
    "discord.js": "^14.7.1",
    "dotenv": "^16.0.3",
    "openai": "^3.2.1"
  }
}

Index File

This is where most of the magic happens: the index.js file loads our slash commands, starts OpenAI, and starts the Discord.js instance. All secret keys and tokens are loaded from a .env file using the dotenv package. The generate function calls openai.createChatCompletion(), which returns our generated text (gpt-3.5-turbo is a chat model, so it goes through the chat completions endpoint rather than createCompletion()).

require('dotenv').config()
const { Client, Events, Collection, REST, Routes } = require('discord.js');
const fs = require('node:fs');
const path = require('node:path');
const { Configuration, OpenAIApi } = require("openai")
const client = new Client({ intents: 2048 })
client.commands = new Collection()
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

async function generate(prompt, model="gpt-3.5-turbo") {
  // gpt-3.5-turbo is a chat model, so it must use the chat completions
  // endpoint; createCompletion only supports legacy completion models
  const completion = await openai.createChatCompletion({
    model: model,
    messages: [{ role: "user", content: prompt }]
  });
  return completion.data.choices[0].message.content;
}


// load the slash commands from the commands directory
const commandsPath = path.join(__dirname, 'commands');
const commandFiles = fs.readdirSync(commandsPath).filter(file => file.endsWith('.js'));

const commands = []
// Grab the SlashCommandBuilder#toJSON() output of each command's data for deployment
for (const file of commandFiles) {
	const command = require(`./commands/${file}`);
	commands.push(command.data.toJSON());
}

// Construct and prepare an instance of the REST module
const rest = new REST({ version: '10' }).setToken(process.env.DISCORD_TOKEN);

// and deploy your commands!
(async () => {
	try {
		console.log(`Started refreshing ${commands.length} application (/) commands.`);

		// The put method is used to fully refresh all commands in the guild with the current set
		const data = await rest.put(
			Routes.applicationGuildCommands(process.env.DISCORD_CLIENT_ID, process.env.DISCORD_GUILD_ID),
			{ body: commands },
		);

		console.log(`Successfully reloaded ${data.length} application (/) commands.`);
	} catch (error) {
		// And of course, make sure you catch and log any errors!
		console.error(error);
	}
})();

client.login(process.env.DISCORD_TOKEN)

client.on(Events.InteractionCreate, async interaction => {
  if (interaction.commandName === 'generate-prompt') {
    // Defer the reply: the OpenAI call can take longer than Discord's
    // 3-second interaction acknowledgement window
    await interaction.deferReply();
    const res = await generate(interaction.options.getString('prompt'))
    await interaction.editReply({ content: res });
  }
});

Commands

I created a commands directory and inside created a prompt.js file. This file is responsible for using the SlashCommandBuilder class from Discord.js to create our command and options.

require('dotenv').config()

const { Configuration, OpenAIApi } = require("openai")
const { SlashCommandBuilder } = require('discord.js');
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

async function generate(prompt, model="gpt-3.5-turbo") {
  // gpt-3.5-turbo is a chat model, so it must use the chat completions
  // endpoint; createCompletion only supports legacy completion models
  const completion = await openai.createChatCompletion({
    model: model,
    messages: [{ role: "user", content: prompt }]
  });
  return completion.data.choices[0].message.content;
}
module.exports = {
	data: new SlashCommandBuilder()
   .setName('generate-prompt')
   .setDescription('Generate a ChatGPT response!')
   .addStringOption(option =>
     option
     .setName('prompt')
     .setDescription('The prompt to generate')
     .setRequired(true)
    ),

	async execute(interaction) {
		// Defer first so the interaction doesn't time out while waiting on OpenAI
		await interaction.deferReply();
		const res = await generate(interaction.options.getString('prompt'))
		await interaction.editReply(res);
	},
};

Dockerfile

I created a Dockerfile so that anyone can run this application without having to build from source. It creates a Node 16 image, copies the code files over, runs npm install, then runs the command. Passing an --env-file flag to the docker run command supplies a .env file to the script.

# syntax=docker/dockerfile:1

FROM node:16.15.0
ENV NODE_ENV=production

WORKDIR /

COPY ["package.json", "package-lock.json*", "./"]

RUN npm install --production

COPY . .

CMD [ "node", "index.js" ]

GitHub Actions

Automating the build process is the final part of the project. Whenever I release a new tagged version of the code, the GitHub Action packages the Docker image and publishes it to Docker Hub as well as the GitHub Container Registry. From there I either run the Docker image locally on my Raspberry Pi or run it in the cloud. The listing below contains both workflow files: the Docker Hub workflow first, followed by the GitHub Container Registry workflow.

# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.

name: Publish Docker image to Docker Hub

on:
  release:
    types: [published]

jobs:
  push_to_registry:
    name: Push Docker image to Docker Hub
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v3

      - name: Log in to Docker Hub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: ${{ github.repository }}

      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
# ----------------------------------------------------------------------
# Second workflow file: publish the image to GitHub Container Registry
# ----------------------------------------------------------------------
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.

name: Create and publish a Docker image to Github Packages

on:
  release:
    types: [published]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Log in to the Container registry
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

Usage

Via Node

cp .env.example .env

# Set variables
DISCORD_TOKEN=
DISCORD_CHANNEL_ID=
DISCORD_CLIENT_ID=
DISCORD_GUILD_ID=
OPENAI_API_KEY=

# Call program
node index.js

Via Docker

docker run  --env-file=<PATH TO .ENV> -d --name=<NAME> mastashake08/discord-gpt-bot:latest



Creating A Web IoT Javascript Package: NFC, Bluetooth, Web Serial, Web USB

HTML and IoT

IoT has been increasing in relevance over the past decade. The idea of connecting physical devices to a website has always intrigued me. Bringing the physical world into the digital in my opinion brings us closer together globally.

In this tutorial, I will go over how to build a WebIOT package so you can add IoT capabilities to your Javascript applications. Please like, comment, and share this article; it really helps.

Introducing the WebIoT NPM Package

Following my FOSS commitment for 2023 (check out SpeechKit and Laravel OpenAI API), my March package is called WebIOT. It brings together a collection of Web APIs to give developers easy functions for interacting with IoT devices, with classes for NFC, Bluetooth, Serial, and USB. I am actively looking for PRs and constructive criticism.

Web NFC

The Web NFC API is a cool API that allows for interaction with NFC chips. It consists of a message, a reader and a record.

The Web NFC API allows exchanging data over NFC via light-weight NFC Data Exchange Format (NDEF) messages.

https://developer.mozilla.org/en-US/docs/Web/API/Web_NFC_API

You can get NFC chips really cheap on Amazon (use this link and I get a commission!). You can use NFC for all sorts of cool interactive things:

  • Extend offline activity: NFC is an offline technology; it doesn’t need to be connected to a network in order to exchange data, since it gets its electricity from the close-contact radio wave exchange hitting the wire (yay physics). A cool implementation is adding real-world nodes for your web game: when users tap one, they get a special prize.
  • IoT device configurations: You can have users on your website get configuration data for your IoT devices without them having to download any additional software. This is extremely useful when paired with the Web Bluetooth API for GATT server configs.
  • Sending data to devices: NFC is a secure way to write data to your IoT devices and let the devices handle the processing.
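
For a sense of what this looks like in code, here is a minimal sketch of the raw Web NFC API (currently Chrome on Android only; HTTPS and a user gesture are required, and the tapDemo wrapper is just for illustration):

async function tapDemo () {
  if (!('NDEFReader' in window)) return alert('Web NFC is not supported')
  const ndef = new NDEFReader()
  ndef.onreading = (event) => {
    for (const record of event.message.records) {
      console.log(record.recordType, record.data) // data is a DataView
    }
  }
  // Start scanning; each tapped tag fires an onreading event
  await ndef.scan()
  // Write a simple text record to the next tag tapped
  await ndef.write('Hello from the web!')
}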

Web Bluetooth

The Web Bluetooth API provides the ability to connect and interact with Bluetooth Low Energy peripherals.

https://developer.mozilla.org/en-US/docs/Web/API/Web_Bluetooth_API

The Web Bluetooth API allows developers to connect to Bluetooth LE devices and read and write data. Some useful implementations of Web Bluetooth:

  • Get local device updates
  • Run webpage functionality based on device state, e.g., a heart monitor making an animation run at a certain BPM
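
Here is a minimal sketch of the raw API reading the standard battery level characteristic (it must be triggered by a user gesture on an HTTPS page):

async function readBatteryLevel () {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ['battery_service'] }]
  })
  const server = await device.gatt.connect()
  const service = await server.getPrimaryService('battery_service')
  const characteristic = await service.getCharacteristic('battery_level')
  const value = await characteristic.readValue() // DataView
  console.log(`Battery level: ${value.getUint8(0)}%`)
}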

Web Serial

The Web Serial API provides a way for websites to read from and write to serial devices. These devices may be connected via a serial port, or be USB or Bluetooth devices that emulate a serial port.

https://developer.mozilla.org/en-US/docs/Web/API/Web_Serial_API

Web Serial allows us to connect to generic serial ports and interact with our devices. This means we can do things like connect our webpages to embedded devices such as a Raspberry Pi.
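
A minimal sketch of the raw API; the baud rate is an assumption and has to match your device:

async function readFromSerial () {
  // Must be called from a user gesture
  const port = await navigator.serial.requestPort()
  await port.open({ baudRate: 9600 })
  const reader = port.readable.getReader()
  const { value } = await reader.read() // value is a Uint8Array
  reader.releaseLock()
  return value
}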

Web USB

The WebUSB API provides a way to expose non-standard Universal Serial Bus (USB) compatible devices services to the web, to make USB safer and easier to use.

https://developer.mozilla.org/en-US/docs/Web/API/WebUSB_API

The Web USB API allows us to work directly with USB peripherals and, by extension, if those devices have programs, run them.
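
A minimal sketch of the raw API; the configuration, interface, and endpoint numbers below are placeholders that depend entirely on your device:

async function writeToUsbDevice () {
  // Must be called from a user gesture
  const device = await navigator.usb.requestDevice({ filters: [] })
  await device.open()
  if (device.configuration === null) await device.selectConfiguration(1)
  await device.claimInterface(0)
  // Send 64 bytes to endpoint 1
  await device.transferOut(1, new Uint8Array(64))
  await device.close()
}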

Current Browser Limitations

Most of these APIs cannot be used on iOS currently.


The Source Code

Init The Project

Create a new project and run the npm init script:

mkdir web-iot && cd web-iot
npm init

The package.json looks like this:

{
  "name": "@mastashake08/web-iot",
  "version": "1.0.0",
  "description": "Connect to your IoT devices via usb, serial, NFC or Bluetooth",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/mastashake08/web-iot.git"
  },
  "keywords": [
    "iot",
    "webserial",
    "bluetooth",
    "webusb",
    "nfc"
  ],
  "author": "Mastashake",
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/mastashake08/web-iot/issues"
  },
  "homepage": "https://github.com/mastashake08/web-iot#readme"
}

The index.js file

This is the entry point for the package; it simply exports our classes:

import { WebIOT } from './classes/WebIOT'
import { NFCManager } from './classes/NFCManager'
import { BluetoothManager } from './classes/BluetoothManager'
import { SerialManager } from './classes/SerialManager'
import { USBManager } from './classes/USBManager'
export {
  WebIOT,
  NFCManager,
  BluetoothManager,
  SerialManager,
  USBManager
}

The WebIOT Class

This is the base class for all our managers. It contains some functions for sending data to a remote server, a capability that’s often needed when dealing with IoT devices.

export class WebIOT {
  debug = false
  constructor(debug = false) {
    this.debug = debug
  }
  sendData (url, options = {}, type = 'fetch') {
    try {
      switch (type) {
        case 'beacon':
          return this.sendBeacon(url, options)
        case 'fetch':
        default:
          return this.sendFetch(url, options)
      }
    } catch (e) {
      this.handleError(e)
    }
  }

  sendBeacon (url, data) {
    try {
      navigator.sendBeacon(url, data)
    } catch (e) {
      this.handleError(e)
    }
  }

  async sendFetch(url, options) {
    try {
      const res = await fetch(url, options)
      if (!res.ok) {
        throw new Error(`HTTP error! Status: ${res.status}`)
      } else {
        return res
      }
    } catch (e) {
      this.handleError(e)
    }
  }

  handleError (e) {
    if(this.debug) {
      alert(e.message)
      console.log(e.message)
    } else {
      throw e
    }
  }
}

The USBManager

The USBManager is responsible for working with USB devices.

import { WebIOT } from './WebIOT'
export class USBManager extends WebIOT {
  devices = []
  selectedDevice = null

  constructor (debug = false) {
    super(debug)
  }

  async getDevices () {
    this.devices = await navigator.usb.getDevices()
    return this.devices
  }

  async requestDevice (options = {}) {
    this.selectedDevice = await navigator.usb.requestDevice(options)
    return this.selectedDevice
  }

  async openDevice () {
    return await this.connectDevice()
  }

  async closeDevice () {
    await this.selectedDevice.close()
  }

  async connectDevice () {
    await this.selectedDevice.open()
    // Select the first configuration if none is active, then claim interface 0
    if (this.selectedDevice.configuration === null) {
      await this.selectedDevice.selectConfiguration(1)
    }
    await this.selectedDevice.claimInterface(0)
    return this.selectedDevice
  }

  async writeData (endpointNumber, data) {
    return await this.selectedDevice.transferOut(endpointNumber, data)
  }

  // length is the number of bytes to read from the device
  async readData (endpointNumber, length) {
    return await this.selectedDevice.transferIn(endpointNumber, length)
  }
}

The SerialManager

The SerialManager class manages serial connections.

import { WebIOT } from './WebIOT'
export class SerialManager extends WebIOT {
  ports = []
  selectedPort = null

  constructor (debug = false) {
    super(debug)
  }

  async getPorts () {
    this.ports = await navigator.serial.getPorts()
    return this.ports
  }

  async requestPort (options = {}) {
    this.selectedPort = await navigator.serial.requestPort(options)
    return this.selectedPort
  }

  async openPort (options) {
    await this.selectedPort.open(options)
  }

  async closePort () {
    await this.selectedPort.close()
  }

  getInfo () {
    return this.selectedPort.getInfo()
  }

  async setSignals (options) {
    await this.selectedPort.setSignals(options)
  }

  async getSignals () {
    return await this.selectedPort.getSignals()
  }

  async readData () {
    const reader = this.selectedPort.readable.getReader()

    // Listen to data coming from the serial device.
    while (true) {
      const { value, done } = await reader.read()
      if (done) {
        // Allow the serial port to be closed later.
        reader.releaseLock()
        break
      }
      // value is a Uint8Array; release the lock so the stream can be reused.
      reader.releaseLock()
      return value
    }
  }

  async writeData (data) {
    const writer = this.selectedPort.writable.getWriter()

    await writer.write(data)
    // Allow the serial port to be closed later.
    writer.releaseLock()
  }
}

The BluetoothManager

The BluetoothManager is responsible for managing Bluetooth devices

import { WebIOT } from './WebIOT'
export class BluetoothManager extends WebIOT {
  bluetooth = null
  device = null
  server = null
  selectedService = null
  services = []
  characteristic = null
  currentValue = null
  constructor (debug = false) {
    super(debug)
    navigator.bluetooth.getAvailability().then((available) => {
      if (available) {
        this.bluetooth = navigator.bluetooth
      } else {
        alert("Doh! Bluetooth is not supported");
      }
    });

  }

  async getDevices (options = {acceptAllDevices: true}) {
    return await this.requestDevice(options)
  }

  async requestDevice (options) {
    try {
      this.device = await navigator.bluetooth.requestDevice(options)
      return this.device
    } catch(e) {
      alert(e.message)
    }
  }

  async connectToServer () {
    this.server = await this.device.gatt.connect()
    return this.server
  }

  async getService (service) {
    this.selectedService = await this.server.getPrimaryService(service)
    return this.selectedService
  }

  async getServices () {
    this.services = await this.server.getPrimaryServices()
    return this.services
  }

  async getCharacteristic (char) {
    this.characteristic = await this.selectedService.getCharacteristic(char)
    return this.characteristic
  }

  async getCharacteristics () {
    return await this.selectedService.getCharacteristics()
  }

  async getValue () {
    this.currentValue = await this.characteristic.readValue()
    return this.currentValue
  }

  async writeValue(data) {
    await this.characteristic.writeValue(data)
  }
}

The NFCManager

Work with NFC tags with the NFCManager

import { WebIOT } from './WebIOT'
export class NFCManager extends WebIOT {
  nfc = null
  constructor (debug = false) {
    super(debug)
    if ('NDEFReader' in window) { /* Scan and write NFC tags */
      this.nfc = new NDEFReader()
    } else {
      alert('NFC is not supported in your browser')
    }

  }

  startNFC () {
    this.nfc = new NDEFReader()
  }
  async readNFCData (readCb, errorCb = (event) => console.log(event)) {
    // Assign the callbacks (don't invoke them), then start scanning
    this.nfc.onreading = readCb
    this.nfc.onreadingerror = errorCb
    await this.nfc.scan()
  }
  async writeNFCData (records, errorCb = (event) => console.log(event)) {
    try {
      await this.nfc.write(records)
    } catch (e) {
      errorCb(e)
    }
  }
  async lockNFCTag(errorCb = (event) => console.log(event)) {
    try {
      await this.nfc.makeReadOnly()
    } catch(e) {
      errorCb(e)
    }
  }
  static generateNFC () {
    return new NDEFReader()
  }
}

Using It In Action

USB Example

Select a device and read/write data on endpoint 4

import { USBManager } from '@mastashake08/web-iot'

....
const usb = new USBManager()

// get devices the page already has permission to use
const devices = await usb.getDevices()

// prompt the user to select a single device
const device = await usb.requestDevice()

// open and claim the selected device
await usb.openDevice()

// read 64 bytes of data from endpoint 4 on the device
const readData = await usb.readData(4, 64)

// write 64 bytes of data to endpoint 4
await usb.writeData(4, new Uint8Array(64))

Serial Example

Have a user select a serial device and write 64 bytes of data to it

import { SerialManager } from '@mastashake08/web-iot'

....

const serial = new SerialManager()

// prompt the user to select a port
const port = await serial.requestPort()

// the port must be opened before reading or writing (baud rate is device-specific)
await serial.openPort({ baudRate: 9600 })

// read data
const data = await serial.readData()

// write 64 bytes of data
await serial.writeData(new Uint8Array(64))

Bluetooth Example

Let a user select a Bluetooth device and get the battery level

import { BluetoothManager } from '@mastashake08/web-iot'

....

const bt = new BluetoothManager()

// prompt the user to select a device
const device = await bt.requestDevice(options)

// connect to the device's GATT server before using services
await bt.connectToServer()

// list the available services
const services = await bt.getServices()

// get the battery service
const service = await bt.getService('battery_service')

// get the battery level characteristic
const char = await bt.getCharacteristic('battery_level')

// read the battery value
const battery = await bt.getValue()

// write a value (for writable characteristics)
await bt.writeValue(new Uint8Array(64))

NFC Example

Read, write, and lock tags

import { NFCManager } from '@mastashake08/web-iot'

....
// start NFC
const nfc = new NFCManager()
nfc.startNFC()

// Read a tag (data arrives via the success callback)
await nfc.readNFCData(successCb, errorCb)

// Write to a tag
const writeData = "Hello World"
await nfc.writeNFCData(writeData)

// Lock tag
await nfc.lockNFCTag(errorCb)


Did You Enjoy This Tutorial?

If so please leave a comment and like this article and please share it on social media! I post weekly so please come back for more content!

Follow Me On Social Media

Follow Me On Youtube!

Follow my YouTube account

Get Your Next Domain Cheap & Support The Channel

I use Namecheap for all of my domains! Whenever I need a cheap solution for a proof-of-concept project I grab a domain name for as little as $1! When you sign up and buy your first domain with Namecheap I get a commission, it’s a great way to get a quality service and support this platform!

Get Your Next Domain Cheap
CLICK HERE

Become A Sponsor

Open-source work is free to use but it is not free to develop. If you enjoy my content and would like to see more please consider becoming a sponsor on Github or Patreon! Not only do you support me but you are funding tech programs for at risk youth in Louisville, Kentucky.

Join The Newsletter

By joining the newsletter, you get first access to all of my blogs, events, and other brand-related content delivered directly to your inbox. It’s 100% free and you can opt out at any time!

Check The Shop

You can also consider visiting the official #CodeLife shop! I have my own clothing/accessory line for techies as well as courses designed by me covering a range of software engineering topics.

Posted on 2 Comments

Creating A WebTransport NPM Package – ShakePort

  1. Intro
  2. What Is HTTP/3
  3. Dissecting The WebTransport Class
  4. WebTransport Use Cases
  5. Why I Created ShakePort
  6. The Code

People Are SLEEPING on WebTransport!

Of all my favorite HTML APIs the WebTransport API is definitely TOP 3. MDN explains the WebTransport API as such:

The WebTransport interface of the WebTransport API provides functionality to enable a user agent to connect to an HTTP/3 server, initiate reliable and unreliable transport in either or both directions, and close the connection once it is no longer needed.

https://developer.mozilla.org/en-US/docs/Web/API/WebTransport

It allows a web page to open a bidirectional connection to an HTTP/3 server and exchange data over reliable streams or unreliable UDP datagrams. Seriously, why is no one talking about this?! In this article/tutorial, I will go through the process of creating an NPM package called ShakePort.

What Is HTTP/3

HTTP/3 is the third major version of the Hypertext Transfer Protocol used to exchange information on the World Wide Web, complementing the widely-deployed HTTP/1.1 and HTTP/2. Unlike previous versions which relied on the well-established TCP (published in 1974),[1] HTTP/3 uses QUIC, a multiplexed transport protocol built on UDP.[2] On 6 June 2022, IETF published HTTP/3 as a Proposed Standard in RFC9114.[3]

https://en.wikipedia.org/wiki/HTTP/3

HTTP/3 can be significantly faster than HTTP/1.1 and has much lower latency than its predecessors because it is built on QUIC, which transfers data over UDP instead of TCP. For real-time, latency-sensitive applications that means faster data and faster execution.

Listen To Some Hacker Music While You Code

Follow me on Spotify I make Tech Trap music

Dissecting The WebTransport Class

Constructor

The constructor for WebTransport takes a url and an options parameter. The URL points to the instance of an HTTP/3 server to connect to. The options parameter is an optional plain object with the following shape:

url

A string representing the URL of the HTTP/3 server to connect to. The scheme needs to be HTTPS, and the port number needs to be explicitly specified.

options (optional)

An object containing the following properties:

serverCertificateHashes (optional)

An array of WebTransportHash objects. If specified, it allows the website to connect to a server by authenticating the certificate against the expected certificate hash instead of using the Web public key infrastructure (PKI). This feature allows Web developers to connect to WebTransport servers that would normally find obtaining a publicly trusted certificate challenging, such as hosts that are not publicly routable, or ephemeral hosts like virtual machines.

WebTransportHash objects contain two properties:

algorithm

A string representing the algorithm to use to verify the hash. Any hash using an unknown algorithm will be ignored.

value

A BufferSource representing the hash value.

We instantiate a new instance of WebTransport with the following command

new WebTransport(url, options) 
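
As a slightly fuller sketch (the URL and hash bytes are placeholders, not a real server), connecting with a certificate hash instead of the public PKI might look like this:

// Hypothetical values for illustration only
const url = 'https://example.com:4433/transport'
const certHash = new Uint8Array(32) // SHA-256 digest of the server certificate

const transport = new WebTransport(url, {
  serverCertificateHashes: [
    { algorithm: 'sha-256', value: certHash }
  ]
})
await transport.ready // resolves once the connection is usable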

Instance Properties

The closed read-only property of the WebTransport interface returns a promise that resolves when the transport is closed.

The datagrams read-only property of the WebTransport interface returns a WebTransportDatagramDuplexStream instance that can be used to send and receive datagrams — unreliable data transmission.

“Unreliable” means that transmission of data is not guaranteed, nor is arrival in a specific order. This is fine in some situations and provides very fast delivery. For example, you might want to transmit regular game state updates where each message supersedes the last one that arrives, and order is not important.
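
As a rough sketch (assuming transport is an open WebTransport instance), sending and receiving datagrams looks like this:

// Send an unreliable datagram to the server
const writer = transport.datagrams.writable.getWriter()
await writer.write(new Uint8Array([1, 2, 3]))
writer.releaseLock()

// Read incoming datagrams; each value is a Uint8Array
const reader = transport.datagrams.readable.getReader()
const { value, done } = await reader.read()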

The incomingBidirectionalStreams read-only property of the WebTransport interface represents one or more bidirectional streams opened by the server. Returns a ReadableStream of WebTransportBidirectionalStream objects. Each one can be used to reliably read data from the server and write data back to it.

The incomingUnidirectionalStreams read-only property of the WebTransport interface represents one or more unidirectional streams opened by the server. Returns a ReadableStream of WebTransportReceiveStream objects. Each one can be used to reliably read data from the server.
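
A quick sketch of accepting a stream the server opened (same open-transport assumption as above):

// Each chunk here is a stream the server opened
const incoming = transport.incomingUnidirectionalStreams.getReader()
const { value: receiveStream } = await incoming.read()

// Read the bytes the server sent on that stream
const byteReader = receiveStream.getReader()
const { value: bytes } = await byteReader.read() // Uint8Array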

The ready read-only property of the WebTransport interface returns a promise that resolves when the transport is ready to use.

Instance Methods

The close() method of the WebTransport interface closes an ongoing WebTransport session.

The createBidirectionalStream() method of the WebTransport interface opens a bidirectional stream; it returns a WebTransportBidirectionalStream object containing readable and writable properties, which can be used to reliably read from and write to the server.

The createUnidirectionalStream() method of the WebTransport interface opens a unidirectional stream; it returns a WritableStream object that can be used to reliably write data to the server.
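
And from the client side, opening a stream and reliably writing to it is just as short (again assuming an open transport):

// Open a client-initiated unidirectional stream
const sendStream = await transport.createUnidirectionalStream()
const writer = sendStream.getWriter()
await writer.write(new TextEncoder().encode('hello server'))
await writer.close() // signal that we are done writing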

Use Cases For WebTransport

Web Gaming

WebTransport not only allows for better-performing multiplayer web games, it also allows for CROSS-SYSTEM PLAY! Since WebTransport is an HTTP/3 web standard you can implement it on the web, PlayStation, Xbox, and whatever else may come in the future! You may be wondering why use WebTransport instead of WebRTC. For 1 v 1 multiplayer games WebRTC will do just fine, but what if you want to build a Battle Royale style game that’s 50 v 50? With WebRTC you will run into latency issues because WebRTC is peer-to-peer, whereas WebTransport is client-server. Datagrams travel over UDP, so delivery order is not guaranteed, which is exactly what you want for fast-paced gaming.

WebTransport is a major win for gaming on the web.

IOT

With WebTransport, communicating with IOT devices via the web just got a whole lot easier. You can manage a fleet of hardware devices, gather analytical data, and issue commands in real time. I am currently using a WebTransport server on my Raspberry Pi 4 and a Vue frontend to remotely control my Pi!

Machine Learning

Machine learning requires large datasets. By utilizing WebTransport you can send user data to your server in real time to get better insights from your machine learning models. Say, for example, you are building a recommendation engine: as the user browses the site, you send data in real time based on what they are looking at. The faster you can collect and analyze data, the more profitable your company can become.

Pub/Sub

Using the WebTransport API you can implement a pub/sub system. The simplest use case would be a notification engine (think game HUD updates in multiplayer). You can also do things like implement real-time tickers instead of relying on long-polling techniques.

Why I Created ShakePort

I created ShakePort as a basis for my real-time apps and, more importantly, because I’m building a game and need multiplayer networking. I decided at the beginning of the year I would post on average 1 open source package a month. So far I’m at 4! My philosophy is if you find yourself writing certain code over and over, just package that sh!t up and release it to the world. Chances are there are other developers who you are helping OR you could find devs to help make your package even better! The ShakePort suite is made up of a client (ShakePort Client) and a server (ShakePort Server). This tutorial will focus on the ShakePort Client; I will post the ShakePort Server tutorial later in the year once I finish it.

The Code

This package is actually very simple; it only consists of two files:

  • A WebWorker file
  • The ShakePortClient class

Almost all of the heavy lifting is offloaded to the WebWorker to optimize speed and performance, and window.postMessage() is used to relay data to the main application. This way the developer has full control over how to deal with the datagrams.

Scaffolding

Create a new directory and run the npm init command to create a new NPM package:

mkdir shakeport-client
cd shakeport-client && npm init

The WebWorker

Create a worker.js file in the root of the project and input the following:

let transport, stream = null
onmessage = async (e) => {
  try {
    switch(e.data.event) {
      case 'start':
        transport = await initTransport(e.data.url, e.data.options)
        // WebTransport objects cannot be cloned, so only signal readiness
        postMessage({event: 'start'})
      break;

      case 'setup-bidirectional':
        stream = await setUpBidirectional()
        readData(stream.readable)
      break;

      case 'write-bidirectional':
        await writeData(stream.writable, e.data.data)
      break;

      case 'data':
        // reserved for future use
      break;

      case 'stop':
        await closeTransport(transport)
        postMessage({event: 'stop'})
      break;
    }
  } catch (err) {
    postMessage({event: 'error', error: err.message});
  }
}
async function initTransport(url, options = {}) {
  // Initialize transport connection
  const transport = new WebTransport(url, options);

  // The connection can be used once ready fulfills
  await transport.ready;

  return transport
}

async function readData(readable) {
  const reader = readable.getReader();
  while (true) {
    const {value, done} = await reader.read();
    if (done) {
      break;
    }
    // value is a Uint8Array.
    postMessage({event: 'data-read', data:value});
  }
}

async function writeData(writable, data) {
  const writer = writable.getWriter();
  await writer.write(data)
  writer.releaseLock() // unlock so the stream can be written again later
  postMessage({event: 'data-written', data: data})
}

async function setUpBidirectional() {
  const stream = await transport.createBidirectionalStream();
  // stream is a WebTransportBidirectionalStream
  // stream.readable is a ReadableStream
  // stream.writable is a WritableStream
  return stream
}

async function setUpUnidirectional() {
  const stream = await transport.createUnidirectionalStream();
  // stream is a WritableStream for reliably sending data to the server
  return stream
}

async function closeTransport(transport) {
  // Close the connection and wait for it to settle
  try {
    transport.close()
    await transport.closed
    console.log('The HTTP/3 connection closed gracefully.')
  } catch(error) {
    console.error(`The HTTP/3 connection closed due to ${error}.`)
  }
}

The ShakePortClient Class

Create a class file called ShakePortClient.js in the root directory and fill it in:


export default class ShakePortClient {
  transport = null
  constructor() {
    // Load the worker script in a bundler-friendly way
    this.worker = new Worker(new URL('./worker.js', import.meta.url))
  }

  startClient(data = {url: '', options: {}}) {
    this.worker.postMessage({event:'start', ...data})
    this.worker.onmessage = (event) => {
      switch(event.data.event) {
        case 'start':
          console.log('Started')
          // The WebTransport object itself stays in the worker
          this.transport = true
        break;
        case 'stop':
          this.transport = null
        break;
        case 'error':
          console.log(event.data.error)
        break;
        default:
          window.postMessage(event.data)
        break;
      }
    }
  }

  stopClient () {
    this.worker.postMessage({event:'stop'})
  }


  setUpBidirectional () {
    this.worker.postMessage({event:'setup-bidirectional'})
  }

  setUpUnidirectional () {
    this.worker.postMessage({event:'setup-unidirectional'})
  }

  writeBidirectional (data) {
    this.worker.postMessage({event:'write-bidirectional', data: data})
  }

  writeData (data) {
    this.worker.postMessage({event:'write-data', data: data})
  }

  writeUnidirectional (data) {
    this.worker.postMessage({event:'write-unidirectional', data: data})
  }
  
}

Using The Package

npm install @mastashake08/shakeport-client
Then import and use it:

Import In Your Project

import ShakePortClient from '@mastashake08/shakeport-client'
const spc = new ShakePortClient();
spc.startClient({
  url:'<webtransport_server_url>'
})

Responding To Messages

window.addEventListener("message", (event) => {
  // Only trust messages coming from our own origin
  if (event.origin !== window.location.origin)
    return;

  // event.data is the payload relayed from the worker,
  // e.g. {event: 'data-read', data: <Uint8Array>}
  console.log(event.data)
}, false);


Posted on 4 Comments

Moving The ChatGPT Laravel API Into Composer – Creating My First Laravel 10 Package

I Created A Laravel 10 OpenAI ChatGPT Composer Package

In my last tutorial, I created a Laravel site that featured an OpenAI ChatGPT API. This was very fun to create, and while I was thinking of ways to improve it, the idea dawned on me to make it a Composer package and share it with the world. Honestly, this took less time than I expected, and I have already integrated my package into a few applications (it feels good to composer require your own shit!).

What’s The Benefit?

1. Reusability

I know for a fact that I will be using OpenAI in a majority of my projects going forward. Instead of rewriting the same functionality over and over, I’m going to package up the functionality that I know I will need every time.

2. Modularity

Breaking my code into modules allows me to think about my applications from a high-level view. I call it Lego Theory; all of my modules are legos and my app is the lego castle I’m building.

3. Discoverability

Publishing packages directly helps my brand via discoverability. If I produce high-quality, in-demand open-source software then more people will use it, and the more people that use it then the more people know who I am. This helps me when I am doing things like applying for contracts or conference talks.

Creating The Code

Scaffold From Spatie

The wonderful engineers at Spatie have created a package skeleton for Laravel, which is what I used as a starting point for my Laravel package. If you are using GitHub you can use this repo as a template, or if you are using the command line enter the following command:

git clone git@github.com:spatie/package-skeleton-laravel.git laravel-open-api

There is a configure.php script that will replace all of the placeholder values with the values you provide for your package:

php ./configure.php

Now we can get to the nitty gritty.

Listen To Some Hacker Music While You Code

Follow me on Spotify I make Tech Trap music

The Front-Facing Object

After running the configure script you will have a main class named after your package; in my case it was called LaravelOpenaiApi.php and it looks like this:

<?php

namespace Mastashake\LaravelOpenaiApi;

use OpenAI\Laravel\Facades\OpenAI;
use Mastashake\LaravelOpenaiApi\Models\Prompt;

class LaravelOpenaiApi
{
  function generateResult(string $type, array $data): Prompt {
    switch ($type) {
      case 'text':
        return $this->generateText($data);
      case 'image':
        return $this->generateImage($data);
      default:
        throw new \InvalidArgumentException("Unsupported type: {$type}");
    }
  }

  function generateText($data) {
    $result = OpenAI::completions()->create($data);
    return $this->savePrompt($result, $data);
  }

  function generateImage($data) {
    $result = OpenAI::images()->create($data);
    return $this->savePrompt($result, $data);
  }

  private function savePrompt($result, $data): Prompt {
    $prompt = new Prompt([
      'prompt_text' => $data['prompt'],
      'data' => $result
    ]);
    $prompt->save(); // persist the prompt and its result
    return $prompt;
  }
}

It can generate text and images and save the prompts; it looks at the type provided to determine which resource to generate. It’s all powered by the OpenAI Laravel Facade.

The Migration

The default migration will be edited to use the prompts migration from the Laravel API tutorial, open it up and replace the contents with the following:

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    /**
     * Run the migrations.
     */
    public function up(): void
    {
        Schema::create('prompts', function (Blueprint $table) {
            $table->id();
            $table->string('prompt_text');
            $table->json('data');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     */
    public function down(): void
    {
        Schema::dropIfExists('prompts');
    }
};

The Model

Create a file called src/Models/Prompt.php and copy the old Prompt code inside

<?php

namespace Mastashake\LaravelOpenaiApi\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Prompt extends Model
{
    use HasFactory;

    protected $guarded = [];
    protected $casts = [
      'data' => 'array'
    ];
}

The Controller

For the controllers, we have to create a base Controller and a PromptController. Create a file called src/Http/Controllers/Controller.php (the class extends Laravel’s base controller under an alias):

<?php

namespace Mastashake\LaravelOpenaiApi\Http\Controllers;

use Illuminate\Foundation\Bus\DispatchesJobs;
use Illuminate\Routing\Controller as BaseController;
use Illuminate\Foundation\Validation\ValidatesRequests;
use Illuminate\Foundation\Auth\Access\AuthorizesRequests;

class Controller extends BaseController
{
    use AuthorizesRequests, DispatchesJobs, ValidatesRequests;
}

Now we will create our PromptController, which inherits from the base Controller. Create a file called src/Http/Controllers/PromptController.php:

<?php

namespace Mastashake\LaravelOpenaiApi\Http\Controllers;
use Illuminate\Http\Request;
use Mastashake\LaravelOpenaiApi\LaravelOpenaiApi;
class PromptController extends Controller
{
    //
    function generateResult(Request $request) {

      $ai = new LaravelOpenaiApi();
      $prompt = $ai->generateResult($request->type, $request->except(['type']));
      return response()->json([
        'data' => $prompt
      ]);
    }
}

OpenAI and ChatGPT can generate multiple types of responses, so we want the user to be able to choose which type of resource they want to generate then pass on that data to the underlying engine.

The Route

Create a routes/api.php file to store our api route:

<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

/*
|--------------------------------------------------------------------------
| API Routes
|--------------------------------------------------------------------------
|
| Here is where you can register API routes for your application. These
| routes are loaded by the RouteServiceProvider and all of them will
| be assigned to the "api" middleware group. Make something great!
|
*/
Route::group(['prefix' => '/api'], function(){
  if(config('openai.use_sanctum') == true){
    Route::middleware(['api','auth:sanctum'])->post(config('openai.api_url'),'Mastashake\LaravelOpenaiApi\Http\Controllers\PromptController@generateResult');
  } else {
    Route::post(config('openai.api_url'),'Mastashake\LaravelOpenaiApi\Http\Controllers\PromptController@generateResult');
  }
});

Depending on the values in the config file (we will get to it in a second, calm down) the user may want to use Laravel Sanctum for token-based authenticated requests. In fact, I highly suggest you do if you don’t want your token usage abused, but for development and testing I suppose it’s fine. I made it this way to make the package more robust and extensible.

The Config File

Create a file called config/openai.php that will hold the default config values for the package. This will be published into any application that you install this package in:

<?php

return [

    /*
    |--------------------------------------------------------------------------
    | OpenAI API Key and Organization
    |--------------------------------------------------------------------------
    |
    | Here you may specify your OpenAI API Key and organization. This will be
    | used to authenticate with the OpenAI API - you can find your API key
    | and organization on your OpenAI dashboard, at https://openai.com.
    */

    'api_key' => env('OPENAI_API_KEY'),
    'organization' => env('OPENAI_ORGANIZATION'),
    'api_url' => env('OPENAI_API_URL') !== null ? env('OPENAI_API_URL') : '/generate-result',
    'use_sanctum' => env('OPENAI_USE_SANCTUM') !== null ? env('OPENAI_USE_SANCTUM') == true : false

];
  • The api_key variable is the OpenAI API key
  • The organization variable is the OpenAI organization if one exists
  • The api_url variable is the user-defined URL for the API routes; if one is not defined then /api/generate-result is used
  • The use_sanctum variable defines if the API will use the auth:sanctum middleware
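
For example, a hypothetical .env setup (all values are placeholders) that keeps the default route but locks it behind Sanctum could look like this:

OPENAI_API_KEY=sk-...
OPENAI_ORGANIZATION=org-...
OPENAI_API_URL=/generate-result
OPENAI_USE_SANCTUM=true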

The Command

The package includes an artisan command for generating results from the command line. Create a file called src/Commands/LaravelOpenaiApiCommand.php

<?php

namespace Mastashake\LaravelOpenaiApi\Commands;

use Illuminate\Console\Command;
use Mastashake\LaravelOpenaiApi\LaravelOpenaiApi;
class LaravelOpenaiApiCommand extends Command
{
    public $signature = 'laravel-openai-api:generate-result';

    public $description = 'Generate Result';

    public function handle(): int
    {
        $data = [];
        $suffix = null;
        $n = 1;
        $temperature = 1;
        $displayJson = false;
        $max_tokens = 16;
        $type = $this->choice(
            'What are you generating?',
            ['text', 'image'],
            0
        );
        $prompt = $this->ask('Enter the prompt');
        $data['prompt'] = $prompt;

        if($type == 'text') {
          $model = $this->choice(
              'What model do you want to use?',
              ['text-davinci-003', 'text-curie-001', 'text-babbage-001', 'text-ada-001'],
              0
          );
          $data['model'] = $model;
          if ($this->confirm('Do you wish to add a suffix to the generated result?')) {
              //
              $suffix = $this->ask('What is the suffix?');
          }
          $data['suffix'] = $suffix;

          if ($this->confirm('Do you wish to set the max tokens used(defaults to 16)?')) {
            $max_tokens = (int)$this->ask('Max number of tokens to use?');
          }
          $data['max_tokens'] = $max_tokens;

          if ($this->confirm('Change temperature')) {
            $temperature = (float)$this->ask('What is the temperature(0-2)?');
            $data['temperature'] = $temperature;
          }
        }

        if ($this->confirm('Multiple results?')) {
          $n = (int)$this->ask('Number of results?');
          $data['n'] = $n;
        }

        $displayJson = $this->confirm('Display JSON results?');

        $ai = new LaravelOpenaiApi();
        $result = $ai->generateResult($type,$data);

        if ($displayJson) {
          $this->comment($result);
        }
        if($type == 'text') {
          $choices = $result->data['choices'];
          foreach($choices as $choice) {
            $this->comment($choice['text']);
          }
        } else {
          $images = $result->data['data'];
          foreach($images as $image) {
            $this->comment($image['url']);
          }
        }


        return self::SUCCESS;
    }
}

I’m going to add more inputs later, but for now this is a good starting point for getting data back. I tried to make it as verbose as possible, and I’m always welcoming PRs if you want to add functionality 🙂

The Service Provider

All Laravel packages must have a service provider. Open up the default one in the root directory; in my case it was called LaravelOpenaiApiServiceProvider:

<?php

namespace Mastashake\LaravelOpenaiApi;

use Spatie\LaravelPackageTools\Package;
use Spatie\LaravelPackageTools\PackageServiceProvider;
use Mastashake\LaravelOpenaiApi\Commands\LaravelOpenaiApiCommand;
use Spatie\LaravelPackageTools\Commands\InstallCommand;
class LaravelOpenaiApiServiceProvider extends PackageServiceProvider
{
    public function configurePackage(Package $package): void
    {
        /*
         * This class is a Package Service Provider
         *
         * More info: https://github.com/spatie/laravel-package-tools
         */
        $package
            ->name('laravel-openai-api')
            ->hasConfigFile(['openai'])
            ->hasRoute('api')
            ->hasMigration('create_openai_api_table')
            ->hasCommand(LaravelOpenaiApiCommand::class)
            ->hasInstallCommand(function(InstallCommand $command) {
                $command
                    ->publishConfigFile()
                    ->publishMigrations()
                    ->askToRunMigrations()
                    ->copyAndRegisterServiceProviderInApp()
                    ->askToStarRepoOnGitHub('mastashake08/laravel-openai-api');
                  }
                );
    }
}

The name is the name of our package; next we pass in the config file created above. Of course we have to add our API routes and migration. Lastly, we register our command and the install command.

Testing It In A Laravel Project

composer require mastashake08/laravel-openai-api

You can run that command in any Laravel project; I used it in the Laravel API tutorial I did last week. If you run php artisan route:list you will see the API route in your project!

Hey look mom it’s my package!!!

Check The Repo!

This is actually my first-ever Composer package! I would love feedback, stars, and PRs that would go a long way. You can check out the repo here on GitHub. Please let me know in the comments if this tutorial was helpful and share on social media.


Posted on Leave a comment

Creating A Laravel ChatGPT API


What Is ChatGPT?

ChatGPT is generative AI software created by OpenAI. From the official Wikipedia page:

ChatGPT (Chat Generative Pre-trained Transformer[2]) is a chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI’s GPT-3 family of large language models and has been fine-tuned (an approach to transfer learning) using both supervised and reinforcement learning techniques.

https://en.wikipedia.org/wiki/ChatGPT

Since its launch, it has taken the world by storm. People are losing their minds over the impending AI overlords destroying society and making humanity its slave 😂

Seriously though we are talking BILLIONS of weights across MILLIONS of neurons I geek out every time I think about it. I keep getting asked what my thoughts are on ChatGPT and instead of repeating myself for the 99934394398th time, I decided to write a blog and do a Laravel code tutorial to show developers just HOW EASY it is to use this OpenAI software.

Use Cases For ChatGPT

Asset Generation

With ChatGPT you can generate images. This was first made popular by the DALL-E project. AI-generated art is on the rise and there are many opportunities to be had. Images aren’t the end though; with ChatGPT you can generate any asset. Think about how this will affect gaming! You can generate 3D assets, audio assets, texture assets, and more.

Code Generation

In my opinion, this is where ChatGPT shines. The Codex project allows you to use ChatGPT to generate code, and the results are scarily amazing. If you are a solo developer you can leverage the power of artificial intelligence to speed-run through proofs of concept. I have seen videos of people programming whole apps with ChatGPT.

Text Generation

Using ChatGPT you can generate text. Many companies are integrating ChatGPT to create contextually accurate text responses. One of my favorite integrations of this is the Twitter bot ChatGPTBot. However, some people are scared of this technology, such as the Rabbi who used AI to create a sermon. I personally think e-commerce will be dominated by AI-driven product descriptions.

The Sky Is The Limit

Microsoft has already integrated ChatGPT into Bing and Google is working on a rival application called Bard. The beautiful thing about this technology is that it is limited only by our imagination. We don’t even know the full scope of what ChatGPT is capable of. This is a perfect storm for people to get first-mover advantage in the gold rush coming over the next decade.

How I Plan On Using ChatGPT

Mobisnacks

Download Mobisnacks!

I have integrated ChatGPT into MobiSnacks to create product descriptions for chefs. The chefs can put in keywords and ChatGPT spits out 3 descriptions for the chefs to use as a starting point. The next step is to use ChatGPT to generate contextual ads for the platform and for the chefs as an additional service.

GPT Audiobook


I created a proof of concept called GPT Audiobook. It uses ChatGPT to create audiobooks and spits them out as SSML documents for text-to-speech software to read. I’m currently creating an Android and iOS app to go with the web app. In the future, I plan on adding rich structured data snippets to display the books on Google and other search engines. Even the logo for GPT Audiobook was MADE WITH CHATGPT!

The Laravel ChatGPT API

Overview

The Laravel API will be very simple: one route, one model, and one controller. The model will be called Prompt; a prompt will have two fields, prompt_text and data. The controller will have one method called generateResult that will use the OpenAI SDK to communicate with ChatGPT and generate the result. Finally, there will be a POST route called /generate-result which saves the model and returns the JSON.

Listen To Some Hacker Music While You Code

Follow me on Spotify I make Tech Trap music

Creating The Application

For this tutorial, I am using a Mac with Docker. To start, open up the terminal and create a new Laravel application:

curl -s "https://laravel.build/laravel-chat-gpt-api" | bash

Afterward cd into the application and add the OpenAI Laravel package, which will power our ChatGPT logic.

composer require openai-php/laravel

This is the only composer requirement we will need for this tutorial. Now we need to do our configuration for OpenAI.

Configuring The OpenAI SDK

The OpenAI Laravel package comes with some config files that need to be published before we get started coding. In your terminal paste the following command

php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"

This will create a config/openai.php configuration file in your project, which you can modify to your needs using environment variables. Retrieve your OpenAI API key from your OpenAI dashboard and paste it into your .env file:

OPENAI_API_KEY=sk-...

Ok, that’s it for the SDK configuration.

Database & Model

The Prompt model will have a prompt_text field that will hold the text entered by the user. It will also have a data json field that holds the result from OpenAI. Let’s create the model and the migration all in one:

./vendor/bin/sail artisan make:model -m Prompt

Open up the created migration and paste in the following:

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    /**
     * Run the migrations.
     */
    public function up(): void
    {
        Schema::create('prompts', function (Blueprint $table) {
            $table->id();
            $table->string('prompt_text');
            $table->json('data');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     */
    public function down(): void
    {
        Schema::dropIfExists('prompts');
    }
};
 

Next open up the Prompt model and paste the following:

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Prompt extends Model
{
    use HasFactory;

    protected $guarded = [];
    protected $casts = [
      'data' => 'array'
    ];
}

Next, let’s move on to creating the controller.

Creating The Controller

Generate the PromptController in the terminal:

 ./vendor/bin/sail artisan make:controller PromptController 

Open it up and let’s create our generateResult function:

<?php

namespace App\Http\Controllers;
use OpenAI\Laravel\Facades\OpenAI;
use App\Models\Prompt;
use Illuminate\Http\Request;

class PromptController extends Controller
{
    //
    function generateResult(Request $request) {

      $result = OpenAI::completions()->create($request->all());
      $prompt = new Prompt([
        'prompt_text' => $request->prompt,
        'data' => $result
      ]);
      $prompt->save(); // persist the prompt, as described in the overview
      return response()->json($prompt);
    }
}

So what’s going on here? We import the OpenAI SDK and simply pass the $request to the completions API. If you need a reference you can check the OpenAI API reference. We then create a new Prompt model, passing in the prompt text and the resulting data. The last thing to do is create the route and we are done!

Creating The API Route

Open up the routes/api.php routes file and update it to call the PromptController@generateResult function

<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

/*
|--------------------------------------------------------------------------
| API Routes
|--------------------------------------------------------------------------
|
| Here is where you can register API routes for your application. These
| routes are loaded by the RouteServiceProvider and all of them will
| be assigned to the "api" middleware group. Make something great!
|
*/

Route::middleware('auth:sanctum')->get('/user', function (Request $request) {
    return $request->user();
});

Route::post('/generate-result','App\Http\Controllers\PromptController@generateResult');

Now we are done; make sure you plug in your API key and make a test request! Here is how we can test with cURL:


curl -X POST http://localhost/api/generate-result -H "Content-Type: application/json" --data-binary @- <<DATA
{
  "model": "text-davinci-003",
   "prompt" : "PHP is"
}
DATA

Next Steps

The next step that I want to do with this project is to create it as a Laravel package so developers can put an OpenAI ChatGPT API in their backends easily. Afterward, I would like to add functionality for issuing tokens and possibly even a monetization module powered by Stripe and Laravel Cashier. Please leave comments on this article and let me know what you would like to see and I will build it! You can see the GitHub repository here.

**UPDATE** I Created The Laravel Composer Package

Shortly after writing this tutorial, I went ahead and created a Composer package for the Laravel OpenAI ChatGPT API. If you want to implement this functionality from the tutorial and more then please check it out! I’m actively looking for PRs from fellow developers! I can’t wait to see how you all use and integrate this package into your web applications and business services!

You can install the package using the following command:

composer require mastashake08/laravel-openai-api 

Afterward you can publish the migrations and config files with the following commands:

php artisan vendor:publish --tag="openai-api-migrations"
php artisan migrate
php artisan vendor:publish --tag="openai-api-config"

Finally, start to use it in your code! You can access the object directly, via the included API routes, or with the interactive Artisan CLI command.

Via Code

$laravelOpenaiApi = new Mastashake\LaravelOpenaiApi\LaravelOpenaiApi();
echo $laravelOpenaiApi->generateResult($type, $data);

Via Artisan

php artisan laravel-openai-api:generate-result

Via API

You can set OPENAI_API_URL in the .env file; if a value is not set then it defaults to /api/generate-result.

/api/generate-result POST {openai_data}

The data object requires a type property that is set to either text or image. Depending on the type, provide the JSON referenced in the OpenAI API Reference.

Text Example

{
  "type": "text",
  "prompt": "Rust is",
  "n": 1,
  "model": "text-davinci-003",
  "max_tokens": 16
}

Image Example

{
  "type": "image",
  "prompt": "A cute baby sea otter",
  "n": 1,
  "size": "1024x1024"
}

I’m going to continue to work on both this package and the demo tutorial and will keep you all updated on the progress. Thank you for taking the time to read this tutorial; if you found it helpful please leave a comment and a like!


Posted on 2 Comments

SpeechKit: A Javascript Package For The Web Speech API (Speech Synthesis & Speech Recognition)

Speech Recognition & Speech Synthesis In The Browser With Web Speech API

Voice apps are now first-class citizens on the web thanks to the Speech Recognition and Speech Synthesis interfaces, which are part of the bigger Web Speech API. Taken from the MDN docs:

The Web Speech API makes web apps able to handle voice data. There are two components to this API:

  • Speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize voice context from an audio input (normally via the device’s default speech recognition service) and respond appropriately. Generally you’ll use the interface’s constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device’s microphone. The SpeechGrammar interface represents a container for a particular set of grammar that your app should recognize. Grammar is defined using JSpeech Grammar Format (JSGF.)
  • Speech synthesis is accessed via the SpeechSynthesis interface, a text-to-speech component that allows programs to read out their text content (normally via the device’s default speech synthesizer.) Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method.
Brief on Web Speech API from MDN

So basically with the Web Speech API you can work with voice data. You can make your apps speak to its users and you can run commands based on what your user speaks. This opens up a host of opportunities for voice-activated CLIENT-SIDE apps. I love building open-source software, so I decided to create an NPM package to work with the Web Speech API called SpeechKit and I couldn’t wait to share it with you! I suppose this is a continuation of Creating A Voice Powered Note App Using Web Speech
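
Before diving into the package, here is a bare-bones sketch of the raw interfaces it wraps (note that SpeechRecognition is still prefixed as webkitSpeechRecognition in Chromium browsers):

// Speech synthesis: make the page talk
const utterance = new SpeechSynthesisUtterance('Hello #CodeLife!')
window.speechSynthesis.speak(utterance)

// Speech recognition: transcribe microphone input
const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition
const recognition = new Recognition()
recognition.onresult = (event) => {
  console.log(event.results[0][0].transcript)
}
recognition.start()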

Simplifying The Process With SpeechKit

I decided starting this year I would contribute more to the open-source community and provide packages (primarily Javascript, PHP, and Rust) to the world to use. I use the Web Speech API a lot in my personal projects and so why not make it an NPM package? You can find the source code here.

Listen To Some Hacker Music While You Code

Follow me on Spotify I make Tech Trap music

Features

  • Speak Commands
  • Listen for voice commands
  • Add your own grammar
  • Transcribe words and output as file.
  • Generate SSML from text

Install the package from NPM:

npm install @mastashake08/speech-kit

Import

import SpeechKit from '@mastashake08/speech-kit'

Instantiate A New Instance

new SpeechKit(options)

listen()

Start listening for speech recognition.

stopListen()

Stop listening for speech recognition.

speak(text)

Use Speech Synthesis to speak text.

Params: string text – Text to be spoken

getResultList() ⇒ SpeechRecognitionResultList

Get current SpeechRecognition resultsList.

Returns: SpeechRecognitionResultList – List of Speech Recognition results

getText() ⇒ string

Return text

Returns: string – resultList as text string

getTextAsFile() ⇒ Blob

Return text file with results.

Returns: Blob – transcript

getTextAsJson() ⇒ object

Return text as JSON.

Returns: object – transcript

addGrammarFromUri()

Add grammar to the SpeechGrammarList from a URI.

Params: string uri – URI that contains grammar

addGrammarFromString()

Add grammar to the SpeechGrammarList from a Grammar String.

Params: string grammar – String containing grammar

getGrammarList() ⇒ SpeechGrammarList

Return current SpeechGrammarList.

Returns: SpeechGrammarList – current SpeechGrammarList object

getRecognition() ⇒ SpeechRecognition

Return the current SpeechRecognition object.

Returns: SpeechRecognition – current SpeechRecognition object

getSynth() ⇒ SpeechSynthesis

Return the current Speech Synthesis object.

Returns: SpeechSynthesis – current instance of Speech Synthesis object

getVoices() ⇒ Array<SpeechSynthesisVoice>

Return the current voices available to the user.

Returns: Array<SpeechSynthesisVoice> – Array of available Speech Synthesis Voices

setSpeechText()

Set the SpeechSynthesisUtterance object with the text that is meant to be spoken.

Params: string text – Text to be spoken

setSpeechVoice()

Set the SpeechSynthesisVoice object with the desired voice.

Params: SpeechSynthesisVoice voice – Voice to be spoken

getCurrentVoice() ⇒ SpeechSynthesisVoice

Return the current voice being used in the utterance.

Returns: SpeechSynthesisVoice – current voice
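
Putting a few of these together, here is a minimal sketch; the {rate: 0.85} option and the onspeechkitresult event follow the demo app shown below:

import SpeechKit from '@mastashake08/speech-kit'

const sk = new SpeechKit({ rate: 0.85 })

// Transcripts are dispatched on the document as 'onspeechkitresult' events
document.addEventListener('onspeechkitresult', (e) => {
  sk.speak(`You said: ${e.detail.transcript}`)
})

sk.listen() // start recognizing; call sk.stopListen() when finished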

Example Application

In this example Vue.js application there will be a text box with three buttons underneath. When the user clicks the listen button, SpeechKit will start listening to the user. As speech is detected, the text will appear in the text box. The first button under the textbox will tell the browser to share the page, the second button will speak the text in the textbox, and the third button will control recording.

Home page from the github.io page

I created this in Vue.js and (for sake of time and laziness) I reused all of the default components and rewrote the HelloWorld component. So let’s get started by creating a new Vue application.

Creating The Application

Open up your terminal and input the following command to create a new vue application:

vue create speech-kit-demo

It doesn’t really matter what settings you choose; after you get that squared away, it is time to add our dependency.

Installing SpeechKit

Still inside your terminal we will add the SpeechKit dependency to our package.json file with the following command:

npm install @mastashake08/speech-kit

Now with that out of the way we can begin creating our component functionality.

Editing HelloWorld.vue

Open up your HelloWorld.vue file in your components/ folder and change it to look like this:

<template>
  <div class="hello">
    <h1>{{ msg }}</h1>
    <p>
      Simple demo to demonstrate the Web Speech API using the
      <a href="https://github.com/@mastashake08/speech-kit" target="_blank" rel="noopener">SpeechKit npm package</a>!
    </p>
    <textarea v-model="voiceText"/>
    <ul>
      <button @click="share" >Share</button>
      <button @click="speak">Speak</button>
      <button @click="listen" v-if="!isListen">Listen</button>
      <button @click="stopListen" v-else>Stop Listen</button>
    </ul>
  </div>
</template>

<script>
import SpeechKit from '@mastashake08/speech-kit'
export default {
  name: 'HelloWorld',
  props: {
    msg: String
  },
  mounted () {
    this.sk = new SpeechKit({rate: 0.85})
    document.addEventListener('onspeechkitresult', (e) => this.getText(e))
  },
  data () {
    return {
      voiceText: 'SPEAK ME',
      sk: {},
      isListen: false
    }
  },
  methods: {
    share () {
      const text = `Check out the SpeechKit Demo and speak this text! ${this.voiceText} ${document.URL}`
      try {
        if (!navigator.canShare) {
          this.clipBoard(text)
        } else {
          navigator.share({
            text: text,
            url: document.URL
          })
        }
      } catch (e) {
        this.clipBoard(text)
      }
    },
    async clipBoard (text) {
      const type = "text/plain";
      const blob = new Blob([text], { type });

      const data = [new window.ClipboardItem({ [type]: blob })];
      await navigator.clipboard.write(data)
      alert ('Text copied to clipboard')
    },
    speak () {
      this.sk.speak(this.voiceText)
    },
    listen () {
      this.sk.listen()
      this.isListen = !this.isListen
    },
    stopListen () {
      this.sk.stopListen()
      this.isListen = !this.isListen
    },
    getText (evt) {
      this.voiceText = evt.detail.transcript
    }
  }
}
</script>

<!-- Add "scoped" attribute to limit CSS to this component only -->
<style scoped>
h3 {
  margin: 40px 0 0;
}
ul {
  list-style-type: none;
  padding: 0;
}
li {
  display: inline-block;
  margin: 0 10px;
}
a {
  color: #42b983;
}
</style>

As you can see, almost all of the functionality is offloaded to the SpeechKit library. You can see a live version of this at https://mastashake08.github.io/speech-kit-demo/. In the mounted() hook we initialize our SpeechKit instance and add an event listener on the document for the onspeechkitresult event emitted by the SpeechKit class, which is dispatched every time there is an available transcript from speech recognition. The listen() and stopListen() functions simply call the SpeechKit functions and toggle a boolean indicating recording is in progress. Finally, the share() function uses the Web Share API to share the URL if available; otherwise it falls back to the Clipboard API, copying the text to the user’s clipboard for manual sharing.

Want To See More Tutorials?

Join my newsletter and get weekly updates from my blog delivered straight to your inbox.

Check The Shop!

Consider purchasing an item from the #CodeLife shop, all proceeds go towards our coding initiatives.
