I love writing Discord bots; my favorite so far has been the Discord Twitter Bot. Now that the ChatGPT API is available to developers, I decided to create an OpenAI ChatGPT Discord bot in Node.js and share the source code with you. This bot adds a slash command to your server called /generate-prompt that takes in a prompt string, and the bot returns a result using the Text Completion API.
MastaGPT Bot in action
The Source Code
This is a dockerized Node.js application that uses GitHub Actions to deploy the Docker container to Docker Hub and the GitHub Container Registry. The index.js file loads an instance of OpenAI and Discord.js, loads the slash commands from a commands directory, and registers them with Discord. It then listens for interactions (i.e., a user invoking the slash command) and calls the generate method, which uses the gpt-3.5-turbo OpenAI language model to generate a response and reply to that message in Discord.
Listen To Some Hacker Music While You Code
Follow me on Spotify I make Tech Trap music
Package.json
Below is an example of how you might want your package.json file to look.
{
"name": "discord-gpt-bot",
"version": "1.1.0",
"description": "Add ChatGPT to your Discord server. Responds with a ChatGPT generated text when @.",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"repository": {
"type": "git",
"url": "git+https://github.com/mastashake08/discord-gpt-bot.git"
},
"keywords": [
"OpenAI",
"ChatGPT",
"gpt",
"discord",
"bots",
"discord"
],
"author": "Mastashake08",
"license": "GPL3",
"bugs": {
"url": "https://github.com/mastashake08/discord-gpt-bot/issues"
},
"homepage": "https://github.com/mastashake08/discord-gpt-bot#readme",
"dependencies": {
"discord.js": "^14.7.1",
"dotenv": "^16.0.3",
"openai": "^3.2.1"
}
}
Index File
This is where most of the magic happens. The index.js file loads our slash commands, starts OpenAI, and starts the Discord.js instance. All secret keys and tokens are loaded from a .env file using the dotenv package. The generate function makes a call to the OpenAI API, which returns our completion.
require('dotenv').config()
const { Client, Events, Collection, REST, Routes } = require('discord.js');
const fs = require('node:fs');
const path = require('node:path');
const { Configuration, OpenAIApi } = require("openai")
const client = new Client({ intents: 2048 })
client.commands = new Collection()
const configuration = new Configuration({
apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
async function generate(prompt, model="gpt-3.5-turbo") {
// gpt-3.5-turbo is a chat model, so we call createChatCompletion
// (createCompletion rejects chat models)
const completion = await openai.createChatCompletion({
model: model,
messages: [{ role: "user", content: prompt }]
});
const text = completion.data.choices[0].message.content
return text;
}
// start the discord client and listen for messages
const commandsPath = path.join(__dirname, 'commands');
const commandFiles = fs.readdirSync(commandsPath).filter(file => file.endsWith('.js'));
const commands = []
// Grab the SlashCommandBuilder#toJSON() output of each command's data for deployment
for (const file of commandFiles) {
const command = require(`./commands/${file}`);
commands.push(command.data.toJSON());
}
// Construct and prepare an instance of the REST module
const rest = new REST({ version: '10' }).setToken(process.env.DISCORD_TOKEN);
// and deploy your commands!
(async () => {
try {
console.log(`Started refreshing ${commands.length} application (/) commands.`);
// The put method is used to fully refresh all commands in the guild with the current set
const data = await rest.put(
Routes.applicationGuildCommands(process.env.DISCORD_CLIENT_ID, process.env.DISCORD_GUILD_ID),
{ body: commands },
);
console.log(`Successfully reloaded ${data.length} application (/) commands.`);
} catch (error) {
// And of course, make sure you catch and log any errors!
console.error(error);
}
})();
client.login(process.env.DISCORD_TOKEN)
client.on(Events.InteractionCreate, async interaction => {
if (!interaction.isChatInputCommand()) return;
if (interaction.commandName === 'generate-prompt') {
// Defer the reply: OpenAI can take longer than Discord's 3-second window
await interaction.deferReply();
const res = await generate(interaction.options.getString('prompt'))
await interaction.editReply({ content: res });
}
});
Commands
I created a commands directory and inside created a prompt.js file. This file is responsible for using the SlashCommandBuilder class from Discord.js to create our command and options.
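The contents of prompt.js aren't reproduced here, so below is a hedged sketch of what it might look like. The real file uses discord.js's SlashCommandBuilder (shown in the comments); the runnable plain object mirrors the JSON that SlashCommandBuilder#toJSON() produces for a command like this, which is the shape the registration loop in index.js consumes.

```javascript
// Hypothetical sketch of commands/prompt.js. The real file likely builds
// the command with discord.js, roughly:
//
//   const { SlashCommandBuilder } = require('discord.js');
//   module.exports = {
//     data: new SlashCommandBuilder()
//       .setName('generate-prompt')
//       .setDescription('Generate a response with OpenAI')
//       .addStringOption(opt => opt
//         .setName('prompt')
//         .setDescription('The prompt text')
//         .setRequired(true)),
//   };
//
// The equivalent registration payload as a plain object:
const json = {
  name: 'generate-prompt',
  description: 'Generate a response with OpenAI',
  options: [
    {
      type: 3, // 3 = STRING in Discord's application command option types
      name: 'prompt',
      description: 'The prompt text',
      required: true,
    },
  ],
};

// Export the same shape index.js expects from command.data.toJSON()
if (typeof module !== 'undefined') {
  module.exports = { data: { toJSON: () => json } };
}
```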
I created a Dockerfile so that anyone can run this application without having to build from source. It creates a Node 16 image, copies the code files over, runs npm install, then runs the command. Passing an --env-file flag to the docker run command supplies a .env file to the script.
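The Dockerfile itself isn't reproduced here; based on the description above (Node 16 image, copy files, npm install, run the command), it presumably looks something like this sketch:

```dockerfile
# Hypothetical sketch of the Dockerfile described above
FROM node:16

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install

# Copy the rest of the source and start the bot
COPY . .
CMD ["node", "index.js"]
```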
Automating the build process is the final part of the project. Whenever I release a new tagged version of the code, the GitHub Action packages the Docker image and publishes it to Docker Hub as well as the GitHub Container Registry. From there I either run the Docker image locally on my Raspberry Pi or run it in the cloud.
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.
name: Publish Docker image to Docker Hub
on:
release:
types: [published]
jobs:
push_to_registry:
name: Push Docker image to Docker Hub
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v3
- name: Log in to Docker Hub
uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
with:
images: ${{ github.repository }}
- name: Build and push Docker image
uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# GitHub recommends pinning actions to a commit SHA.
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.
name: Create and publish a Docker image to Github Packages
on:
release:
types: [published]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Log in to the Container registry
uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
Usage
Via Node
cp .env.example .env
# Set variables
DISCORD_TOKEN=
DISCORD_CHANNEL_ID=
DISCORD_CLIENT_ID=
DISCORD_GUILD_ID=
OPENAI_API_KEY=
# Call program
node index.js
Via Docker
docker run --env-file=<PATH TO .ENV> -d --name=<NAME> mastashake08/discord-gpt-bot:latest
Jyrone Parker is an American software engineer and entrepreneur from Louisville, KY. He is the owner and CEO of J Computer Solutions LLC, a full-scale IT agency that deals with hardware, networking, and software development. He is also a tech rapper and producer that goes by the name Mastashake.
Follow Me On Youtube!
Follow my YouTube account
Become A Sponsor
Open-source work is free to use but it is not free to develop. If you enjoy my content and would like to see more please consider becoming a sponsor on Github or Patreon! Not only do you support me but you are funding tech programs for at risk youth in Louisville, Kentucky.
Join The Newsletter
By joining the newsletter, you get first access to all of my blogs, events, and other brand-related content delivered directly to your inbox. It’s 100% free and you can opt out at any time!
Check The Shop
You can also consider visiting the official #CodeLife shop! I have my own clothing/accessory line for techies as well as courses designed by me covering a range of software engineering topics.
IoT has been increasing in relevance over the past decade. The idea of connecting physical devices to a website has always intrigued me. Bringing the physical world into the digital in my opinion brings us closer together globally.
Our future
In this tutorial, I will go over how to build the WebIOT package so you can add IoT capabilities to your JavaScript applications. Please like, comment, and share this article; it really helps.
Introducing the WebIoT NPM Package
Following my FOSS commitment for 2023 (check out SpeechKit and Laravel OpenAI API), my March package is called WebIOT. It brings together a collection of Web APIs to give developers easy functions for interacting with IoT devices. It has classes for NFC, Bluetooth, Serial, and USB. I am actively looking for PRs and constructive criticism.
Web NFC
The Web NFC API is a cool API that allows for interaction with NFC chips. It consists of a message, a reader and a record.
The Web NFC API allows exchanging data over NFC via light-weight NFC Data Exchange Format (NDEF) messages.
You can get NFC chips really cheap on Amazon (use this link and I get a commission!). You can use NFC for all sorts of cool interactive things.
Extend offline activity: NFC is an offline technology; it doesn't need to be connected to a network in order to exchange data. It gets its electricity from the close-contact radio wave exchange hitting the wire (yay physics). A cool implementation is adding real-world nodes for your web game: when users tap one they get a special prize.
IoT device configurations: You can have users on your website get configuration data for your IoT devices without them having to download any additional software. This is extremely useful when paired with the Web Bluetooth API for GATT server configs.
Sending data to devices: NFC is a secure way to write data to your IoT devices and let them handle the processing.
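To make the NDEF message format concrete, here is a short sketch (plain JavaScript, independent of the Web NFC API) of how an NDEF text record's payload is laid out: a status byte whose low six bits hold the language-code length, then the ISO language code, then the UTF-8 text.

```javascript
// Sketch: encode/decode the payload of an NDEF "text" record (type 'T').
// Layout: [status byte][ISO language code][UTF-8 text]; the low 6 bits of
// the status byte hold the language-code length.
function encodeTextPayload(text, lang = 'en') {
  const enc = new TextEncoder();
  const langBytes = enc.encode(lang);
  const textBytes = enc.encode(text);
  const payload = new Uint8Array(1 + langBytes.length + textBytes.length);
  payload[0] = langBytes.length; // UTF-16 flag (bit 7) left at 0 for UTF-8
  payload.set(langBytes, 1);
  payload.set(textBytes, 1 + langBytes.length);
  return payload;
}

function decodeTextPayload(payload) {
  const langLength = payload[0] & 0x3f;
  const dec = new TextDecoder();
  return {
    lang: dec.decode(payload.slice(1, 1 + langLength)),
    text: dec.decode(payload.slice(1 + langLength)),
  };
}
```

A record written with NDEFReader.write() carries this payload under the hood when you write a plain string.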
Web Bluetooth
The Web Bluetooth API provides the ability to connect and interact with Bluetooth Low Energy peripherals.
The Web Bluetooth API allows developers to connect to Bluetooth LE devices and read and write data. Some useful implementations of Web Bluetooth:
Get local device updates
Run webpage functionality based on device state, e.g., a heart monitor making an animation run at a certain BPM
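Under the hood, reading a GATT characteristic resolves with a DataView; for the standard battery_level characteristic the value is a single byte from 0 to 100. A quick sketch of parsing it, with a simulated read so no Bluetooth hardware is needed:

```javascript
// Sketch: parse a Bluetooth GATT battery_level value. readValue() on a
// characteristic resolves with a DataView; battery_level is one uint8.
function parseBatteryLevel(dataView) {
  return dataView.getUint8(0); // percentage, 0-100
}

// Simulate the DataView a real characteristic read would hand back:
const fakeRead = new DataView(new Uint8Array([87]).buffer);
const level = parseBatteryLevel(fakeRead);
console.log(`Battery at ${level}%`); // prints "Battery at 87%"
```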
Web Serial
The Web Serial API provides a way for websites to read from and write to serial devices. These devices may be connected via a serial port, or be USB or Bluetooth devices that emulate a serial port.
Web Serial allows us to connect to generic serial ports and interact with our devices. This means we can do things like connect our webpages to embedded devices such as a Raspberry Pi.
Web USB
The WebUSB API provides a way to expose non-standard Universal Serial Bus (USB) compatible devices services to the web, to make USB safer and easier to use.
The Web USB API allows us to work directly with USB peripherals and, by extension, if those devices have programs, run them.
Current Browser Limitations
Most of these APIs cannot be used on iOS currently.
The Source Code
Init The Project
Create a new project and run the npm init script:
mkdir web-iot && cd web-iot
npm init
The package.json looks like this:
{
"name": "@mastashake08/web-iot",
"version": "1.0.0",
"description": "Connect to your IoT devices via usb, serial, NFC or Bluetooth",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"repository": {
"type": "git",
"url": "git+https://github.com/mastashake08/web-iot.git"
},
"keywords": [
"iot",
"webserial",
"bluetooth",
"webusb",
"nfc"
],
"author": "Mastashake",
"license": "MIT",
"bugs": {
"url": "https://github.com/mastashake08/web-iot/issues"
},
"homepage": "https://github.com/mastashake08/web-iot#readme"
}
The index.js file
This is the entry point for the package, and it simply exports our classes:
import { WebIOT } from './classes/WebIOT'
import { NFCManager } from './classes/NFCManager'
import { BluetoothManager } from './classes/BluetoothManager'
import { SerialManager } from './classes/SerialManager'
import { USBManager } from './classes/USBManager'
export {
WebIOT,
NFCManager,
BluetoothManager,
SerialManager,
USBManager
}
The WebIOT Class
This is the base class for all our managers. It contains functions for sending data to a remote server, functionality that's usually needed when dealing with IoT devices. The NFCManager shown below extends it to wrap the Web NFC API.
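The WebIOT base class itself isn't reproduced in this excerpt. A minimal sketch of what such a base class might look like, assuming a debug flag and a fetch-based sendData helper (the names here are illustrative; the published package may differ):

```javascript
// Hypothetical sketch of classes/WebIOT.js -- the base class the managers
// extend. sendData posts device readings to a remote server.
// (Declared with `export class` in the real package file.)
class WebIOT {
  constructor (debug = false) {
    this.debug = debug
  }

  log (...args) {
    // Only log when debugging is enabled
    if (this.debug) console.log('[WebIOT]', ...args)
  }

  async sendData (url, data, options = {}) {
    this.log('sending', data, 'to', url)
    return fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(data),
      ...options
    })
  }
}
```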
import { WebIOT } from './WebIOT'
export class NFCManager extends WebIOT {
nfc = null
constructor (debug = false) {
super(debug)
if ('NDEFReader' in window) { /* Scan and write NFC tags */
this.nfc = new NDEFReader()
} else {
alert('NFC is not supported in your browser')
}
}
startNFC () {
this.nfc = new NDEFReader()
}
async readNFCData (readCb, errorCb = (event) => console.log(event)) {
this.nfc.onreading = readCb // pass the handler by reference; don't invoke it
this.nfc.onreadingerror = errorCb
await this.nfc.scan()
}
async writeNFCData (records, errorCb = (event) => console.log(event)) {
try {
await this.nfc.write(records)
} catch (e) {
errorCb(e)
}
}
async lockNFCTag(errorCb = (event) => console.log(event)) {
try {
await this.nfc.makeReadOnly()
} catch(e) {
errorCb(e)
}
}
static generateNFC () {
return new NDEFReader()
}
}
Using It In Action
USB Example
Select a device and read/write data to pin 4
import { USBManager } from '@mastashake08/web-iot'
....
const usb = new USBManager()
// get devices
const devices = usb.getDevices()
// request a single device
let device = usb.requestDevice()
// open device after connecting to it
device = usb.openDevice()
// read 64 bytes of data from pin 4 on device
const readData = usb.readData(4, 64)
// write 64 bytes of data to pin 4
usb.writeData(4, new Uint8Array(64))
Serial Example
Have a user select a serial device and write 64 bytes of data to it
import { SerialManager } from '@mastashake08/web-iot'
....
const serial = new SerialManager()
// get a port
const port = serial.requestPort()
// read data
const data = serial.readData()
// write 64 bytes data
serial.writeData(new Uint8Array(64))
Bluetooth Example
Let a user select a Bluetooth device and get the battery level
import { BluetoothManager } from '@mastashake08/web-iot'
....
const bt = new BluetoothManager()
// get a device
const device = bt.requestDevice(options)
// get services
const services = bt.getServices()
// get battery service
const service = bt.getService('battery_service')
// get battery level characteristic
const char = bt.getCharacteristic('battery_level')
// get battery value
const battery = bt.getValue()
//write value
bt.writeValue(new Uint8Array(64))
NFC Example
Read, Write, and lock tags
import { NFCManager } from '@mastashake08/web-iot'
....
// start NFC
const nfc = new NFCManager()
nfc.startNFC()
//Read a tag
const data = nfc.readNFCData(successCb, errorCb)
const writeData = "Hello World"
// Write to a tag
nfc.writeNFCData(writeData)
// Lock tag
nfc.lockNFCTag(errorCb)
Did You Enjoy This Tutorial?
If so please leave a comment and like this article and please share it on social media! I post weekly so please come back for more content!
Of all my favorite HTML APIs the WebTransport API is definitely TOP 3. MDN explains the WebTransport API as such:
The WebTransport interface of the WebTransport API provides functionality to enable a user agent to connect to an HTTP/3 server, initiate reliable and unreliable transport in either or both directions, and close the connection once it is no longer needed.
It allows a web page to connect to an HTTP/3 server and send and receive UDP datagrams, either reliably or unreliably. Seriously, why is no one talking about this?! In this article/tutorial, I will go through the process of creating an NPM package called ShakePort.
Real-life image of me raging on the lack of WebTransport talk!
HTTP/3 can be significantly faster than HTTP/1.1 and has much lower latency than its predecessors. It is built on QUIC, which transfers data using UDP instead of TCP. What this means for time-sensitive applications is faster data and faster execution.
Dissecting The WebTransport Class
Constructor
The constructor for WebTransport takes in a url and an options parameter. The URL points to an instance of an HTTP/3 server to connect to. The options parameter is an optional JSON object.
url
A string representing the URL of the HTTP/3 server to connect to. The scheme needs to be HTTPS, and the port number needs to be explicitly specified.
options Optional
An object containing the following properties:
serverCertificateHashes Optional
An array of WebTransportHash objects. If specified, it allows the website to connect to a server by authenticating the certificate against the expected certificate hash instead of using the Web public key infrastructure (PKI). This feature allows Web developers to connect to WebTransport servers that would normally find obtaining a publicly trusted certificate challenging, such as hosts that are not publicly routable, or ephemeral hosts like virtual machines.
WebTransportHash objects contain two properties:
algorithm
A string representing the algorithm to use to verify the hash. Any hash using an unknown algorithm will be ignored.
value
A BufferSource representing the hash value.
We instantiate a new instance of WebTransport with the following command
new WebTransport(url, options)
Instance Properties
The closed read-only property of the WebTransport interface returns a promise that resolves when the transport is closed.
The datagrams read-only property of the WebTransport interface returns a WebTransportDatagramDuplexStream instance that can be used to send and receive datagrams — unreliable data transmission.
“Unreliable” means that transmission of data is not guaranteed, nor is arrival in a specific order. This is fine in some situations and provides very fast delivery. For example, you might want to transmit regular game state updates where each message supersedes the last one that arrives, and order is not important.
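The "each message supersedes the last" pattern described above is easy to sketch: tag each datagram with a sequence number and drop anything older than the newest state you've seen. This is plain JavaScript and assumes nothing about the transport itself:

```javascript
// Sketch: keep only the newest game-state update from an unreliable,
// unordered datagram stream by tagging each message with a sequence number.
class LatestState {
  constructor () {
    this.seq = -1
    this.state = null
  }

  // Returns true if the message was accepted, false if it was stale
  accept (msg) {
    if (msg.seq <= this.seq) return false // duplicate or out-of-order: drop
    this.seq = msg.seq
    this.state = msg.state
    return true
  }
}

const latest = new LatestState()
latest.accept({ seq: 1, state: { x: 0 } })
latest.accept({ seq: 3, state: { x: 5 } })
latest.accept({ seq: 2, state: { x: 2 } }) // arrives late, ignored
console.log(latest.state) // { x: 5 }
```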
The incomingBidirectionalStreams read-only property of the WebTransport interface represents one or more bidirectional streams opened by the server. Returns a ReadableStream of WebTransportBidirectionalStream objects. Each one can be used to reliably read data from the server and write data back to it.
The incomingUnidirectionalStreams read-only property of the WebTransport interface represents one or more unidirectional streams opened by the server. Returns a ReadableStream of WebTransportReceiveStream objects. Each one can be used to reliably read data from the server.
The ready read-only property of the WebTransport interface returns a promise that resolves when the transport is ready to use.
Instance Methods
The close() method of the WebTransport interface closes an ongoing WebTransport session.
The createBidirectionalStream() method of the WebTransport interface opens a bidirectional stream; it returns a WebTransportBidirectionalStream object containing readable and writable properties, which can be used to reliably read from and write to the server.
The createUnidirectionalStream() method of the WebTransport interface opens a unidirectional stream; it returns a WritableStream object that can be used to reliably write data to the server.
Use Cases For WebTransport
Web Gaming
WebTransport allows not only for better-performing multiplayer web games, but it also allows for CROSS-SYSTEM PLAY! Since WebTransport is an HTTP/3 web standard, you can implement it on the web, PlayStation, Xbox, and whatever else may come in the future! You may be wondering why use WebTransport instead of WebRTC. For 1v1 multiplayer games WebRTC will do just fine, but what if you want to build a Battle Royale-style game that's 50 v 50? With WebRTC you will run into latency issues because WebRTC is peer-to-peer, whereas WebTransport is client-server. It also uses UDP packets, so strict ordering is not enforced, which is what you want for gaming.
WebTransport is a major win for gaming in the web.
IoT
With WebTransport, communicating with IoT devices via the web just got a whole lot easier. You can manage a fleet of hardware devices, get analytical data, and issue commands in real time. I am currently using a WebTransport server on my Raspberry Pi 4 and a Vue frontend to remotely control my Pi!
Machine Learning
Machine learning requires large datasets. By utilizing WebTransport you can send user data to your server in real time to get better insights from your machine learning models. Take, for example, a recommendation engine: as the user browses the site, you send data in real time based on what they are looking at. The faster you can collect and analyze data, the more profitable your company can become.
Pub/Sub
Using the WebTransport API you can implement a pub/sub system. The simplest use case would be a notification engine (think game HUD updates in multiplayer). You can also do things like implement real-time tickers instead of relying on long-polling techniques.
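A minimal in-memory sketch of the pub/sub dispatcher described above (the datagram plumbing is omitted; this just shows the subscribe/publish bookkeeping):

```javascript
// Sketch: a tiny topic-based pub/sub dispatcher. In a real app, publish()
// would be driven by datagrams arriving over a WebTransport session.
class PubSub {
  constructor () {
    this.topics = new Map()
  }

  subscribe (topic, handler) {
    if (!this.topics.has(topic)) this.topics.set(topic, new Set())
    this.topics.get(topic).add(handler)
    // Return an unsubscribe function
    return () => this.topics.get(topic).delete(handler)
  }

  publish (topic, payload) {
    for (const handler of this.topics.get(topic) ?? []) handler(payload)
  }
}

const bus = new PubSub()
const seen = []
const unsub = bus.subscribe('hud', (msg) => seen.push(msg))
bus.publish('hud', 'player2 joined')
unsub()
bus.publish('hud', 'ignored after unsubscribe')
console.log(seen) // ['player2 joined']
```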
Why I Created ShakePort
I created ShakePort as a basis for my real-time apps and, more importantly, because I'm building a game and need multiplayer networking. I decided at the beginning of the year I would post on average one open-source package a month. So far I'm at four! My philosophy is: if you find yourself writing certain code over and over, just package that sh!t up and release it to the world. Chances are there are other developers who you are helping, OR you could find devs to help make your package even better! The ShakePort suite is made up of a client (ShakePort Client) and a server (ShakePort Server). This tutorial will focus on the ShakePort Client; I will post the ShakePort Server tutorial later in the year once I finish it.
The Code
This package is actually very simple, it only consists of two files:
A WebWorker file
The ShakePortClient class
Almost all of the heavy work is offloaded to the WebWorker to optimize speed and performance, and it uses postMessage() to send data to the main application. This way the developer has full control over how to deal with the datagrams.
Scaffolding
Create a new directory and run the npm init command to create a new NPM package:
mkdir shakeport-client
cd shakeport-client && npm init
The WebWorker
Create a worker.js file in the root of the project and input the following:
let transport, stream = null
onmessage = async (e) => {
try {
switch(e.data.event) {
case 'start':
transport = await initTransport(e.data.url, e.data.options)
// WebTransport instances can't be structured-cloned, so just signal readiness
postMessage({event: 'start'})
break;
case 'setup-bidirectional':
stream = await setUpBidirectional()
readData(stream.readable)
break;
case 'write-bidirectional':
writeData(stream.writable, e.data.data)
break;
case 'data':
break;
case 'stop':
closeTransport(transport)
break;
}
} catch {
postMessage({event: 'error'});
}
}
async function initTransport(url, options = {}) {
// Initialize transport connection
const transport = new WebTransport(url, options);
// The connection can be used once ready fulfills
await transport.ready;
return transport
}
async function readData(readable) {
const reader = readable.getReader();
while (true) {
const {value, done} = await reader.read();
if (done) {
break;
}
// value is a Uint8Array.
postMessage({event: 'data-read', data:value});
}
}
async function writeData(writable, data) {
const writer = writable.getWriter();
await writer.write(data)
postMessage({event: 'data-written', data: data})
}
async function setUpBidirectional() {
const stream = await transport.createBidirectionalStream();
// stream is a WebTransportBidirectionalStream
// stream.readable is a ReadableStream
// stream.writable is a WritableStream
return stream
}
async function setUpUnidirectional() {
const stream = await transport.createUnidirectionalStream();
// stream is a WritableStream that can be used to send data to the server
return stream
}
async function closeTransport(transport) {
// Respond to connection closing
try {
await transport.closed;
console.log('The HTTP/3 connection closed gracefully.');
} catch(error) {
console.error(`The HTTP/3 connection closed due to ${error}.`);
}
}
The ShakePortClient Class
Create a class file called ShakePortClient.js in the root directory and fill it in:
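The full class isn't included in this excerpt, so here is a hypothetical sketch of the wrapper, assuming it spins up the worker.js above and forwards the worker's messages to the page via window.postMessage(). The method names are illustrative; the message names mirror the worker's protocol, but the published package may differ.

```javascript
// Hypothetical sketch of ShakePortClient.js. It wraps the WebWorker and
// speaks the { event, ... } message protocol defined in worker.js.
// (Exported as the default export in the real file.)
class ShakePortClient {
  constructor (workerUrl = './worker.js') {
    this.workerUrl = workerUrl
    this.worker = null // created lazily in startClient()
  }

  startClient ({ url, options = {} }) {
    this.worker = new Worker(this.workerUrl)
    // Relay worker messages to the page so the app can listen for them
    this.worker.onmessage = (e) => window.postMessage(e.data, window.origin)
    this.worker.postMessage({ event: 'start', url, options })
  }

  setupBidirectional () {
    this.worker.postMessage({ event: 'setup-bidirectional' })
  }

  write (data) {
    this.worker.postMessage({ event: 'write-bidirectional', data })
  }

  stopClient () {
    this.worker.postMessage({ event: 'stop' })
  }
}
```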
Install it in your project:
npm install @mastashake08/shakeport-client
Import In Your Project
import ShakePortClient from '@mastashake08/shakeport-client'
const spc = new ShakePortClient();
spc.startClient({
url:'<webtransport_server_url>'
})
Responding To Messages
window.addEventListener("message", (event) => {
// Do we trust the sender of this message? (might be
// different from what we originally opened, for example).
if (event.origin !== "http://example.com")
return;
// event.data contains the payload posted by ShakePortClient,
// e.g. { event: 'data-read', data: <Uint8Array> }
}, false);
I Created A Laravel 10 OpenAI ChatGPT Composer Package
In my last tutorial, I created a Laravel site that featured the OpenAI ChatGPT API. It was very fun to create, and while I was thinking of ways to improve it, the idea dawned on me to make it a Composer package and share it with the world. This took less time than I expected, honestly, and I have already integrated my package into a few applications (it feels good to composer require your own shit!).
What’s The Benefit?
1. Reusability
I know for a fact that I will be using OpenAI in a majority of my projects going forward. Instead of rewriting functionality over and over and over, I'm going to package up the functionality that I know I will need every time.
Work smarter, not harder
2. Modularity
Breaking my code into modules allows me to think about my applications from a high-level view. I call it Lego Theory: all of my modules are Legos, and my app is the Lego castle I'm building.
3. Discoverability
Publishing packages directly helps my brand via discoverability. If I produce high-quality, in-demand open-source software then more people will use it, and the more people that use it then the more people know who I am. This helps me when I am doing things like applying for contracts or conference talks.
Creating The Code
Scaffold From Spatie
The wonderful engineers at Spatie have created a package skeleton for Laravel, which is what I used as a starting point for my Laravel package. If you are using GitHub you can use the repo as a template, or you can clone it from the command line.
There is a configure.php script that will replace all of the placeholders with the values you provide for your package:
php ./configure.php
Now we can get to the nitty gritty.
The Front-Facing Object
After running the configure script you will have a main class that has been renamed; in my case it was called LaravelOpenaiApi.php, and it looks like this:
<?php
namespace Mastashake\LaravelOpenaiApi;
use OpenAI\Laravel\Facades\OpenAI;
use Mastashake\LaravelOpenaiApi\Models\Prompt;
class LaravelOpenaiApi
{
function generateResult(string $type, array $data): Prompt {
switch ($type) {
case 'text':
return $this->generateText($data);
case 'image':
return $this->generateImage($data);
default:
// Guard against unsupported types so the Prompt return type always holds
throw new \InvalidArgumentException("Unknown prompt type: {$type}");
}
}
function generateText($data) {
$result = OpenAI::completions()->create($data);
return $this->savePrompt($result, $data);
}
function generateImage($data) {
$result = OpenAI::images()->create($data);
return $this->savePrompt($result, $data);
}
private function savePrompt($result, $data): Prompt {
$prompt = new Prompt([
'prompt_text' => $data['prompt'],
'data' => $result
]);
$prompt->save();
return $prompt;
}
}
It can generate text and images and save the prompts; it looks at the type provided to determine which resource to generate. It's all powered by the OpenAI Laravel facade.
The Migration
The default migration will be edited to use the prompts migration from the Laravel API tutorial. Open it up and replace the contents with the following:
<?php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
/**
* Run the migrations.
*/
public function up(): void
{
Schema::create('prompts', function (Blueprint $table) {
$table->id();
$table->string('prompt_text');
$table->json('data');
$table->timestamps();
});
}
/**
* Reverse the migrations.
*/
public function down(): void
{
Schema::dropIfExists('prompts');
}
};
The Model
Create a file called src/Models/Prompt.php and copy the old Prompt code inside
<?php
namespace Mastashake\LaravelOpenaiApi\Models;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class Prompt extends Model
{
use HasFactory;
protected $guarded = [];
protected $casts = [
'data' => 'array'
];
}
The Controller
For the controllers, we have to create a BaseController and a PromptController. Create a file called src/Http/Controllers/BaseController.php
<?php

namespace Mastashake\LaravelOpenaiApi\Http\Controllers;

use Illuminate\Foundation\Bus\DispatchesJobs;
use Illuminate\Routing\Controller as BaseController;
use Illuminate\Foundation\Validation\ValidatesRequests;
use Illuminate\Foundation\Auth\Access\AuthorizesRequests;

class Controller extends BaseController
{
    use AuthorizesRequests, DispatchesJobs, ValidatesRequests;
}
Now we will create our PromptController and inherit from the BaseController
<?php

namespace Mastashake\LaravelOpenaiApi\Http\Controllers;

use Illuminate\Http\Request;
use Mastashake\LaravelOpenaiApi\LaravelOpenaiApi;

class PromptController extends Controller
{
    function generateResult(Request $request) {
        $ai = new LaravelOpenaiApi();
        $prompt = $ai->generateResult($request->type, $request->except(['type']));

        return response()->json([
            'data' => $prompt
        ]);
    }
}
OpenAI and ChatGPT can generate multiple types of responses, so we want the user to be able to choose which type of resource they want to generate, then pass that data on to the underlying engine.
The Route
Create a routes/api.php file to store our api route:
<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

/*
|--------------------------------------------------------------------------
| API Routes
|--------------------------------------------------------------------------
|
| Here is where you can register API routes for your application. These
| routes are loaded by the RouteServiceProvider and all of them will
| be assigned to the "api" middleware group. Make something great!
|
*/

Route::group(['prefix' => '/api'], function () {
    if (config('openai.use_sanctum') == true) {
        Route::middleware(['api', 'auth:sanctum'])->post(config('openai.api_url'), 'Mastashake\LaravelOpenaiApi\Http\Controllers\PromptController@generateResult');
    } else {
        Route::post(config('openai.api_url'), 'Mastashake\LaravelOpenaiApi\Http\Controllers\PromptController@generateResult');
    }
});
Depending on the values in the config file (we will get to it in a second, calm down), the user may want to use Laravel Sanctum for token-based authenticated requests. In fact, I highly suggest you do if you don’t want your token usage abused, but for development and testing I suppose it’s fine. I made it this way to make it more robust and extensible.
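If you do enable Sanctum, a client just needs to send its token in a standard Bearer header. Here is a small JavaScript sketch of what a front end calling the protected route might build; the helper name and the example payload fields are my own for illustration, not part of the package:

```javascript
// Hypothetical client-side helper: build fetch() options for the
// Sanctum-protected /api/generate-result route described above.
function buildGenerateRequest(token, payload) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json',
      // Sanctum token-based auth uses a standard Bearer header
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify(payload)
  }
}

// Usage in a browser or Node 18+ (token comes from your own auth flow):
// fetch('/api/generate-result',
//   buildGenerateRequest(token, { type: 'text', model: 'text-davinci-003', prompt: 'PHP is' }))
```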
The Config File
Create a file called config/openai.php that will hold the default config values for the package. This will be published into any application that you install this package in:
<?php

return [
    /*
    |--------------------------------------------------------------------------
    | OpenAI API Key and Organization
    |--------------------------------------------------------------------------
    |
    | Here you may specify your OpenAI API Key and organization. This will be
    | used to authenticate with the OpenAI API - you can find your API key
    | and organization on your OpenAI dashboard, at https://openai.com.
    */

    'api_key' => env('OPENAI_API_KEY'),
    'organization' => env('OPENAI_ORGANIZATION'),
    'api_url' => env('OPENAI_API_URL') !== null ? env('OPENAI_API_URL') : '/generate-result',
    'use_sanctum' => env('OPENAI_USE_SANCTUM') !== null ? env('OPENAI_USE_SANCTUM') == true : false
];
The api_key variable is the OpenAI API key.
The organization variable is the OpenAI organization, if one exists.
The api_url variable is the user-defined URL for the API routes; if one is not defined, it defaults to /api/generate-result.
The use_sanctum variable defines whether the API will use the auth:sanctum middleware.
The Command
The package includes an artisan command for generating results from the command line. Create a file called src/Commands/LaravelOpenaiApiCommand.php
<?php

namespace Mastashake\LaravelOpenaiApi\Commands;

use Illuminate\Console\Command;
use Mastashake\LaravelOpenaiApi\LaravelOpenaiApi;

class LaravelOpenaiApiCommand extends Command
{
    public $signature = 'laravel-openai-api:generate-result';

    public $description = 'Generate Result';

    public function handle(): int
    {
        $data = [];
        $suffix = null;
        $n = 1;
        $temperature = 1;
        $displayJson = false;
        $max_tokens = 16;

        $type = $this->choice(
            'What are you generating?',
            ['text', 'image'],
            0
        );

        $prompt = $this->ask('Enter the prompt');
        $data['prompt'] = $prompt;

        if ($type == 'text') {
            $model = $this->choice(
                'What model do you want to use?',
                ['text-davinci-003', 'text-curie-001', 'text-babbage-001', 'text-ada-001'],
                0
            );
            $data['model'] = $model;

            if ($this->confirm('Do you wish to add a suffix to the generated result?')) {
                $suffix = $this->ask('What is the suffix?');
            }
            $data['suffix'] = $suffix;

            if ($this->confirm('Do you wish to set the max tokens used (defaults to 16)?')) {
                $max_tokens = (int) $this->ask('Max number of tokens to use?');
            }
            $data['max_tokens'] = $max_tokens;

            if ($this->confirm('Change temperature')) {
                $temperature = (float) $this->ask('What is the temperature (0-2)?');
                $data['temperature'] = $temperature;
            }
        }

        if ($this->confirm('Multiple results?')) {
            $n = (int) $this->ask('Number of results?');
            $data['n'] = $n;
        }

        $displayJson = $this->confirm('Display JSON results?');

        $ai = new LaravelOpenaiApi();
        $result = $ai->generateResult($type, $data);

        if ($displayJson) {
            $this->comment($result);
        }

        if ($type == 'text') {
            $choices = $result->data['choices'];
            foreach ($choices as $choice) {
                $this->comment($choice['text']);
            }
        } else {
            $images = $result->data['data'];
            foreach ($images as $image) {
                $this->comment($image['url']);
            }
        }

        return self::SUCCESS;
    }
}
I’m going to add more inputs later, but for now this is a good starting point for getting back data. I tried to make it as verbose as possible, and I’m always welcoming PRs if you want to add functionality 🙂
The Service Provider
All Laravel packages must have a service provider. Open up the default one in the root directory; in my case it was called LaravelOpenaiApiServiceProvider:
<?php

namespace Mastashake\LaravelOpenaiApi;

use Spatie\LaravelPackageTools\Package;
use Spatie\LaravelPackageTools\PackageServiceProvider;
use Mastashake\LaravelOpenaiApi\Commands\LaravelOpenaiApiCommand;
use Spatie\LaravelPackageTools\Commands\InstallCommand;

class LaravelOpenaiApiServiceProvider extends PackageServiceProvider
{
    public function configurePackage(Package $package): void
    {
        /*
         * This class is a Package Service Provider
         *
         * More info: https://github.com/spatie/laravel-package-tools
         */
        $package
            ->name('laravel-openai-api')
            ->hasConfigFile(['openai'])
            ->hasRoute('api')
            ->hasMigration('create_openai_api_table')
            ->hasCommand(LaravelOpenaiApiCommand::class)
            ->hasInstallCommand(function (InstallCommand $command) {
                $command
                    ->publishConfigFile()
                    ->publishMigrations()
                    ->askToRunMigrations()
                    ->copyAndRegisterServiceProviderInApp()
                    ->askToStarRepoOnGitHub('mastashake08/laravel-openai-api');
            });
    }
}
The name is the name of our package; next we pass in the config file created above. Of course, we have to add our API routes and migration. Lastly, we add our commands.
Testing It In Laravel Project
composer require mastashake08/laravel-openai-api
You can run that command in any Laravel project; I used it in the Laravel API tutorial I did last week. If you run php artisan route:list you will see the API is in your project!
Hey look mom it’s my package!!!
Check The Repo!
This is actually my first-ever Composer package! I would love feedback, stars, and PRs that would go a long way. You can check out the repo here on GitHub. Please let me know in the comments if this tutorial was helpful and share on social media.
Jyrone Parker is an American software engineer and entrepreneur from Louisville, KY. He is the owner and CEO of J Computer Solutions LLC, a full-scale IT agency that deals with hardware, networking, and software development. He is also a tech rapper and producer that goes by the name Mastashake.
Follow Me On Youtube!
Follow my YouTube account
Become A Sponsor
Open-source work is free to use but it is not free to develop. If you enjoy my content and would like to see more please consider becoming a sponsor on Github or Patreon! Not only do you support me but you are funding tech programs for at risk youth in Louisville, Kentucky.
Join The Newsletter
By joining the newsletter, you get first access to all of my blogs, events, and other brand-related content delivered directly to your inbox. It’s 100% free and you can opt out at any time!
Check The Shop
You can also consider visiting the official #CodeLife shop! I have my own clothing/accessory line for techies as well as courses designed by me covering a range of software engineering topics.
Since its launch, ChatGPT has taken the world by storm. People are losing their minds over the impending AI overlords destroying society and making humanity its slave 😂
This is not our reality, people. No time soon, anyway.
Seriously though, we are talking BILLIONS of weights across MILLIONS of neurons; I geek out every time I think about it. I keep getting asked what my thoughts are on ChatGPT, and instead of repeating myself for the 99934394398th time, I decided to write a blog post and do a Laravel code tutorial to show developers just HOW EASY it is to use this OpenAI software.
Use Cases For ChatGPT
Asset Generation
With ChatGPT you can generate images. This was first made popular with the DALL-E project. AI-generated art is on the rise and there are many opportunities to be had. Images aren’t the end, though: with ChatGPT you can generate any kind of asset. Think about how this will affect gaming! You can generate 3D assets, audio assets, texture assets, and more.
Code Generation
In my opinion, this is where ChatGPT shines. The Codex project allows you to use ChatGPT to generate code, and the results are scarily amazing. If you are a solo developer, you can leverage the power of artificial intelligence to speed-run through proofs of concept. I have seen videos of people programming whole apps with ChatGPT.
Text Generation
Using ChatGPT you can generate text. Many companies are integrating ChatGPT to create contextually accurate text responses. One of my favorite integrations of this is the Twitter bot ChatGPTBot. However, some people are scared of this technology, such as the Rabbi who used AI to create a sermon. I personally think e-commerce will be dominated by AI-driven product descriptions.
I have integrated ChatGPT into MobiSnacks to create product descriptions for chefs. The chefs can put in keywords and ChatGPT spits out 3 descriptions for the chefs to use as a starting point. The next step is to use ChatGPT to generate contextual ads for the platform and for the chefs as an additional service.
GPT Audiobook
GPT Audiobook logo
I created a proof of concept called GPT Audiobook. It uses ChatGPT to create audiobooks and spits them out as SSML documents for text-to-speech software to read. I’m currently creating an Android and iOS app to go with the web app. In the future, I plan on adding rich structured data snippets to display the books on Google and other search engines. Even the logo for GPT Audiobook was MADE WITH CHATGPT!
The Laravel ChatGPT API
Overview
The Laravel API will be very simple: one route, one model, and one controller. The model will be called Prompt; a prompt will have two fields, prompt_text and data. The controller will have one method called generateResult that will use the OpenAI SDK to communicate with ChatGPT and generate the result. Finally, there will be a POST route called /generate-result which saves the model and returns the JSON.
Listen To Some Hacker Music While You Code
Follow me on Spotify I make Tech Trap music
Creating The Application
For this tutorial, I am using a Mac with Docker. To start, open up the terminal and create a new Laravel application. If you are using Laravel Sail like I am, the installer from the Laravel docs works well (swap example-app for whatever you want to call the project):
curl -s "https://laravel.build/example-app" | bash
Afterward, cd into the application and add the OpenAI Laravel package, which will power our ChatGPT logic.
composer require openai-php/laravel
This is the only composer requirement we will need for this tutorial. Now we need to do our configuration for OpenAI.
Configuring The OpenAI SDK
The OpenAI Laravel package comes with a config file that needs to be published before we get started coding. In your terminal, paste the following command (this is the publish command from the openai-php/laravel README):
./vendor/bin/sail artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"
This will create a config/openai.php configuration file in your project, which you can modify to your needs using environment variables. You need to retrieve your OpenAI developer key from here and paste it in your .env file.
OPENAI_API_KEY=sk-...
Ok, that’s it for the SDK configuration.
Database & Model
The Prompt model will have a prompt_text field that will hold the text entered by the user, and a data JSON field that holds the result from OpenAI. Let’s create the model and the migration all in one:
./vendor/bin/sail artisan make:model -m Prompt
Open up the created migration and paste in the following:
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    /**
     * Run the migrations.
     */
    public function up(): void
    {
        Schema::create('prompts', function (Blueprint $table) {
            $table->id();
            $table->string('prompt_text');
            $table->json('data');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     */
    public function down(): void
    {
        Schema::dropIfExists('prompts');
    }
};
Next open up the Prompt model and paste the following:
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Prompt extends Model
{
    use HasFactory;

    protected $guarded = [];

    protected $casts = [
        'data' => 'array'
    ];
}
Next, generate the controller with ./vendor/bin/sail artisan make:controller PromptController. Open it up and let’s create our generateResult function:
<?php

namespace App\Http\Controllers;

use OpenAI\Laravel\Facades\OpenAI;
use App\Models\Prompt;
use Illuminate\Http\Request;

class PromptController extends Controller
{
    function generateResult(Request $request) {
        $result = OpenAI::completions()->create($request->all());

        $prompt = new Prompt([
            'prompt_text' => $request->prompt,
            'data' => $result
        ]);
        $prompt->save();

        return response()->json($prompt);
    }
}
So what’s going on here? We import the OpenAI SDK and simply pass the $request data to the completions API. If you need a reference, you can check the OpenAI API reference. We then create a new Prompt model, passing in the prompt text and the resulting data. The last thing to do is create the route and we are done!
Creating The API Route
Open up the routes/api.php routes file and update it to call the PromptController@generateResult function
<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

/*
|--------------------------------------------------------------------------
| API Routes
|--------------------------------------------------------------------------
|
| Here is where you can register API routes for your application. These
| routes are loaded by the RouteServiceProvider and all of them will
| be assigned to the "api" middleware group. Make something great!
|
*/

Route::middleware('auth:sanctum')->get('/user', function (Request $request) {
    return $request->user();
});

Route::post('/generate-result', 'App\Http\Controllers\PromptController@generateResult');
Now we are done. Make sure you plug in your API key and make a test request! Here is how we can test with cURL:
curl -X POST http://localhost/api/generate-result -H "Content-Type: application/json" --data-binary @- <<DATA
{
"model": "text-davinci-003",
"prompt" : "PHP is"
}
DATA
Next Steps
The next step that I want to do with this project is to create it as a Laravel package so developers can put an OpenAI ChatGPT API in their backends easily. Afterward, I would like to add functionality for issuing tokens and possibly even a monetization module powered by Stripe and Laravel Cashier. Please leave comments on this article and let me know what you would like to see and I will build it! You can see the GitHub repository here.
**UPDATE** I Created The Laravel Composer Package
I couldn’t resist an opportunity to create a composer package!
Shortly after writing this tutorial, I went ahead and created a Composer package for the Laravel OpenAI ChatGPT API. If you want to implement this functionality from the tutorial and more then please check it out! I’m actively looking for PRs from fellow developers! I can’t wait to see how you all use and integrate this package into your web applications and business services!
You can install the package using the following command:
composer require mastashake08/laravel-openai-api
Afterward, you can publish the migrations and config files with the package’s install command (registered via the hasInstallCommand call in the service provider, following the Spatie package-tools convention):
php artisan laravel-openai-api:install
Finally, start to use it in your code! You can access the object directly, via the included API routes, or with the interactive Artisan CLI command.
Via Code
$laravelOpenaiApi = new Mastashake\LaravelOpenaiApi\LaravelOpenaiApi();
echo $laravelOpenaiApi->generateResult($type, $data);
Via Artisan
php artisan laravel-openai-api:generate-result
Via API
You can set the OPENAI_API_URL in the .env file; if a value is not set, it defaults to /api/generate-result.
/api/generate-result POST {openai_data}
The data object requires a type property that is set to either text or image. Depending on the type, provide the JSON referenced in the OpenAI API Reference.
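To make that concrete, here is an illustrative JavaScript sketch of the two request shapes. The type field is the package’s dispatch key from above; the remaining fields are just example completion/image parameters, so treat the specific values here as assumptions rather than requirements:

```javascript
// Hypothetical request-body builders for the /api/generate-result route.
// Everything besides `type` is forwarded to the corresponding OpenAI endpoint.
function textRequest(prompt) {
  return { type: 'text', model: 'text-davinci-003', prompt, max_tokens: 16 }
}

function imageRequest(prompt) {
  return { type: 'image', prompt, n: 1, size: '256x256' }
}
```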
I’m going to continue to work on both this package and the demo tutorial and will update you all on the progress for sure. Thank you for taking time to read this tutorial, if you found it helpful please leave a comment and a like!
Speech Recognition & Speech Synthesis In The Browser With Web Speech API
Voice apps are now first-class citizens on the web thanks to the Speech Recognition and Speech Synthesis interfaces, which are part of the bigger Web Speech API. The following overview is taken from the MDN docs:
The Web Speech API makes web apps able to handle voice data. There are two components to this API:
Speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize voice context from an audio input (normally via the device’s default speech recognition service) and respond appropriately. Generally you’ll use the interface’s constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device’s microphone. The SpeechGrammar interface represents a container for a particular set of grammar that your app should recognize. Grammar is defined using JSpeech Grammar Format (JSGF.)
Speech synthesis is accessed via the SpeechSynthesis interface, a text-to-speech component that allows programs to read out their text content (normally via the device’s default speech synthesizer.) Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method.
Brief on Web Speech API from MDN
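In plain browser JavaScript, the synthesis half boils down to a couple of calls. The pickVoice helper below is my own illustration (not part of the spec); the browser-only calls are shown as comments since they only run where speechSynthesis exists:

```javascript
// Hypothetical helper: pick the first available voice matching a BCP-47
// language tag, falling back to the first voice in the list.
function pickVoice(voices, lang) {
  return voices.find(v => v.lang === lang) || voices[0] || null
}

// In a browser you would then do something like:
// const utterance = new SpeechSynthesisUtterance('Hello world')
// utterance.voice = pickVoice(speechSynthesis.getVoices(), 'en-US')
// speechSynthesis.speak(utterance)
```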
So basically, with the Web Speech API you can work with voice data: you can make your apps speak to their users, and you can run commands based on what your user speaks. This opens up a host of opportunities for voice-activated CLIENT-SIDE apps. I love building open-source software, so I decided to create an NPM package for working with the Web Speech API called SpeechKit, and I couldn’t wait to share it with you! I suppose this is a continuation of Creating A Voice Powered Note App Using Web Speech.
Simplifying The Process With SpeechKit
I decided that, starting this year, I would contribute more to the open-source community and provide packages (primarily JavaScript, PHP, and Rust) for the world to use. I use the Web Speech API a lot in my personal projects, so why not make it an NPM package? You can find the source code here.
Listen To Some Hacker Music While You Code
Follow me on Spotify I make Tech Trap music
Features
Speak Commands
Listen for voice commands
Add your own grammar
Transcribe words and output as file.
Generate SSML from text
npm install @mastashake08/speech-kit
Import
import SpeechKit from '@mastashake08/speech-kit'
Instantiate A New Instance
new SpeechKit(options)
listen()
Start listening for speech recognition.
stopListen()
Stop listening for speech recognition.
speak(text)
Use Speech Synthesis to speak text.
Params: string text – Text to be spoken
getResultList() ⇒ SpeechRecognitionResultList
Get current SpeechRecognition resultsList.
Returns: SpeechRecognitionResultList – List of Speech Recognition results
getText() ⇒ string
Return text
Returns: string – resultList as text string
getTextAsFile() ⇒ Blob
Return text file with results.
Returns: Blob – transcript
getTextAsJson() ⇒ object
Return text as JSON.
Returns: object – transcript
addGrammarFromUri()
Add grammar to the SpeechGrammarList from a URI.
Params: string uri – URI that contains grammar
addGrammarFromString()
Add grammar to the SpeechGrammarList from a Grammar String.
Returns: SpeechGrammarList – current SpeechGrammarList object
getRecognition() ⇒ SpeechRecognition
Return the current SpeechRecognition object.
Returns: SpeechRecognition – current SpeechRecognition object
getSynth() ⇒ SpeechSynthesis
Return the current Speech Synthesis object.
Returns: SpeechSynthesis – current instance of Speech Synthesis object
getVoices() ⇒ Array<SpeechSynthesisVoice>
Return the current voices available to the user.
Returns: Array<SpeechSynthesisVoice> – Array of available Speech Synthesis Voices
setSpeechText()
Set the SpeechSynthesisUtterance object with the text that is meant to be spoken.
Params: string text – Text to be spoken
setSpeechVoice()
Set the SpeechSynthesisVoice object with the desired voice.
Params: SpeechSynthesisVoice voice – Voice to be spoken
getCurrentVoice() ⇒ SpeechSynthesisVoice
Return the current voice being used in the utterance.
Returns: SpeechSynthesisVoice – current voice
Example Application
In this example vue.js application there will be a text box with three buttons underneath, when the user clicks the listen button, SpeechKit will start listening to the user. As speech is detected, the text will appear in the text box. The first button under the textbox will tell the browser to share the page, the second button will speak the text in the textbox while the third button will control recording.
Home page from the github.io page
I created this in Vue.js and (for the sake of time and laziness) I reused all of the default components and rewrote the HelloWorld component. So let’s get started by creating a new Vue application.
Creating The Application
Open up your terminal and input the following command to create a new vue application:
vue create speech-kit-demo
It doesn’t really matter what settings you choose; after you get that squared away, it is time to add our dependency.
Installing SpeechKit
Still inside your terminal we will add the SpeechKit dependency to our package.json file with the following command:
npm install @mastashake08/speech-kit
Now with that out of the way we can begin creating our component functionality.
Editing HelloWorld.vue
Open up your HelloWorld.vue file in your components/ folder and change it to look like this:
<template>
  <div class="hello">
    <h1>{{ msg }}</h1>
    <p>
      Simple demo to demonstrate the Web Speech API using the
      <a href="https://github.com/@mastashake08/speech-kit" target="_blank" rel="noopener">SpeechKit npm package</a>!
    </p>
    <textarea v-model="voiceText"/>
    <ul>
      <button @click="share">Share</button>
      <button @click="speak">Speak</button>
      <button @click="listen" v-if="!isListen">Listen</button>
      <button @click="stopListen" v-else>Stop Listen</button>
    </ul>
  </div>
</template>

<script>
import SpeechKit from '@mastashake08/speech-kit'

export default {
  name: 'HelloWorld',
  props: {
    msg: String
  },
  mounted () {
    this.sk = new SpeechKit({ rate: 0.85 })
    document.addEventListener('onspeechkitresult', (e) => this.getText(e))
  },
  data () {
    return {
      voiceText: 'SPEAK ME',
      sk: {},
      isListen: false
    }
  },
  methods: {
    share () {
      const text = `Check out the SpeechKit Demo and speak this text! ${this.voiceText} ${document.URL}`
      try {
        if (!navigator.canShare) {
          this.clipBoard(text)
        } else {
          navigator.share({
            text: text,
            url: document.URL
          })
        }
      } catch (e) {
        this.clipBoard(text)
      }
    },
    async clipBoard (text) {
      const type = 'text/plain';
      const blob = new Blob([text], { type });
      const data = [new window.ClipboardItem({ [type]: blob })];
      await navigator.clipboard.write(data)
      alert('Text copied to clipboard')
    },
    speak () {
      this.sk.speak(this.voiceText)
    },
    listen () {
      this.sk.listen()
      this.isListen = !this.isListen
    },
    stopListen () {
      this.sk.stopListen()
      this.isListen = !this.isListen
    },
    getText (evt) {
      this.voiceText = evt.detail.transcript
    }
  }
}
</script>

<!-- Add "scoped" attribute to limit CSS to this component only -->
<style scoped>
h3 {
  margin: 40px 0 0;
}
ul {
  list-style-type: none;
  padding: 0;
}
li {
  display: inline-block;
  margin: 0 10px;
}
a {
  color: #42b983;
}
</style>
As you can see, almost all of the functionality is offloaded to the SpeechKit library. You can see a live version of this at https://mastashake08.github.io/speech-kit-demo/. In the mounted() hook we initialize our SpeechKit instance and add an event listener on the document for the onspeechkitresult event emitted by the SpeechKit class, which is dispatched every time a transcript is available from speech recognition. The listen() and stopListen() methods simply call the corresponding SpeechKit functions and toggle a boolean indicating whether recording is in progress. Finally, the share() method uses the Web Share API to share the URL if available; otherwise it falls back to the Clipboard API, copying the text to the user’s clipboard for manual sharing.
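The fallback decision in share() can be distilled into a small pure function. This is only a sketch of the logic above, with the navigator object passed in as a parameter so it can run outside a browser; the helper name is hypothetical:

```javascript
// Decide how to share: use the Web Share API when the browser exposes
// navigator.canShare, otherwise fall back to the clipboard (mirrors share()).
function chooseShareMethod(nav) {
  return nav && nav.canShare ? 'web-share' : 'clipboard'
}
```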
Want To See More Tutorials?
Join my newsletter and get weekly updates from my blog delivered straight to your inbox.
Check The Shop!
Consider purchasing an item from the #CodeLife shop, all proceeds go towards our coding initiatives.
Imagine my frustration when I get dozens of DMs, emails, and other messages asking when I was going to upgrade my Discord Twitter bot to be compliant with the latest Twitter changes. Like damn bro, I have other things to do lol but alas I can’t let my peeps down. In this blog entry, I will show you what I did to upgrade my codebase to use the Twitter V2 API to communicate with the Discord server to send out my tweets.
Twitter has been an integral part of social media and has become a platform for information exchange, news updates, and social interactions. Twitter offers an API that allows developers to create applications that can interact with Twitter data. Recently, Twitter introduced a new version of its API called the Twitter V2 API, which includes several updates and improvements. One of the notable features of the Twitter V2 API is the Rules API, which enables developers to create complex filters and rules for retrieving Tweets and other Twitter data.
v2 of the Discord Twitter bot
Upgrading The Package.json File
We are no longer using the Twit npm package and instead using the twitter-v2 npm package. Open your package.json file and change it to the following:
{
  "name": "discord-twitter-bot",
  "version": "2.0.0",
  "description": "A discord bot that sends messages to a channel whenever a specific user tweets.",
  "main": "main.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/mastashake08/discord-twitter-bot.git"
  },
  "keywords": [
    "discord",
    "twitter",
    "bot"
  ],
  "author": "mastashake08",
  "license": "ISC",
  "bugs": {
    "url": "https://github.com/mastashake08/discord-twitter-bot/issues"
  },
  "homepage": "https://github.com/mastashake08/discord-twitter-bot#readme",
  "dependencies": {
    "discord.js": "^13.8.1",
    "dotenv": "^8.2.0",
    "twitter-v2": "^1.1.0"
  },
  "engines": {
    "npm": ">=7.0.0",
    "node": ">=16.0.0"
  }
}
Listen To Some Hacker Music While You Code
Follow me on Spotify I make Tech Trap music
Changes To The Twitter API
In order to use the stream API, we have to set up stream rules. We only want to show tweets from your own account, so in your .env file add a new field:
TWITTER_USER_NAME=
Afterward, we listen to the stream pretty much as before. Open up the main.js file and update it to the following.
require('dotenv').config()
const Twit = require('twitter-v2')
const { Client } = require('discord.js');
const client = new Client({ intents: 2048 });
var T = new Twit({
// consumer_key: process.env.TWITTER_CONSUMER_KEY,
// consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
// access_token_key: process.env.TWITTER_ACCESS_TOKEN,
// access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET,
// timeout_ms: 60*1000, // optional HTTP request timeout to apply to all requests.
// strictSSL: true, // optional - requires SSL certificates to be valid.
bearer_token: process.env.BEARER_TOKEN
})
// //only show owner tweets
async function sendMessage (tweet, client){
console.log(tweet)
const url = "https://twitter.com/user/status/" + tweet.id;
try {
const channel = await client.channels.fetch(process.env.DISCORD_CHANNEL_ID)
channel.send(`${process.env.CHANNEL_MESSAGE} ${url}`)
} catch (error) {
console.error(error);
}
}
async function listenForever(streamFactory, dataConsumer) {
try {
for await (const { data } of streamFactory()) {
dataConsumer(data);
}
// The stream has been closed by Twitter. It is usually safe to reconnect.
console.log('Stream disconnected healthily. Reconnecting.');
listenForever(streamFactory, dataConsumer);
} catch (error) {
// An error occurred so we reconnect to the stream. Note that we should
// probably have retry logic here to prevent reconnection after a number of
// closely timed failures (may indicate a problem that is not downstream).
console.warn('Stream disconnected with error. Retrying.', error);
// listenForever(streamFactory, dataConsumer);
}
}
async function setup () {
const endpointParameters = {
'tweet.fields': [ 'author_id', 'conversation_id' ],
'expansions': [ 'author_id', 'referenced_tweets.id' ],
'media.fields': [ 'url' ]
}
try {
console.log('Setting up Twitter....')
const body = {
"add": [
{"value": "from:"+ process.env.TWITTER_USER_NAME, "tag": "from Me!!"}
]
}
const r = await T.post("tweets/search/stream/rules", body);
} catch (err) {
console.log(err)
}
listenForever(
() => T.stream('tweets/search/stream'),
(data) => sendMessage(data, client)
);
}
client.login(process.env.DISCORD_TOKEN)
client.on('ready', () => {
console.log('Discord ready')
setup()
})
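The comments inside listenForever above note that real retry logic is still missing. Here is one way it could be sketched, using exponential backoff with a cap so closely timed failures don't hammer the endpoint. The helper names are mine, not part of the original bot:

```javascript
// Sketch only: exponential backoff for stream reconnection.
function backoffDelayMs(attempt, baseMs = 1000, maxMs = 60000) {
  // 1s, 2s, 4s, ... capped at 60s
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

async function listenWithBackoff(streamFactory, dataConsumer, attempt = 0) {
  try {
    for await (const { data } of streamFactory()) {
      attempt = 0; // a healthy stream resets the failure counter
      dataConsumer(data);
    }
    // Closed by Twitter without error: usually safe to reconnect immediately.
    listenWithBackoff(streamFactory, dataConsumer);
  } catch (error) {
    const delayMs = backoffDelayMs(attempt);
    console.warn(`Stream disconnected with error. Retrying in ${delayMs}ms.`, error);
    setTimeout(() => listenWithBackoff(streamFactory, dataConsumer, attempt + 1), delayMs);
  }
}
```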
What is the Twitter V2 Rules API?
The Twitter V2 Rules API is a set of endpoints that allows developers to create, manage and delete rules for filtering and retrieving Twitter data. With the Rules API, developers can define specific criteria that must be met for a Tweet or a stream of Tweets to be returned. This means developers can create more sophisticated and complex search queries and filters than before.
The Twitter V2 Rules API provides a comprehensive set of operators that can be used to create rules. These operators include “contains,” “hashtag,” “from,” “to,” “mention,” “URL,” “geo,” “lang,” and “is.” With these operators, developers can create rules based on keywords, hashtags, location, language, and many other criteria.
How to use the Twitter V2 Rules API?
To use the Twitter V2 Rules API, developers need to create a Twitter Developer Account and obtain API keys and access tokens. Once the developer has the necessary credentials, they can use the Rules API to create and manage rules for filtering Twitter data.
To create a rule, developers can use the POST /2/tweets/search/stream/rules endpoint, which accepts a JSON payload containing the rule definition. For example, to create a rule that returns Tweets containing the hashtag “#apple” and “iPhone,” the following JSON payload can be used:
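The payload itself did not survive in the original post; reconstructed here from the surrounding description and the rule body used in the bot code above, it would look like this:

```json
{
  "add": [
    { "value": "#apple iPhone", "tag": "apple iphone rule" }
  ]
}
```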
This payload contains the “add” operator, which adds a new rule to the stream. The “value” field contains the search query, and the “tag” field is an optional label that can be used to identify the rule.
Once the rule is added, developers can use the GET /2/tweets/search/stream endpoint to retrieve the stream of Tweets that match the defined rule.
Why use the Twitter V2 Rules API?
The Twitter V2 Rules API offers several benefits for developers. Firstly, it provides more powerful and sophisticated filtering capabilities, allowing developers to create more precise and targeted search queries. This is particularly useful for businesses and organizations that need to monitor Twitter for specific keywords, trends, or events.
Secondly, the Twitter V2 Rules API is more reliable and scalable than previous versions. It supports higher throughput and lower latency, making it easier to retrieve and process large volumes of Twitter data.
Lastly, the Twitter V2 Rules API provides better documentation and support, making it easier for developers to get started and integrate with Twitter. The API also includes features such as pagination and rate limiting, which help developers manage their usage and avoid hitting API limits.
The Twitter V2 Rules API is a powerful and flexible tool for developers who need to retrieve and filter Twitter data. With the Rules API, developers can create complex search queries and filters that enable them to access the specific data they need. This makes the API particularly useful for businesses, organizations, and researchers who need to monitor Twitter for specific keywords, trends, or events. If you are a developer looking to work with Twitter data, the Twitter V2 Rules API is definitely worth exploring.
Usage
Via Docker
docker run --env-file=<path-to-your-.env-file> -d --name=<container-name> mastashake08/discord-twitter-bot:latest
This command pulls the image directly from Docker Hub; all you have to do is pass in the path to your .env file with your tokens.
If you decide to run from source then pull the repo, set the .env and run the code
git clone https://github.com/mastashake08/discord-twitter-bot.git
npm install
cp .env.example .env
#set values for TWITTER and DISCORD APIs in .env
TWITTER_USER_NAME=
DISCORD_TOKEN=
DISCORD_CHANNEL_ID=
BEARER_TOKEN=
CHANNEL_MESSAGE=
node main.js
Congrats, It’s Updated!
See it in action!
That’s pretty much all we had to do to update everything to use the new API. The added benefit is that it won’t show retweets in your Discord server like before! If you enjoyed this, consider becoming a patron on Patreon and help fund in-person coding classes for kids in Louisville, KY!
Jyrone Parker is an American software engineer and entrepreneur from Louisville, KY. He is the owner and CEO of J Computer Solutions LLC, a full-scale IT agency that deals with hardware, networking, and software development. He is also a tech rapper and producer that goes by the name Mastashake.
Follow Me On Youtube!
Follow my YouTube account
Become A Sponsor
Open-source work is free to use but it is not free to develop. If you enjoy my content and would like to see more please consider becoming a sponsor on Github or Patreon! Not only do you support me but you are funding tech programs for at risk youth in Louisville, Kentucky.
Join The Newsletter
By joining the newsletter, you get first access to all of my blogs, events, and other brand-related content delivered directly to your inbox. It’s 100% free and you can opt out at any time!
Check The Shop
You can also consider visiting the official #CodeLife shop! I have my own clothing/accessory line for techies as well as courses designed by me covering a range of software engineering topics.
In my last YouTube video, I was asked to implement Google Drive upload functionality for saving screen recordings. I thought this was a marvelous idea and immediately got to work! We already added OAuth login via Google and Laravel in the last tutorial to interact with the Youtube Data v3 API, so with a few simple backend tweaks, we can add Google Drive as well!
Steps To Accomplish
The functionality I want to add is just uploading to Google Drive, with no editing or listing. Keep things simple! This is going to require the following steps:
Add Google Drive scopes to Laravel Socialite
Create a function to upload the file to the Google Drive API endpoint
Pretty easy if I do say so myself. Let’s get started with the backend.
Adding Google Drive Scopes To Laravel Socialite
We already added scopes for YouTube in the last tutorial, so thankfully not a whole lot of work is needed to add Google Drive scopes. Open up your routes/api.php file and update the scopes array to include the new scope needed to interact with Google Drive (for uploads, the https://www.googleapis.com/auth/drive.file scope is sufficient).
Make sure you enable the API in the Google cloud console! Now we head over to the frontend Vue application and let’s add our markup and functions.
Open the Home.vue and we are going to add a button in our list of actions for uploading to Google Drive
<t-button v-on:click="uploadToDrive" v-if="uploadReady" class="ml-10">Upload To Drive 🗄️</t-button>
In the methods add a function called uploadToDrive() inside put the following
async uploadToDrive () {
let metadata = {
'name': 'Screen Recorder Pro - ' + new Date(), // Filename at Google Drive
'mimeType': 'video/webm', // mimeType at Google Drive (the recording is a webm Blob)
}
let form = new FormData();
form.append('metadata', new Blob([JSON.stringify(metadata)], {type: 'application/json'}));
form.append('file', this.file);
await fetch('https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart', {
method: 'POST', // *GET, POST, PUT, DELETE, etc.
mode: 'cors', // no-cors, *cors, same-origin
cache: 'no-cache',
headers: {
// Note: the browser sets Content-Length automatically for FormData bodies
Authorization: `Bearer ${this.yt_token}`
},
body: form
})
alert('Video uploaded to Google Drive!')
}
Inside this function we create an HTTP POST request to the Google Drive endpoint for uploading files. We pass a FormData object that contains some metadata about the object and the actual file itself. After the file is uploaded the user is alerted that their video is stored!
Screen Recorder Pro Google Drive upload confirmation
What’s Next?
Next, we will add cloud storage that you can share, using Amazon S3 and the Web Share API! Finally, we will add monetization and this project will be wrapped up! If you enjoyed this, please give the app a try at https://recorder.jcompsolu.com
In this tutorial series, we will be building a WebRTC Google Meet clone using Vue.js. All of the source code is free and available on Github. If you found this tutorial to be helpful and want to help keep this site free for others, consider becoming a patron! The application will allow you to join a room by ID; anyone who joins the room with that ID instantly joins the call. In this first iteration, we can share voice, video, and screens!
Setting Up The Vue Application
Let’s go ahead and create the Vue application and add our WebRTC dependency, vue-webrtc. This dependency provides all of the functionality we need in a simple web component!
vue create google-meet-clone; cd google-meet-clone; npm install --save vue-webrtc
All of the functionality is built in the App.vue page (for now). Let’s open it up and add the following:
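The original listing did not survive in the post, so here is a minimal sketch of what App.vue could look like. It assumes the vue-webrtc component API (join/leave/shareScreen methods, joined-room/left-room events, a room-id prop); exact names and registration may differ by package version:

```html
<!-- Sketch of App.vue; prop/event names follow the vue-webrtc README -->
<template>
  <div id="app">
    <input v-model="roomId" placeholder="Room ID" />
    <button @click="joined ? leaveRoom() : joinRoom()">
      {{ joined ? 'Leave Room' : 'Join Room' }}
    </button>
    <button v-if="joined" @click="shareScreen">Share Screen</button>
    <vue-webrtc
      ref="webrtc"
      width="100%"
      :room-id="roomId"
      @joined-room="joined = true"
      @left-room="joined = false"
    />
  </div>
</template>

<script>
import WebRTC from 'vue-webrtc'

export default {
  name: 'App',
  components: { 'vue-webrtc': WebRTC },
  data () {
    return { roomId: 'public-room', joined: false }
  },
  methods: {
    joinRoom () { this.$refs.webrtc.join() },
    leaveRoom () { this.$refs.webrtc.leave() },
    shareScreen () { this.$refs.webrtc.shareScreen() }
  }
}
</script>
```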
The screen has a text field for entering the room ID, which the vue-webrtc component uses to connect to a room. We also listen for some events, which we will do more with in later tutorials. For now there are two buttons: one for joining/leaving the room and one for sharing your screen. That's it! The package handles everything else and you can test it out here. In the next series we will implement recording functionality so everyone can download the meetings! If you enjoyed this please like and share this blog and subscribe to my YouTube page! In the meantime, while you wait, check out my screen recorder app tutorial!
So I wanted to add some more functionality to the app that would separate it from the competition (check it out here). At first, I was going to add YouTube functionality where the user could upload the video straight to YouTube. My brother, who is a streamer, brought up that unless I added editing capabilities, there wasn't much need for that functionality; instead, I should stream to YouTube. This made much more sense: even in my case, I usually stream myself coding from the desktop, but instead of downloading cumbersome software, I can do it straight in the browser! For this, I decided to use Laravel Socialite with a YouTube provider, while on the client side creating a YouTube class with the various functions needed to interact with the API.
Connect To Youtube!
Extending The Laravel Microservice
The Laravel part is pretty simple: first we add the Socialite and YouTube provider packages.
Now we have to edit the app/Providers/EventServiceProvider.php file
<?php
namespace App\Providers;
use Illuminate\Auth\Events\Registered;
use Illuminate\Auth\Listeners\SendEmailVerificationNotification;
use Illuminate\Foundation\Support\Providers\EventServiceProvider as ServiceProvider;
use Illuminate\Support\Facades\Event;
class EventServiceProvider extends ServiceProvider
{
/**
* The event listener mappings for the application.
*
* @var array
*/
protected $listen = [
Registered::class => [
SendEmailVerificationNotification::class,
],
\SocialiteProviders\Manager\SocialiteWasCalled::class => [
// ... other providers
\SocialiteProviders\YouTube\YouTubeExtendSocialite::class.'@handle',
],
];
/**
* Register any events for your application.
*
* @return void
*/
public function boot()
{
//
}
}
Next we need to set the .env file and add the client secret, recorder URL and redirect URL
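The .env listing was lost from the post. Based on the SocialiteProviders convention (config/services.php reads the YOUTUBE_* keys), it would look something like the following; the variable names here are assumptions:

```
YOUTUBE_CLIENT_ID=your-google-client-id
YOUTUBE_CLIENT_SECRET=your-google-client-secret
YOUTUBE_REDIRECT_URI=https://your-api-host/api/callback/youtube
RECORDER_URL=https://recorder.jcompsolu.com
```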
If you have worked with Laravel Socialite in the past then all of this is familiar. Finally we need to edit our routes/api.php file and add our two API routes for interacting with Youtube.
The callback function redirects us to the web app and the reason for this will become clear next.
The Client Side
On the web app we need to create a Youtube class that will call all of the functions needed for interacting with the API. Not everything is implemented right away; more will be added as the tutorial goes on. Create a new file src/classes/Youtube.js
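The class listing itself did not survive in the post. Here is a sketch of what src/classes/Youtube.js could look like, based on the methods Home.vue calls later (createBroadcast, getBroadcasts, streaming). The endpoint paths come from the YouTube Live Streaming API, but the request bodies and method names here are assumptions:

```javascript
// Sketch of src/classes/Youtube.js
const API_BASE = 'https://www.googleapis.com/youtube/v3'

class Youtube {
  constructor (token) {
    this.token = token
  }

  headers () {
    return {
      Authorization: `Bearer ${this.token}`,
      'Content-Type': 'application/json'
    }
  }

  // POST /liveBroadcasts: schedule a broadcast
  async createBroadcast (title = 'Screen Record Pro Stream') {
    const res = await fetch(`${API_BASE}/liveBroadcasts?part=snippet,contentDetails,status`, {
      method: 'POST',
      headers: this.headers(),
      body: JSON.stringify({
        snippet: { title, scheduledStartTime: new Date().toISOString() },
        status: { privacyStatus: 'public' }
      })
    })
    return res.json()
  }

  // GET /liveBroadcasts: list the user's existing broadcasts
  async getBroadcasts () {
    const res = await fetch(`${API_BASE}/liveBroadcasts?part=snippet&mine=true`, {
      headers: this.headers()
    })
    return res.json()
  }

  // POST /liveStreams: create the ingestion stream (the post later mentions MPEG-DASH)
  async createStream (title = 'Screen Record Pro Stream') {
    const res = await fetch(`${API_BASE}/liveStreams?part=snippet,cdn`, {
      method: 'POST',
      headers: this.headers(),
      body: JSON.stringify({
        snippet: { title },
        cdn: { frameRate: 'variable', ingestionType: 'dash', resolution: 'variable' }
      })
    })
    return res.json()
  }
}
// In the app this file would end with: export default Youtube
```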
All of these methods are from the Live and Broadcasts APIs. Now we will grab the token and init our class! To do this we will create a button that, when pressed, opens a new window, calls the Socialite endpoint, grabs the token, closes the window, and sets the class. First we will create a vuex file and add it to the application: open src/store/index.js
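The store listing was also lost from the post. Here is a sketch of the src/store/index.js definition, shown as the plain options object you would pass to `new Vuex.Store(...)` after `Vue.use(Vuex)`. The action names mirror what Home.vue dispatches later; wiring them to the Youtube class methods is an assumption:

```javascript
// Sketch of the Vuex store options for src/store/index.js
const storeOptions = {
  state: {
    yt: null // the universal YouTube object
  },
  mutations: {
    SET_YOUTUBE (state, yt) {
      state.yt = yt
    }
  },
  actions: {
    // token is posted back from the OAuth popup window
    setYouTube ({ commit }, token) {
      // In the app this would be: commit('SET_YOUTUBE', new Youtube(token))
      commit('SET_YOUTUBE', { token })
    },
    streamToYouTube ({ state }) {
      return state.yt && state.yt.createStream && state.yt.createStream()
    },
    getBroadcasts ({ state }) {
      return state.yt && state.yt.getBroadcasts && state.yt.getBroadcasts()
    },
    createBroadcast ({ state }) {
      return state.yt && state.yt.createBroadcast && state.yt.createBroadcast()
    }
  },
  getters: {
    getYoutube: state => state.yt
  }
}
```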
We create a universal yt object in the state that represents our Youtube class and we will call the methods. Don’t forget to add the plugin
vue add vuex
Routing
The YouTube API use case requires us to provide a privacy policy, so we need to add vue-router and make some new components for the pages.
vue add router
Now create a new file src/router/index.js
import Vue from 'vue'
import VueRouter from 'vue-router'
import Home from '../views/Home.vue'
Vue.use(VueRouter)
const routes = [
{
path: '/',
name: 'Home',
component: Home
},
{
path: '/about',
name: 'About',
// route level code-splitting
// this generates a separate chunk (about.[hash].js) for this route
// which is lazy-loaded when the route is visited.
component: () => import(/* webpackChunkName: "about" */ '../views/About.vue')
},
{
path: '/privacy',
name: 'Privacy',
// route level code-splitting
// this generates a separate chunk (about.[hash].js) for this route
// which is lazy-loaded when the route is visited.
component: () => import(/* webpackChunkName: "about" */ '../views/Privacy.vue')
},
{
path: '/terms',
name: 'TOS',
// route level code-splitting
// this generates a separate chunk (about.[hash].js) for this route
// which is lazy-loaded when the route is visited.
component: () => import(/* webpackChunkName: "about" */ '../views/Terms.vue')
},
{
path: '/success',
name: 'Success',
// route level code-splitting
// this generates a separate chunk (about.[hash].js) for this route
// which is lazy-loaded when the route is visited.
component: () => import(/* webpackChunkName: "about" */ '../views/Success.vue')
}
]
const router = new VueRouter({
routes
})
export default router
The About, Terms, and Privacy pages are simply templates with text showing the various content needed; for the sake of brevity I won't show their contents, as there is no JavaScript in them. The Success page, however, is very important: it is responsible for grabbing the YouTube token from the Laravel callback. Let's explore src/views/Success.vue
<template>
<div class="Success">
<img alt="Screen Record Pro" src="../assets/logo.svg" class="animate-fade-slow object-contain h-80 w-full">
<h2 class="text-sm tracking-wide font-medium text-gray-500 uppercase">Youtube Connected!</h2>
<p class="text-base font-light leading-relaxed mt-0 mb-4 text-gray-800">
Thank you for authenticating with Screen Record Pro! This window will close automatically
</p>
</div>
</template>
<script>
import { mapActions, mapGetters } from 'vuex'
export default {
name: 'Success',
mounted () {
window.localStorage.setItem('youtube_key', this.$route.query.token)
window.opener.postMessage({youtube_token: this.$route.query.token}, '*')
window.close()
},
computed: {
...mapGetters(['getYoutube'])
},
methods : {
...mapActions(['setYouTube'])
}
}
</script>
Once the page mounts we use the localStorage API to set youtube_key to the token query parameter. This parameter is set when the redirect is called in the /callback/youtube API endpoint. This window is a popup, and we need to send a message to the window that opened it (make sense?). For this we use the window.opener.postMessage() function. We will listen for this message on the home screen and set the youtube object. Now that we have made our router and vuex object, we need to redo main.js and set up our Vue object with them. Open up main.js
Lastly we need to open the src/views/Home.vue file and edit our application. When it mounts, we set a listener for the message event and call the setYouTube action. If localStorage is already set then we don't show the button for connecting. If the user is connected, they click a button and it creates a live stream.
<template>
<div id="app">
<img alt="Screen Record Pro" src="../assets/logo.svg" class="animate-fade-slow object-contain h-80 w-full">
<h2 class="text-sm tracking-wide font-medium text-gray-500 uppercase">Free Online Screen Recorder</h2>
<p class="text-base font-light leading-relaxed mt-0 mb-4 text-gray-800">
Free online screen recorder by J Computer Solutions LLC that allows you to
record your screen including microphone audio and save the file to your desktop.
No download required, use this progressive web app in the browser!
J Computer Solutions LLC provides the #1 free online screen capture software! Due to current
browser limitations, this software can only be used on desktop. Please ensure you are on a Windows, MacOS or Linux
computer using Chrome, Firefox or Safari!
</p>
<h1 class="text-3xl font-large text-gray-500 uppercase">To Date We Have Processed: <strong class="animate-pulse text-3xl font-large text-red-500">{{bytes_processed}}</strong> bytes worth of video data!</h1>
<t-modal
header="Email Recording"
ref="modal"
>
<t-input v-model="sendEmail" placeholder="Email Address" name="send-email" />
<template v-slot:footer>
<div class="flex justify-between">
<t-button type="button" @click="$refs.modal.hide()">
Cancel
</t-button>
<t-button type="button" @click="emailFile">
Send File
</t-button>
</div>
</template>
</t-modal>
<div class="mt-5 mb-5">
<t-button v-on:click="connectToYoutube" v-if="!youtube_ready"> Connect To YouTube 📺</t-button>
</div>
<div class="mt-5 mb-5">
<t-button v-on:click="getStream" v-if="!isRecording" v-show="canRecord" class="ml-10"> Start Recording 🎥</t-button>
<div v-else>
<t-button v-on:click="createBroadcast(); streamToYouTube();" v-if="youtube_ready">Stream To Youtube 📺</t-button>
<t-button v-on:click="stopStream"> Stop Screen Recording ❌ </t-button>
</div>
<t-button v-on:click="download" v-if="fileReady" class="ml-10"> Download Recording 🎬</t-button>
<t-button v-on:click="$refs.modal.show()" autoPictureInPicture="true" v-if="fileReady" class="ml-10"> Email Recording 📧</t-button>
</div>
<div class="mt-5" v-show="fileReady">
<video class="center" height="500px" controls id="video" ></video>
</div>
<Adsense
data-ad-client="ca-pub-7023023584987784"
data-ad-slot="8876566362">
</Adsense>
<footer>
<cookie-law theme="base"></cookie-law>
</footer>
</div>
</template>
<script>
import CookieLaw from 'vue-cookie-law'
import { mapGetters, mapActions } from 'vuex'
export default {
name: 'Home',
components: { CookieLaw },
data() {
return {
youtube_ready: false,
canRecord: true,
isRecording: false,
options: {
audioBitsPerSecond: 128000,
videoBitsPerSecond: 2500000,
mimeType: 'video/webm; codecs=vp9'
},
displayOptions: {
video: {
cursor: "always"
},
audio: {
echoCancellation: true,
noiseSuppression: true,
sampleRate: 44100
}
},
mediaRecorder: {},
stream: {},
recordedChunks: [],
file: null,
fileReady: false,
sendEmail: '',
url: 'https://screen-recorder-micro.jcompsolu.com',
bytes_processed: 0,
}
},
methods: {
...mapActions(['setYouTube', 'streamToYouTube', 'getBroadcasts', 'createBroadcast']),
async connectToYoutube () {
window.open(`${this.url}/api/login/youtube`, "YouTube Login", 'width=800, height=600');
},
async emailFile () {
try {
const fd = new FormData();
fd.append('video', this.file)
fd.append('email', this.sendEmail)
await fetch(`${this.url}/api/email-file`, {
method: 'post',
body: fd
})
this.$gtag.event('email-file-data', {
'name': this.file.name,
'size': this.file.size,
'email': this.sendEmail
})
this.$refs.modal.hide()
this.showNotification()
} catch (err) {
alert(err.message)
}
},
async uploadFileData () {
try {
const fd = new FormData();
fd.append('video', this.file)
await fetch(`${this.url}/api/upload-file-data`, {
method: 'post',
body: fd
})
this.$gtag.event('upload-file-data', {
'name': this.file.name,
'size': this.file.size
})
} catch (e) {
this.$gtag.exception('application-error', e)
}
},
setFile (){
this.file = new Blob(this.recordedChunks, {
type: "video/webm; codecs=vp9"
});
this.$gtag.event('file-set', {
'event_category' : 'Files',
'event_label' : 'File Set'
})
const newObjectUrl = URL.createObjectURL( this.file );
const videoEl = document.getElementById('video')
// URLs created by `URL.createObjectURL` always use the `blob:` URI scheme: https://w3c.github.io/FileAPI/#dfn-createObjectURL
const oldObjectUrl = videoEl.src;
if( oldObjectUrl && oldObjectUrl.startsWith('blob:') ) {
// It is very important to revoke the previous ObjectURL to prevent memory leaks. Un-set the `src` first.
// See https://developer.mozilla.org/en-US/docs/Web/API/URL/createObjectURL
videoEl.src = ''; // <-- Un-set the src property *before* revoking the object URL.
URL.revokeObjectURL( oldObjectUrl );
}
// Then set the new URL:
videoEl.src = newObjectUrl;
// And load it:
videoEl.load();
this.$gtag.event('file-loaded', {
'event_category' : 'Files',
'event_label' : 'File Loaded'
})
videoEl.onloadedmetadata = () => {
this.uploadFileData()
this.getBytes()
}
videoEl.onplay = () => { // DOM event handler properties are lowercase; onPlay would never fire
this.$gtag.event('file-played', {
'event_category' : 'Files',
'event_label' : 'File Played'
})
}
this.fileReady = true
},
download: function(){
var url = URL.createObjectURL(this.file);
var a = document.createElement("a");
document.body.appendChild(a);
a.style = "display: none";
a.href = url;
var d = new Date();
var n = d.toUTCString();
a.download = n+".webm";
a.click();
window.URL.revokeObjectURL(url);
this.recordedChunks = []
this.showNotification()
this.$gtag.event('file-downloaded', {
'event_category' : 'Files',
'event_label' : 'File Downloaded'
})
},
showNotification: function() {
this.$gtag.event('notification-shown', {})
var img = '/logo.png';
var text = 'If you enjoyed this product consider donating!';
navigator.serviceWorker.getRegistration().then(function(reg) {
reg.showNotification('Screen Record Pro', { body: text, icon: img, requireInteraction: true,
actions: [
{action: 'donate', title: 'Donate',icon: 'logo.png'},
{action: 'close', title: 'Close',icon: 'logo.png'}
]
});
});
},
handleDataAvailable: function(event) {
if (event.data.size > 0) {
this.recordedChunks.push(event.data);
this.isRecording = false
this.setFile()
} else {
// ...
}
},
async registerPeriodicNewsCheck () {
const registration = await navigator.serviceWorker.ready;
try {
await registration.periodicSync.register('get-latest-stats', {
minInterval: 24 * 60 * 60 * 1000,
});
} catch (e) {
this.$gtag.exception('application-error', e)
}
},
stopStream: function() {
this.mediaRecorder.stop()
this.mediaRecorder = null
this.stream.getTracks()
.forEach(track => track.stop())
this.stream = null
this.$gtag.event('stream-stop', {
'event_category' : 'Streams',
'event_label' : 'Stream Stopped'
})
},
getStream: async function() {
try {
this.stream = await navigator.mediaDevices.getDisplayMedia(this.displayOptions);
this.stream.getVideoTracks()[0].onended = () => { // Click on browser UI stop sharing button
this.stream.getTracks()
.forEach(track => track.stop())
};
const audioStream = await navigator.mediaDevices.getUserMedia({audio: true}).catch(e => {throw e});
const audioTrack = audioStream.getAudioTracks();
// add audio track
this.stream.addTrack(audioTrack[0])
this.mediaRecorder = new MediaRecorder(this.stream)
this.mediaRecorder.ondataavailable = this.handleDataAvailable;
this.mediaRecorder.start();
this.isRecording = true
this.$gtag.event('stream-start', {
'event_category' : 'Streams',
'event_label' : 'Stream Started'
})
} catch(e) {
this.isRecording = false
this.$gtag.exception('application-error', e)
}
},
async getBytes () {
const result = await fetch(`${this.url}/api/get-stats`)
this.bytes_processed = await result.json()
},
skipDownloadUseCache () {
this.bytes_processed = localStorage.bytes_processed
}
},
mounted() {
const ctx = this
window.addEventListener("message", function (e) {
if (typeof e.data.youtube_token !== 'undefined') {
console.log(e.data.youtube_token)
ctx.setYouTube(e.data.youtube_token)
ctx.youtube_ready = true
}
})
this.$gtag.pageview("/");
const ua = navigator.userAgent;
if (/(tablet|ipad|playbook|silk)|(android(?!.*mobi))/i.test(ua) || /Mobile|Android|iP(hone|od)|IEMobile|BlackBerry|Kindle|Silk-Accelerated|(hpw|web)OS|Opera M(obi|ini)/.test(ua)) {
alert('You must be on desktop to use this application!')
this.canRecord = false
this.$gtag.exception('mobile-device-attempt', {})
}
let that = this
if (Notification.permission !== 'denied') { // 'default' is already covered by this check
try {
Notification.requestPermission().then(function(result) {
that.$gtag.event('accepted-notifications', {
'event_category' : 'Notifications',
'event_label' : 'Notification accepted'
})
console.log(result)
});
} catch (error) {
// Safari doesn't return a promise for requestPermissions and it
// throws a TypeError. It takes a callback as the first argument
// instead.
if (error instanceof TypeError) {
Notification.requestPermission((result) => {
that.$gtag.event('accepted-notifications', {
'event_category' : 'Notifications',
'event_label' : 'Notification accepted'
})
console.log(result)
});
} else {
this.$gtag.exception('notification-error', error)
throw error;
}
}
}
},
computed: {
...mapGetters(['getYoutube'])
},
async created () {
try {
if(localStorage.youtube_key != null) {
this.setYouTube(localStorage.youtube_key)
console.log(this.getBroadcasts())
this.youtube_ready = true
}
const registration = await navigator.serviceWorker.ready
const tags = await registration.periodicSync.getTags()
navigator.serviceWorker.addEventListener('message', event => {
this.bytes_processed = event.data
});
if (tags.includes('get-latest-stats')) {
// this.skipDownloadUseCache()
} else {
this.getBytes()
}
} catch (e) {
this.$gtag.exception('application-error', e)
this.getBytes()
}
}
}
</script>
<style>
#app {
font-family: Avenir, Helvetica, Arial, sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
text-align: center;
color: #2c3e50;
margin-top: 60px;
}
:picture-in-picture {
box-shadow: 0 0 0 5px red;
height: 500px;
width: 500px;
}
</style>
OAuth Screen
Now we can create a live stream!
We created the stream but now we need to send our packets via MPEG-DASH! In the next series, we create the dash service and send our packets to Youtube for ingestion! Be sure to like and share this article and subscribe to my Youtube channel! Also, be sure to check out the source code for the API and the PWA! Lastly, join the discord and connect with software engineers and entrepreneurs alike!