
SpeechKit: A Javascript Package For The Web Speech API (Speech Synthesis & Speech Recognition)

Speech Recognition & Speech Synthesis In The Browser With Web Speech API

Voice apps are now first-class citizens on the web thanks to the Speech Recognition and Speech Synthesis interfaces, which are part of the bigger Web Speech API. The following overview is taken from the MDN docs:

The Web Speech API makes web apps able to handle voice data. There are two components to this API:

  • Speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize voice context from an audio input (normally via the device’s default speech recognition service) and respond appropriately. Generally you’ll use the interface’s constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device’s microphone. The SpeechGrammar interface represents a container for a particular set of grammar that your app should recognize. Grammar is defined using JSpeech Grammar Format (JSGF).
  • Speech synthesis is accessed via the SpeechSynthesis interface, a text-to-speech component that allows programs to read out their text content (normally via the device’s default speech synthesizer). Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method.
Brief on Web Speech API from MDN

So basically, with the Web Speech API you can work with voice data. You can make your apps speak to their users, and you can run commands based on what your user speaks. This opens up a host of opportunities for voice-activated CLIENT-SIDE apps. I love building open-source software, so I decided to create an NPM package to work with the Web Speech API called SpeechKit, and I couldn’t wait to share it with you! I suppose this is a continuation of Creating A Voice Powered Note App Using Web Speech.
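
Both halves of the API are available as standard browser globals (Chrome exposes recognition under the webkitSpeechRecognition prefix), so a minimal framework-free sketch looks like this:

// Speech synthesis: speak a string with the default voice.
const utterance = new SpeechSynthesisUtterance('Hello from the Web Speech API')
window.speechSynthesis.speak(utterance)

// Speech recognition: log transcripts as they arrive.
const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition
const recognition = new Recognition()
recognition.continuous = true
recognition.onresult = (event) => {
  const latest = event.results[event.results.length - 1]
  console.log(latest[0].transcript)
}
recognition.start()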

Simplifying The Process With SpeechKit

I decided that starting this year I would contribute more to the open-source community and provide packages (primarily Javascript, PHP, and Rust) for the world to use. I use the Web Speech API a lot in my personal projects, so why not make it an NPM package? You can find the source code here.

Features

  • Speak commands
  • Listen for voice commands
  • Add your own grammar
  • Transcribe words and output as a file
  • Generate SSML from text

Install

npm install @mastashake08/speech-kit

Import

import SpeechKit from '@mastashake08/speech-kit'

Instantiate A New Instance

new SpeechKit(options)

listen()

Start listening for speech recognition.

stopListen()

Stop listening for speech recognition.

speak(text)

Use Speech Synthesis to speak text.

Param Type Description
text string Text to be spoken

getResultList() ⇒ SpeechRecognitionResultList

Get the current SpeechRecognition result list.

Returns: SpeechRecognitionResultList – List of Speech Recognition results

getText() ⇒ string

Return text

Returns: string – resultList as text string

getTextAsFile() ⇒ Blob

Return text file with results.

Returns: Blob – transcript

getTextAsJson() ⇒ object

Return text as JSON.

Returns: object – transcript

addGrammarFromUri()

Add grammar to the SpeechGrammarList from a URI.

Params: string uri – URI that contains grammar

addGrammarFromString()

Add grammar to the SpeechGrammarList from a Grammar String.

Params: string grammar – String containing grammar

getGrammarList() ⇒ SpeechGrammarList

Return current SpeechGrammarList.

Returns: SpeechGrammarList – current SpeechGrammarList object

getRecognition() ⇒ SpeechRecognition

Return the current SpeechRecognition object.

Returns: SpeechRecognition – current SpeechRecognition object

getSynth() ⇒ SpeechSynthesis

Return the current Speech Synthesis object.

Returns: SpeechSynthesis – current instance of Speech Synthesis object

getVoices() ⇒ Array<SpeechSynthesisVoice>

Return the current voices available to the user.

Returns: Array<SpeechSynthesisVoice> – Array of available Speech Synthesis Voices

setSpeechText()

Set the SpeechSynthesisUtterance object with the text that is meant to be spoken.

Params: string text – Text to be spoken

setSpeechVoice()

Set the SpeechSynthesisVoice object with the desired voice.

Params: SpeechSynthesisVoice voice – Voice to be spoken

getCurrentVoice() ⇒ SpeechSynthesisVoice

Return the current voice being used in the utterance.

Returns: SpeechSynthesisVoice – current voice
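
Putting the reference together, here is a minimal usage sketch based on the methods above and the demo below (the onspeechkitresult event and its detail.transcript payload come from the demo code):

import SpeechKit from '@mastashake08/speech-kit'

const sk = new SpeechKit({ rate: 0.85 })

// SpeechKit dispatches transcripts on the document as they become available.
document.addEventListener('onspeechkitresult', (e) => {
  console.log(e.detail.transcript)
})

sk.listen()                 // start speech recognition
sk.stopListen()             // stop speech recognition
sk.speak('Hello, world!')   // text-to-speech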

Example Application

In this example Vue.js application there will be a text box with three buttons underneath. When the user clicks the listen button, SpeechKit will start listening to the user. As speech is detected, the text will appear in the text box. The first button under the text box will tell the browser to share the page, the second will speak the text in the text box, and the third will control recording.

Home page from the github.io page

I created this in Vue.js and (for the sake of time and laziness) reused all of the default components, rewriting only the HelloWorld component. So let’s get started by creating a new Vue application.

Creating The Application

Open up your terminal and input the following command to create a new Vue application:

vue create speech-kit-demo

It doesn’t really matter what settings you choose. After you get that squared away, it is time to add our dependency.

Installing SpeechKit

Still inside your terminal we will add the SpeechKit dependency to our package.json file with the following command:

npm install @mastashake08/speech-kit

Now with that out of the way we can begin creating our component functionality.

Editing HelloWorld.vue

Open up your HelloWorld.vue file in your components/ folder and change it to look like this:

<template>
  <div class="hello">
    <h1>{{ msg }}</h1>
    <p>
      Simple demo to demonstrate the Web Speech API using the
      <a href="https://github.com/@mastashake08/speech-kit" target="_blank" rel="noopener">SpeechKit npm package</a>!
    </p>
    <textarea v-model="voiceText"/>
    <ul>
      <button @click="share" >Share</button>
      <button @click="speak">Speak</button>
      <button @click="listen" v-if="!isListen">Listen</button>
      <button @click="stopListen" v-else>Stop Listen</button>
    </ul>
  </div>
</template>

<script>
import SpeechKit from '@mastashake08/speech-kit'
export default {
  name: 'HelloWorld',
  props: {
    msg: String
  },
  mounted () {
    this.sk = new SpeechKit({rate: 0.85})
    document.addEventListener('onspeechkitresult', (e) => this.getText(e))
  },
  data () {
    return {
      voiceText: 'SPEAK ME',
      sk: {},
      isListen: false
    }
  },
  methods: {
    share () {
      const text = `Check out the SpeechKit Demo and speak this text! ${this.voiceText} ${document.URL}`
      try {
        if (!navigator.canShare) {
          this.clipBoard(text)
        } else {
          navigator.share({
            text: text,
            url: document.URL
          })
        }
      } catch (e) {
        this.clipBoard(text)
      }
    },
    async clipBoard (text) {
      const type = "text/plain";
      const blob = new Blob([text], { type });

      const data = [new window.ClipboardItem({ [type]: blob })];
      await navigator.clipboard.write(data)
      alert ('Text copied to clipboard')
    },
    speak () {
      this.sk.speak(this.voiceText)
    },
    listen () {
      this.sk.listen()
      this.isListen = !this.isListen
    },
    stopListen () {
      this.sk.stopListen()
      this.isListen = !this.isListen
    },
    getText (evt) {
      this.voiceText = evt.detail.transcript
    }
  }
}
</script>

<!-- Add "scoped" attribute to limit CSS to this component only -->
<style scoped>
h3 {
  margin: 40px 0 0;
}
ul {
  list-style-type: none;
  padding: 0;
}
li {
  display: inline-block;
  margin: 0 10px;
}
a {
  color: #42b983;
}
</style>

As you can see, almost all of the functionality is offloaded to the SpeechKit library. You can see a live version of this at https://mastashake08.github.io/speech-kit-demo/. In the mounted() method we initialize our SpeechKit instance and add an event listener on the document for the onspeechkitresult event, which SpeechKit dispatches every time there is an available transcript from speech recognition. The listen() and stopListen() methods simply call the corresponding SpeechKit functions and toggle a boolean indicating recording is in progress. Finally, the share() function uses the Web Share API to share the URL if available; otherwise it falls back to the Clipboard API, copying the text to the user’s clipboard for manual sharing.
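
If you scaffolded the project with the Vue CLI as shown above, you can preview the demo locally with the standard dev-server command:

npm run serve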

Want To See More Tutorials?

Join my newsletter and get weekly updates from my blog delivered straight to your inbox.

Check The Shop!

Consider purchasing an item from the #CodeLife shop, all proceeds go towards our coding initiatives.

Open-source work is free to use but it is not free to develop. If you enjoy my content and would like to see more, please consider becoming a sponsor on Github! Not only do you support me but you are funding tech programs for at-risk youth in Louisville, Kentucky.


Upgrading The Discord Twitter Bot To Use V2 API


Twitter Upgraded Their API & Broke My Bot!!

Imagine my frustration when I got dozens of DMs, emails, and other messages asking when I was going to upgrade my Discord Twitter bot to be compliant with the latest Twitter changes. Like damn bro, I have other things to do lol, but alas, I can’t let my peeps down. In this blog entry, I will show you what I did to upgrade my codebase to use the Twitter V2 API to communicate with the Discord server and send out my tweets.

v2 of the Discord Twitter bot

Upgrading The Package.json File

We are no longer using the Twit npm package and instead using the twitter-v2 npm package. Open your package.json file and change it to the following:

{
  "name": "discord-twitter-bot",
  "version": "1.0.0",
  "description": "A discord bot that sends messages to a channel whenever a specific user tweets.",
  "main": "main.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/mastashake08/discord-twitter-bot.git"
  },
  "keywords": [
    "discord",
    "twitter",
    "bot"
  ],
  "author": "mastashake08",
  "license": "ISC",
  "bugs": {
    "url": "https://github.com/mastashake08/discord-twitter-bot/issues"
  },
  "homepage": "https://github.com/mastashake08/discord-twitter-bot#readme",
  "dependencies": {
    "discord.js": "^13.8.1",
    "dotenv": "^8.2.0",
    "twitter-v2": "^1.1.0"
  },
  "engines" : {
    "npm" : ">=7.0.0",
    "node" : ">=16.0.0"
  }
}

Changes To The Twitter API

In order to use the stream API, we have to set up stream rules. We only want to show tweets from your own account, so add a new field to your .env file:

TWITTER_USER_NAME=
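
The script below also reads your Twitter bearer token and the Discord credentials from .env, so alongside TWITTER_USER_NAME the full file contains:

BEARER_TOKEN=
DISCORD_TOKEN=
DISCORD_CHANNEL_ID=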

Afterward, we listen to the stream pretty much as before. Open up the main.js file and update it to the following:

require('dotenv').config()
const Twit = require('twitter-v2')
const { Client } = require('discord.js');
const client = new Client({ intents: 2048 });


// The v2 endpoints used here only need the app's bearer token.
var T = new Twit({
  bearer_token: process.env.BEARER_TOKEN
})

// Send the tweet's URL to the configured Discord channel.
async function sendMessage (tweet, client){
  const url = "https://twitter.com/user/status/" + tweet.id;
  try {
    const channel = await client.channels.fetch(process.env.DISCORD_CHANNEL_ID)
    channel.send(url)
  } catch (error) {
        console.error(error);
  }
}

async function listenForever(streamFactory, dataConsumer) {
  try {
    for await (const { data } of streamFactory()) {
      dataConsumer(data);
    }
    // The stream has been closed by Twitter. It is usually safe to reconnect.
    console.log('Stream disconnected healthily. Reconnecting.');
    listenForever(streamFactory, dataConsumer);
  } catch (error) {
    // An error occurred so we reconnect to the stream. Note that we should
    // probably have retry logic here to prevent reconnection after a number of
    // closely timed failures (may indicate a problem that is not downstream).
    console.warn('Stream disconnected with error. Retrying.', error);
    listenForever(streamFactory, dataConsumer);
  }
}

async function setup () {
  const endpointParameters = {
      'tweet.fields': [ 'author_id', 'conversation_id' ],
      'expansions': [ 'author_id', 'referenced_tweets.id' ],
      'media.fields': [ 'url' ]
  }
  try {
    console.log('Setting up Twitter....')
    const body = {
      "add": [
        {"value": "from:"+ process.env.TWITTER_USER_NAME, "tag": "from Me!!"}
      ]
    }
    const r = await T.post("tweets/search/stream/rules", body);
    console.log(r);

  } catch (err) {
    console.log(err)
  }

  listenForever(
    () => T.stream('tweets/search/stream', endpointParameters),
    (data) => sendMessage(data, client)
  );
}
client.login(process.env.DISCORD_TOKEN)
client.on('ready', () => {
  console.log('Discord ready')
  setup()
})

Congrats, It’s Updated!

That’s pretty much all we had to do to update everything to use the new API. The added benefit is that it won’t show retweets in your Discord server like before :0 If you enjoyed this, consider becoming a patron on Patreon and help fund in-person coding classes for kids in Louisville, KY!


Adding Google Drive Functionality To Screen Recorder Pro

You All Requested Google Drive Functionality!

In my last YouTube video, I was asked to implement Google Drive upload functionality for saving screen recordings. I thought this was a marvelous idea and immediately got to work! We already added OAuth login via Google and Laravel in the last tutorial to interact with the YouTube Data API v3, so with a few simple backend tweaks, we can add Google Drive as well!

Steps To Accomplish

The functionality I want to add is just uploading to Google Drive, with no editing or listing. Keep things simple! This is going to require the following steps:

  • Adding Google Drive scopes to Laravel Socialite
  • Creating a function to upload the file to the Google API endpoint

Pretty easy if I do say so myself. Let’s get started with the backend.

Adding Google Drive Scopes To Laravel Socialite

We already added scopes for YouTube in the last tutorial, so thankfully not a whole lot of work is needed to add the Google Drive scopes. Open up your routes/api.php file and update the scopes array to include the new scopes needed to interact with Google Drive:

Route::get('/login/youtube', function (Request $request) {
  return Socialite::driver('youtube')->scopes(['https://www.googleapis.com/auth/youtube', 'https://www.googleapis.com/auth/youtube.upload', 'https://www.googleapis.com/auth/youtube.readonly', 'https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/drive.metadata', 'https://www.googleapis.com/auth/drive.metadata.readonly'])->stateless()->redirect();
});

Make sure you enable the API in the Google Cloud Console! Now we head over to the frontend Vue application and add our markup and functions.

Open up Home.vue and add a button to our list of actions for uploading to Google Drive:

<t-button v-on:click="uploadToDrive" v-if="uploadReady" class="ml-10">Upload To Drive 🗄️</t-button>
    

In the methods, add a function called uploadToDrive() and put the following inside:

  async uploadToDrive () {
      let metadata = {
          'name': 'Screen Recorder Pro - ' + new Date(), // Filename at Google Drive
          'mimeType': 'video/webm', // MIME type of the recording at Google Drive
      }
      let form = new FormData();
      form.append('metadata', new Blob([JSON.stringify(metadata)], {type: 'application/json'}));
      form.append('file', this.file);
      await fetch('https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart', {
        method: 'POST',
        mode: 'cors',
        cache: 'no-cache',
        headers: {
          // The browser sets Content-Type and Content-Length for FormData bodies.
          Authorization: `Bearer ${this.yt_token}`
        },
        body: form
      })
      alert('Video uploaded to Google Drive!')
    }

Inside this function we create an HTTP POST request to the Google Drive endpoint for uploading files. We pass a FormData object that contains some metadata about the object and the actual file itself. After the file is uploaded the user is alerted that their video is stored!

Screen Recorder Pro Google Drive upload confirmation

What’s Next?

Next, we will add cloud storage with Amazon S3 that you will be able to share via the Web Share API! Finally, we will add monetization and this project will be wrapped up! If you enjoyed this, please give the app a try at https://recorder.jcompsolu.com


Create A WebRTC Google Meet Clone In Vue.js Pt. 1

Google Meet Clone Written In Vue

In this tutorial series, we will be building a WebRTC Google Meet clone using Vue.js. All of the source code is free and available on Github. If you found this tutorial to be helpful and want to help keep this site free for others, consider becoming a patron! The application will allow you to join a room by ID. Anyone who joins the room at that ID will instantly join the call. In this first iteration, we can share voice, video, and screens!

Setting Up The Vue Application

Let’s go ahead and create the Vue application and add our WebRTC dependency, vue-webrtc. This dependency adds all of the functionality we need in a simple web component!

vue create google-meet-clone; cd google-meet-clone; npm install --save vue-webrtc

All of the functionality is built in the App.vue page (for now). Let’s open it up and add the following:

<template>
  <div id="app">
    <img alt="Vue logo" src="./assets/logo.png">
    <vue-webrtc width="100%" :roomId="roomId" ref="webrtc" v-on:share-started="shareStarted"  v-on:share-stopped="leftRoom" v-on:left-room="leftRoom" v-on:joined-room="joinedRoom"/>
    <input v-model="roomId" placeholder="Enter room ID"/>
    <button @click="toggleRoom">{{hasJoined ? 'Leave Room' : 'Join Room'}}</button>
    <button @click="screenShare" v-if="hasJoined">Screen Share</button>
  </div>
</template>

<script>
export default {
  name: 'App',
  data () {
    return {
      roomId: 'roomId',
      hasJoined: false,
      userStream: null
    }
  },
  mounted () {},
  methods: {
    async toggleRoom () {
      try {
        if(this.hasJoined) {
          this.$refs.webrtc.leave()
          this.hasJoined = false
        } else {
          await this.$refs.webrtc.join()
          this.userStream = this.$refs.webrtc.videoList[0].stream
          this.hasJoined = true
        }
      } catch (e) {
        console.log(e)
      }

    },
    screenShare () {
      this.$refs.webrtc.shareScreen()
    },
    joinedRoom (streamId) {
      console.log(streamId)
    },
    shareStarted (streamId) {
      console.log(streamId)
    },
    leftRoom (streamId) {
      console.log(streamId)
    }
  }
}
</script>

<style>
#app {
  font-family: Avenir, Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-align: center;
  color: #2c3e50;
  margin-top: 60px;
}
</style>

The screen has a text field for entering the roomId, which the vue-webrtc component uses to connect to a room. We listen for some events, which we will do more with in later tutorials. For now there are two buttons: one for joining/leaving the room and one for sharing your screen. That is it! The package handles everything else, and you can test it out here. In the next part of the series we will implement recording functionality so everyone can download the meetings! If you enjoyed this, please like and share this blog and subscribe to my YouTube page! In the meantime, check out my screen recorder app tutorial!


Creating A Screen Recorder and Email Microservice With Vue.js + MediaRecorder API and Laravel PHP Framework

Recording Your Screen With Vue.js and MediaRecorder API

Last year I wrote a screen recording progressive web app with Vue.js and the MediaRecorder API. This was a simple app that allowed you to record your current screen; after screen sharing, a file would be created with the File API and downloaded to your system. Well, I decided to update it this week and add email functionality. The reason? I needed to send a screen recording to a client and figured I might as well add the functionality to the app and save time, as opposed to downloading the file, then opening Gmail, then sending the email. Here is a video for the first part.

Screen recorder part 1

Adding The Email Service

Obviously, you all know I love Laravel! I decided to create a Laravel 8 API microservice with a single post route that takes the video file and email address and sends a notification to said email address. I then had to edit the Vue application to make a network call to the microservice when the user wants to email the file.

Screen recorder part 2

Getting To The Code!

Let’s start off with the Vue.js application. Create a new application in your terminal

vue create screen-recorder

The first thing we are going to do is add our dependencies: vue-tailwind for ease of working with Tailwind CSS, vue-gtag for working with Google Analytics (I like to know where my users are coming from), vue-google-adsense (a brother gotta eat), and vue-script2.

cd screen-recorder; npm install --save vue-tailwind vue-script2 vue-gtag vue-google-adsense

After installing the dependencies, head over to main.js and let’s set up the application:

import Vue from 'vue'
import App from './App.vue'
import VueTailwind from 'vue-tailwind'
import Ads from 'vue-google-adsense'
import VueGtag from "vue-gtag";
import "tailwindcss/tailwind.css"
Vue.use(VueGtag, {
  config: { id: "your google analytics id" }
});

Vue.use(require('vue-script2'))

Vue.use(Ads.Adsense)
const settings = {
  TInput: {
    classes: 'form-input border-2 text-gray-700',
    variants: {
      error: 'form-input border-2 border-red-300 bg-red-100',
      // ... Infinite variants
    }
  },
TButton: {
    classes: 'rounded-lg border block inline-flex items-center justify-center block px-4 py-2 transition duration-100 ease-in-out focus:border-blue-500 focus:ring-2 focus:ring-blue-500 focus:outline-none focus:ring-opacity-50 disabled:opacity-50 disabled:cursor-not-allowed',
    variants: {
      secondary: 'rounded-lg border block inline-flex items-center justify-center bg-purple-500 border-purple-500 hover:bg-purple-600 hover:border-purple-600',
    }
  },
  TAlert: {
    classes: {
      wrapper: 'rounded bg-blue-100 p-4 flex text-sm border-l-4 border-blue-500',
      body: 'flex-grow text-blue-700',
      close: 'text-blue-700 hover:text-blue-500 hover:bg-blue-200 ml-4 rounded',
      closeIcon: 'h-5 w-5 fill-current'
    },
    variants: {
      danger: {
        wrapper: 'rounded bg-red-100 p-4 flex text-sm border-l-4 border-red-500',
        body: 'flex-grow text-red-700',
        close: 'text-red-700 hover:text-red-500 hover:bg-red-200 ml-4 rounded'
      },
      // ... Infinite variants
    }
  },
  // ... The rest of the components
}

Vue.use(VueTailwind, settings)
Vue.config.productionTip = false

new Vue({
  render: h => h(App),
}).$mount('#app')

This file basically bootstraps the application with all the Google stuff and the Tailwind CSS packaging. Now let’s open up App.vue and replace its contents with the following:

<template>
  <div id="app">
    <img alt="J Computer Solutions Logo" src="./assets/logo.png" class="object-contain h-48 w-full">
    <p>
    Record your screen and save the file as a video.
    Perfect for screen recording for clients. Completely client side app and is installable as a PWA!
    </p>
    <p>
    Currently full system audio is only available in Windows and Chrome OS.
    In Linux and MacOS only chrome tabs are shared.
    </p>
    <t-modal
      header="Email Recording"
      ref="modal"
    >
  <t-input v-model="sendEmail" placeholder="Email Address" name="send-email" />
  <template v-slot:footer>
    <div class="flex justify-between">
      <t-button type="button" @click="$refs.modal.hide()">
        Cancel
      </t-button>
      <t-button type="button" @click="emailFile">
        Send File
      </t-button>
    </div>
  </template>
</t-modal>
<div class="mt-5">
    <t-button v-on:click="getStream" v-if="!isRecording"> Start Recording 🎥</t-button>
    <t-button v-on:click="stopStream" v-else> Stop Screen Recording ❌ </t-button>
    <t-button v-on:click="download" v-if="fileReady" class="ml-10"> Download Recording 🎬</t-button>
    <t-button  v-on:click="$refs.modal.show()" v-if="fileReady" class="ml-10"> Email Recording 📧</t-button>
</div>
    <br>
    <Adsense
      data-ad-client="ca-pub-xxxxxxxxxx"
      data-ad-slot="xxxxxxx">
    </Adsense>
  </div>
</template>

<script>

export default {
  name: 'App',
  data() {
    return {
      isRecording: false,
      options: {
        audioBitsPerSecond: 128000,
        videoBitsPerSecond: 2500000,
        mimeType: 'video/webm'
      },
      displayOptions: {
      video: {
        cursor: "always"
      },
      audio: {
          echoCancellation: true,
          noiseSuppression: true,
          sampleRate: 44100
        }
      },
      mediaRecorder: {},
      stream: {},
      recordedChunks: [],
      file: null,
      fileReady: false,
      sendEmail: '',
      url: 'https://screen-recorder-micro.jcompsolu.com'
    }
  },
  methods: {
    async emailFile () {
      try {
        const fd = new FormData();
        fd.append('video', this.file)
        fd.append('email', this.sendEmail)
        await fetch(`${this.url}/api/email-file`, {
          method: 'post',
          body: fd
        })
      this.$refs.modal.hide()
      this.showNotification()
      } catch (err) {
        alert(err.message)
      }
    },
    setFile (){
      this.file = new Blob(this.recordedChunks, {
        type: "video/webm"
      });
      this.fileReady = true
    },
    download: function(){
      this.$gtag.event('download-stream', {})

      // Create a temporary link to download the recording as a .webm file.
      var url = URL.createObjectURL(this.file);
      var a = document.createElement("a");
      document.body.appendChild(a);
      a.style = "display: none";
      a.href = url;
      var d = new Date();
      var n = d.toUTCString();
      a.download = n + ".webm";
      a.click();
      window.URL.revokeObjectURL(url);
      this.recordedChunks = []
      this.showNotification()
    },
    showNotification: function() {
      var img = '/logo.png';
      var text = 'If you enjoyed this product consider donating!';
      navigator.serviceWorker.getRegistration().then(function(reg) {
        reg.showNotification('Screen Recorder', { body: text, icon: img, requireInteraction: true,
        actions: [
            {action: 'donate', title: 'Donate',icon: 'logo.png'},
            {action: 'close', title: 'Close',icon: 'logo.png'}
            ]
              });
      });
    },
    handleDataAvailable: function(event) {
      if (event.data.size > 0) {
        this.recordedChunks.push(event.data);
        this.isRecording = false
        this.setFile()
      } else {
        // ...
      }
    },
    stopStream: function() {
      this.$gtag.event('stream-stop', {})
      this.mediaRecorder.stop()
      this.mediaRecorder = null
      this.stream.getTracks()
      .forEach(track => track.stop())

    },
    getStream: async function() {
    try {
        this.stream =  await navigator.mediaDevices.getDisplayMedia(this.displayOptions);
        this.mediaRecorder = new MediaRecorder(this.stream, this.options);
        this.mediaRecorder.ondataavailable = this.handleDataAvailable;
        this.mediaRecorder.start();
        this.isRecording = true
        this.$gtag.event('stream-start', {})
      } catch(err) {
        this.isRecording = false
        this.$gtag.event('stream-stop', {})
        alert(err);
      }
    }
  },
  mounted() {

    let that = this
    Notification.requestPermission().then(function(result) {
      that.$gtag.event('accepted-notifications', { result: result })
    });
  }
}
</script>

<style>
#app {
  font-family: Avenir, Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-align: center;
  color: #2c3e50;
  margin-top: 60px;
}
</style>

Laravel API

Start off by creating a new Laravel application. My setup uses Docker and macOS:

curl -s "https://laravel.build/screen-recorder-api" | bash

The first thing we want to do is create our File model and migration. The File model will hold the name, mime_type and size of the file along with the email where the file is to be sent. Note! We are NOT storing the file, simply passing it through to the email.

cd screen-recorder-api; ./vendor/bin/sail up -d; ./vendor/bin/sail artisan make:model -m File

Open up the app/Models/File.php file and replace the contents with the following:

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Notifications\Notifiable;
class File extends Model
{
    use HasFactory, Notifiable;
    public $guarded = [];
}

Now open up the migration file and edit it to be the following:

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class CreateFilesTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('files', function (Blueprint $table) {
            $table->id();
            $table->string('name');
            $table->string('email');
            $table->string('size');
            $table->string('mime_type');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('files');
    }
}

Now let’s create a new notification called SendFile. This notification will send an email to the user with the file attached. Let’s create the notification and fill out the contents!

./vendor/bin/sail artisan make:notification SendFile

<?php

namespace App\Notifications;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Notifications\Messages\MailMessage;
use Illuminate\Notifications\Notification;

class SendFile extends Notification
{
    use Queueable;
    public $file;
    /**
     * Create a new notification instance.
     *
     * @return void
     */
    public function __construct($file)
    {
        //
        $this->file = $file;
    }

    /**
     * Get the notification's delivery channels.
     *
     * @param  mixed  $notifiable
     * @return array
     */
    public function via($notifiable)
    {
        return ['mail'];
    }

    /**
     * Get the mail representation of the notification.
     *
     * @param  mixed  $notifiable
     * @return \Illuminate\Notifications\Messages\MailMessage
     */
    public function toMail($notifiable)
    {
        return (new MailMessage)
                    ->line('Your Screen Recording')
                    ->line('Thank you for using our application!')
                    ->attach($this->file, ['as' => 'jcompsolu-screen-record.webm', 'mime' => 'video/webm']);
    }

    /**
     * Get the array representation of the notification.
     *
     * @param  mixed  $notifiable
     * @return array
     */
    public function toArray($notifiable)
    {
        return [
            //
        ];
    }
}

You will notice we set the file in the constructor, then attach it using the attach() method on the MailMessage object. Now that that is done, let’s create the API route and send our notification! Open up routes/api.php and edit it like so:

<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
use App\Models\File;
use App\Notifications\SendFile;
/*
|--------------------------------------------------------------------------
| API Routes
|--------------------------------------------------------------------------
|
| Here is where you can register API routes for your application. These
| routes are loaded by the RouteServiceProvider within a group which
| is assigned the "api" middleware group. Enjoy building your API!
|
*/

Route::middleware('auth:sanctum')->get('/user', function (Request $request) {
    return $request->user();
});

Route::post('/email-file', function (Request $request) {
  $uploadedFile = $request->video;
  $file = File::create([
    'name' => $uploadedFile->getClientOriginalName(),
    'mime_type' => $uploadedFile->getClientMimeType(),
    'size' => $uploadedFile->getSize(),
    'email' => $request->email
  ]);
  $file->notify(new SendFile($uploadedFile));
  return response()->json($file);
});

When you upload a file in Laravel it is an instance of the UploadedFile class, which has several file-related methods associated with it! Using these methods we can get the name, size, and MIME type of the uploaded file! After creating the model and saving it in the database, we send a notification with the uploaded file! Test it yourself here!

Conclusion

The vast majority of the apps I create and monetize start off as apps that I use myself to make my life or work easier! This is the basis of #CodeLife and is the reason I was able to retire early for a few years. If this tutorial helped you, please consider subscribing to my Youtube channel and subscribing to the blog, and leave a comment if you want me to add new functionality!


Creating A Twitter Follow Bot With Node and Twit.js

Automate Your Following With A Twitter Follow Bot

Anyone who follows me on Twitter (if you don’t @mastashake08) knows that I’m pretty active. Currently I’m on my way to 10K followers but sometimes my TL looks kinda dry. One of the best things about Twitter is that I learn alot of new information from the people I follow. Yet I don’t have the time to actively look for new people. Time for a #CodeLife trick.

Filtering Statuses With Twit

If you read my article on Creating a Discord Twitter Bot then a lot of this code will look familiar to you. I’m going to stream a list of filtered statuses that use the hashtags #BlackTechTwitter and #CodeLife, and automatically follow whoever sends those tweets. So let’s begin by creating a new directory and adding our dependencies:

mkdir follow-bot
cd follow-bot && npm install dotenv twit
touch search-follow.js

This will create a new directory, cd into it, and install the dotenv and twit dependencies. Dotenv allows us to use a .env file to safely hold the secret values for our Twitter credentials, and twit is the Twitter Javascript library. Lastly, we created an empty js file that will hold our code. Open it up and input the following:

require('dotenv').config()
const Twit = require('twit')


var T = new Twit({
  consumer_key:         process.env.TWITTER_CONSUMER_KEY,
  consumer_secret:      process.env.TWITTER_CONSUMER_SECRET,
  access_token:         process.env.TWITTER_ACCESS_TOKEN,
  access_token_secret:  process.env.TWITTER_ACCESS_TOKEN_SECRET,
  timeout_ms:           60*1000,  // optional HTTP request timeout to apply to all requests.
  strictSSL:            true,     // optional - requires SSL certificates to be valid.
})
var stream = T.stream('statuses/filter', { track: ['#blacktechtwitter', '#codelife'], language: 'en' })

stream.on('tweet', function (tweet) {
  var user = tweet.user;
  // friendships/create follows the user; a try/catch will not catch a
  // rejected promise, so handle errors with .catch() instead.
  T.post('friendships/create', { screen_name: user.screen_name })
    .then(() => console.log('Followed ' + user.screen_name))
    .catch((error) => console.log(error))
})

In the first lines we require our dependencies, and then we initialize the Twit object with our Twitter API creds stored in .env.

Next we set a stream variable that will hold our filtered statuses tracking the list of hashtags.

Since the stream is event-driven, we listen for the ‘tweet’ event which holds our Tweet object. We grab the screen_name of the user who tweeted and make a request to the ‘friendships/create’ endpoint, which is what creates the follow!

See how simple that was! If you enjoyed this article consider becoming a patron to get exclusive content! Get the source code here.


WebTransport Is A Game Changer For Gaming

If You Haven’t Heard Of WebTransport

It is a new standard that provides bidirectional data transport over HTTP/3. In many cases, it can replace WebSocket and WebRTC with less overhead. It has two APIs that come with it: one for sending data unreliably with the Datagram API, and one for sending it reliably with the Stream API. In this article I will explain what WebTransport is and how it will affect web gaming! If you want access to my members-only article where I build an HTTP/3 server in Go and a WebTransport client, become a patron today.

Why Is WebTransport Such A Big Deal?

WebTransport is built on top of HTTP/3, which means it runs over QUIC. Without going into too much technical detail, this equates to lower overhead and faster, more reliable connections. It is also bidirectional, meaning you can read and write data to the server. The cool thing to me, though, is how you can send data reliably (with streams, similar to WebSockets) AND unreliably via datagrams!

Imagine you are making a multiplayer shooting game with 64 players. In coding terms, you need this data to come as fast as possible, right? At first glance you might think that a reliable data stream would be best. After all, if all the players are shooting one character, you want the damage to come in order, right? WRONG! Do that and you are at the mercy of the slowest network in the game. By sending datagrams you get best-effort delivery (let’s be honest, only ~1-5% of traffic is lost in these types of connections), so you won’t block the other connections.
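
As a concrete sketch of the browser-side Datagram API (run inside an async function, in a browser that supports WebTransport; the URL is a placeholder for your own HTTP/3 server):

const transport = new WebTransport('https://game.example.com:4433/play')
await transport.ready

// Unreliable, unordered delivery: ideal for frequent position updates.
const writer = transport.datagrams.writable.getWriter()
await writer.write(new TextEncoder().encode(JSON.stringify({ x: 10, y: 20 })))

// Datagrams from the server arrive on a readable stream.
const reader = transport.datagrams.readable.getReader()
const { value } = await reader.read()
console.log(new TextDecoder().decode(value))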

Since WebTransport is client-server, this is a no-brainer for real-time multiplayer web games! Low latency is a must!

What’s Next?

WebTransport is still in its early stages, but I will be following up with a YouTube video and a follow-up blog post where I will build a WebTransport server and client application! If you aren’t following me on Twitter, do so @mastashake08.


Are You A Front End Web Developer? Congrats, You Are Also An Android Developer.


Mobile Development and Web Development Are Merging

Progressive web apps (PWAs) are pretty mainstream now and most web developers are aware of their existence. The Android platform and the Google Play Store have been around for going on 13 years, and developers are aware of their existence. What many are unaware of is how much the two platforms have been merging in the background. Last year I had the most wonderful epiphany: if you are a front end web developer, then by default you can publish Android applications, making you an Android developer as well.

Subscribe to my YouTube for more Tech Talks

What Are The Benefits?

You might be wondering what the benefits are of turning your PWA into an installable Android application. Some of the main ones include:

  • Increased visibility
  • Increased revenue
  • Better brand reputation

Increased Visibility

When you turn your progressive web application into an Android application and upload it to the Google Play Store, you are opening yourself up to a new realm of SEO called App Store Optimization. This is the search engine that powers the Google Play Store search. Your app will now be available to all Android users (unless otherwise specified). Your app store listing is the gateway to all of these new potential users.

Increased Revenue

When you upload an Android application you can set it to be paid or free. Ad revenue (if that is your monetization model) will grow proportionally with app downloads and usage; but what about apps that don’t make any money on the web? Those could become one-time paid downloads on the Google Play Store.

Better Brand Reputation

Just by turning your PWA into an Android app, you are increasing your brand reputation! Users trust brands that are on multiple platforms and see them as more established; whether or not this is true for a given brand is debatable. It is definitely worth the $25 to get a Google developer license.

PWAs and WebAPKs

When Google added PWA support to Chrome on Android, they added a cool feature called WebAPK. When a user clicks the “Add To Homescreen” button in the mobile browser, the Android operating system actually creates a special APK on the fly, then signs and installs it. This feature is powerful and can lead to easy accessibility for web apps now and in the future. When I first heard of this, I immediately went to work converting all of my web applications into progressive web applications. Thinking this was the pinnacle, I told myself I had bought my Android developer license for nothing and I would just push installs this way; but then I found something better.

Android Trusted Web Activities

Android now has this cool new way of working with your PWA inside of your app called Trusted Web Activities. TWAs have a lot of benefits but my top two are:

  1. Content in a Trusted Web activity is trusted — the app and the site it opens are expected to come from the same developer. (This is verified using Digital Asset Links.)
  2. The content rendered in a Trusted Web Activity comes from the web: they’re rendered by the user’s browser, in exactly the same way as a user would see it in their browser except they are run fullscreen. Web content should be accessible and useful in the browser first.

Let’s say you already built your PWA and you need an Android application, but you need to do some extra things that are beyond the current scope of web APIs, while everything else is in the PWA. Using Trusted Web Activities you can interact with your application and the native APIs provided by Android. The PWA has to come from the same developer who is creating the Android application, and this is verified using Digital Asset Links. This is a file that proves you are the owner; once you upload it to your server, Google will verify it. This ensures security and that you aren’t ripping off someone else’s PWA for your own profit. Also, by uploading an Android app that is based on your PWA, you don’t have to worry about updates (as much). Once you update the PWA, your application will reflect those updates, thus reducing code time.
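
For reference, the Digital Asset Links file is a small JSON document served from /.well-known/assetlinks.json on your domain; the package name and certificate fingerprint below are placeholders for your own values:

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.yourpwa",
    "sha256_cert_fingerprints": ["AA:BB:CC:DD:..."]
  }
}]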

But wait….doesn’t that require me to first write an Android app that calls the TWA?

Yes, however: AUTOMATION BABY! There are tools I use that generate the source code and the APK, so all I have to do is upload it to the Google Play Console as a new app. I never touch any Java/Kotlin code. I created a course that shows you how to take ANY non-PWA web application and turn it into a PWA and APK in under 30 minutes! Expand your visibility and earn more income by diversifying your platforms!


Create An Online Radio & Podcast Streamer Using Vue and Media Session API

The Media Session API

I love listening to podcasts and online radio. I used to run an online station a few years ago called 90K Radio, and I have been hooked on the community ever since. Keeping on track with my PWA binge, I thought it would be cool to write a progressive web app that can take in any stream URL and play it. Not only that, but I want to be able to control the audio using the native audio controls on Android, iOS, and desktop. There is this awesome Javascript API called the Media Session API, which allows you to customize media notifications and how they are handled. It allows you to control your media without having to be on that specific webpage. This allows for things such as background playing (a must-have feature for an online radio PWA). You can even set album artwork and other metadata, and have custom handlers for events such as skipping tracks and pausing/playing.
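
Here is the core of the Media Session API in isolation, a minimal sketch using a placeholder stream URL and artwork path:

const audio = new Audio('https://example.com/stream.mp3')

if ('mediaSession' in navigator) {
  navigator.mediaSession.metadata = new MediaMetadata({
    title: 'My Station',
    artist: 'Live Stream',
    artwork: [{ src: '/logo-512.png', sizes: '512x512', type: 'image/png' }]
  })
  // Pass function references so the OS media controls drive playback.
  navigator.mediaSession.setActionHandler('play', () => audio.play())
  navigator.mediaSession.setActionHandler('pause', () => audio.pause())
}

audio.play()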

What Are The Benefits?

Primarily, I built this because a PWA will load faster. Also, no tracking: I don’t have to worry about Google or anyone else tracking my activity; I can just listen in peace. By using the Media Session API I can listen in the background while doing other things, which will most likely be every time I use the app. Lastly, it’s just an awesome feeling to use your own software 😅.

The Vue Application

I created the Vue application using the standard Vue CLI and added Vuetify so it has some basic responsive styling. The app has one component called Radio.vue, which holds all of the logic. The application has some preset radio stations I can click, as well as a text field where I can put in any stream URL of my choosing to play. It also grabs the RSS feeds for a few of my favorite podcasts so I can listen quickly. Everything is done client side, including the RSS XML parsing! You can view the live version here and clone the repo here.

Let’s Get Coding

As I stated above, I created a new Vue application using the vue-cli and added Vuetify using the vue add vuetify command. For brevity I will skip that part and only talk about the Radio.vue component, which holds all of the logic. This component turns the preset stations into buttons, grabs the favorited podcast RSS feeds, parses the XML, and plays the selected episode. There is a text input where the user can manually enter an audio stream URL. Finally, I set the Media Session metadata to show the cover art and info of whatever is playing, and if I don’t have it, a default image, artist, and album.

<template>
  <v-container>
    <v-row class="text-center">
      <v-col cols="12">
        <v-img
          :src="require('../assets/logo.png')"
          class="my-3"
          contain
          height="200"
        />
      </v-col>

      <v-col class="mb-4">
        <h1 class="display-2 font-weight-bold mb-3">
          Welcome to PWA Radio
        </h1>

        <p class="subheading font-weight-regular text-center">
          <v-text-field type="url" placeholder="Enter stream URL" v-model="url" label="Stream URL" />
        </p>
        <v-row class="text-center">
          <v-btn v-on:click="playAudio" v-if="!isPlaying">Play</v-btn>
          <v-btn v-on:click="stopAudio" v-else color="red">Stop</v-btn>
        </v-row>
      </v-col>
    </v-row>
    <v-row class="text-center">
      <v-btn class="pa-md-4 mx-lg-auto" v-for="x in presets" v-on:click="setAudio(x)" :key="x.name" :color="x.color"> {{x.name}} </v-btn>
    </v-row>
    <v-row class="text-center">
      <v-select
          v-model="currentPodcast"
          :hint="`${currentPodcast.name}, ${currentPodcast.author}`"
          :items="favoritePodcasts"
          item-text="name"
          item-value="url"
          label="Favorite Podcasts"
          persistent-hint
          return-object
          single-line
          @change="playPodcast"
        ></v-select>
    </v-row>
  </v-container>
</template>

<script>
  export default {
    name: 'Radio',

    data: () => ({
      isPlaying: false,
      audio: {},
      url : '',
      currentPodcast: {},
      selectedEpisode: {},
      presets : [
        {
          name: 'WEKU-NPR',
          url : 'https://playerservices.streamtheworld.com/api/livestream-redirect/WEKUFM.mp3',
          color: "green",
          author: 'NPR'
        },
        {
          name: 'WEKU-Classical',
          url: 'https://playerservices.streamtheworld.com/api/livestream-redirect/WEKUHD2.mp3',
          color: 'orange',
          author: 'NPR'
        },
        {
          name: 'Vocalo Radio',
          url: 'https://stream.wbez.org/vocalo128',
          color: 'blue',
          author: 'NPR'
        },
        {
          name: 'WFPK',
          url: 'https://lpm.streamguys1.com/wfpk-popup',
          color: 'yellow',
          author: 'NPR'
        },
        {
          name: 'KEXP',
          url: 'https://kexp-mp3-128.streamguys1.com/kexp128.mp3?listenerid=8044407b7410ad01f8210fd508279708&awparams=companionAds%3Atrue',
          color: '#cb349a',
          author: 'NPR'
        }
      ],
      favoritePodcasts: [],

      podcastURLS: [
        { url: 'https://anchor.fm/s/fdc3ac0/podcast/rss', name: 'Code Life' },
        { url: 'https://anchor.fm/s/42d5fca4/podcast/rss' , name: 'Intimate Spaces' },
        { url: 'https://feeds.npr.org/510289/podcast.xml', name: 'Project Money'}

      ]
    }),
    methods: {
      setMediaControls: function () {
        if ('mediaSession' in navigator) {
          navigator.mediaSession.metadata = new window.MediaMetadata({
            title: 'Pocket Radio',
            artist: 'J Computer Solutions LLC',
            album: 'Pocket Radio',
            artwork: [
              { src: 'https://radio.jcompsolu.com/images/logo-96.png',   sizes: '96x96',   type: 'image/png' },
              { src: 'https://radio.jcompsolu.com/images/logo-128.png', sizes: '128x128', type: 'image/png' },
              { src: 'https://radio.jcompsolu.com/images/logo-192.png', sizes: '192x192', type: 'image/png' },
              { src: 'https://radio.jcompsolu.com/images/logo-256.png', sizes: '256x256', type: 'image/png' },
              { src: 'https://radio.jcompsolu.com/images/logo-384.png', sizes: '384x384', type: 'image/png' },
              { src: 'https://radio.jcompsolu.com/images/logo-512.png', sizes: '512x512', type: 'image/png' },
            ]
          });

          // Pass function references; invoking them here would run them immediately.
          navigator.mediaSession.setActionHandler('play', this.playAudio);
          navigator.mediaSession.setActionHandler('pause', this.pauseAudio);
          navigator.mediaSession.setActionHandler('stop', this.stopAudio);
          navigator.mediaSession.setActionHandler('seekbackward', function() { /* Code excerpted. */ });
          navigator.mediaSession.setActionHandler('seekforward', function() { /* Code excerpted. */ });
          navigator.mediaSession.setActionHandler('seekto', function() { /* Code excerpted. */ });
          navigator.mediaSession.setActionHandler('previoustrack', function() { /* Code excerpted. */ });
          navigator.mediaSession.setActionHandler('nexttrack', function() { /* Code excerpted. */ });
        }
      },
      playPodcast: function () {
        this.setAudio(this.currentPodcast)
        this.playAudio()
      },
      playAudio: function () {
        if(this.isPlaying){
          this.isPlaying = false
          this.audio.pause()
          this.audio = {}
        }
        this.audio = new Audio(this.url)
        this.isPlaying = true
        this.audio.play()
          .then(()=> {
        }).catch(error => { console.log(error) });
      },
      pauseAudio: function () {
        this.audio.pause()
        this.isPlaying = false
      },
      stopAudio: function () {
        this.audio.pause()
        this.audio = {}
        this.isPlaying = false
      },
      setAudio: function(preset) {
        this.url = preset.url
        navigator.mediaSession.metadata.title = preset.name
        navigator.mediaSession.metadata.artist = preset.author
        if(preset.image) {
          navigator.mediaSession.metadata.artwork = [
            { src: preset.image }
          ]
        }
        this.playAudio()
      }
    },
    mounted () {
      this.setMediaControls()
    },
    created () {
      this.podcastURLS.forEach(pod => {
        fetch(pod.url)
        .then(response => response.text())
        .then(str => new window.DOMParser().parseFromString(str, "text/xml"))
        .then(data => {
          const items = data.querySelectorAll("item");
          for (let i = 0; i < items.length; i++) {
            let item = items[i];
            console.log(item)
            let image = item.getElementsByTagName("itunes:image")[0].getAttribute("href")
            let title = item.querySelector("title").innerHTML.replace("<![CDATA[", "").replace("]]>", "")
            let author = item.getElementsByTagName("dc:creator")[0].innerHTML.replace("<![CDATA[", "").replace("]]>", "")
            let url = item.querySelector("enclosure").getAttribute("url")
            let podcast = { name: title, url: url, image: image, author: author }
            this.favoritePodcasts.push(podcast)
          }
        })
      })
    }
  }
</script>

Conclusion

This was a fun and easy PWA to make, and I will turn it into an Android application to put on the Google Play Store (learn how with my PWA to APK course). Some features I will add include:

  • Save favorites locally using IndexedDB
  • Create a queue that can be skipped through
  • Download podcast episodes
  • Have everything play via the Web Audio API and add visualizations

Building A Twitter Discord Bot In Node.JS

UPDATES FOR V2 TWITTER API!!!!

Discord Is Awesome, So Is Twitter

I have a Discord server and a pretty decent-sized Twitter following. I wanted to add a bot that followed my tweets and sent them as messages in the #twitter channel on my server. I refuse to pay for software and I have a hyper distrust of others’ software, so I wrote my own. This Node.js script has one file, main.js, that uses 3 dependencies: twit (for the Twitter streaming API), Discord.js (for, well… Discord), and dotenv (to load environment variables). The script is ~30 lines long and is available on my Github.

Create a Discord Bot using the Twitter Streaming API

Main.js

require('dotenv').config()
const Twit = require('twit')
const Discord = require('discord.js');
const client = new Discord.Client();
var T = new Twit({
  consumer_key:         process.env.TWITTER_CONSUMER_KEY,
  consumer_secret:      process.env.TWITTER_CONSUMER_SECRET,
  access_token:         process.env.TWITTER_ACCESS_TOKEN,
  access_token_secret:  process.env.TWITTER_ACCESS_TOKEN_SECRET,
  timeout_ms:           60*1000,  // optional HTTP request timeout to apply to all requests.
  strictSSL:            true,     // optional - requires SSL certificates to be valid.
})
client.login(process.env.DISCORD_TOKEN);
client.once('ready', () => {
  var stream = T.stream('statuses/filter', { follow: [process.env.TWITTER_USER_ID] })
  stream.on('tweet', function (tweet) {
    //...
    var url = "https://twitter.com/" + tweet.user.screen_name + "/status/" + tweet.id_str;
    try {
        let channel = client.channels.fetch(process.env.DISCORD_CHANNEL_ID).then(channel => {
          channel.send(url)
        }).catch(err => {
          console.log(err)
        })
    } catch (error) {
            console.error(error);
    }
  })
})

And the .env file with the variables to load:

TWITTER_CONSUMER_KEY=
TWITTER_CONSUMER_SECRET=
TWITTER_ACCESS_TOKEN=
TWITTER_ACCESS_TOKEN_SECRET=
TWITTER_USER_ID=
DISCORD_TOKEN=
DISCORD_CHANNEL_ID=

Now all you have to do is run it using the following command

node main.js

Don’t forget to get your application keys from the Discord developer portal and the Twitter developer portal. I made a Youtube video showing the process in detail. Want to start making money and living a #CodeLife? Join the group today!