30 days of Javascript – Part 4 of 6


This is part 4 of 6 of my Javascript 30 journey, a free course by Wes Bos that accomplishes two things: 1) it pushes you to code every day, and 2) it gives you bite-size challenges of vanilla Javascript to practice. The best way to learn to code is to do it over and over again, every day, even if you make mistakes.

If you missed part 1, 2 or 3 of my review, check them out here:

One thing that separates this course from others is the “real-world” factor. It’s not just pointless fake array exercises or meaningless for-loops and functions. Applying Javascript directly to DOM elements and CSS is exactly the kind of everyday scenario a developer runs into on the job. Wes mentioned the ideas behind these lessons came from very real issues he struggled with while working for his clients. This is stuff I see every day at work. So thankful for that aspect of this course.

In fact, I recently applied a Javascript30 solution to a problem at work. In my case, the problem in our software blocked many of our customers from receiving payment from their stores. I remembered coding the same issue in the Javascript30 lesson and was able to resolve it and implement a solution in a matter of hours, not days. Huge win! Never has a course had such a direct impact on my work.

Alright, let’s review some more code!


16 – Mouse Move Shadow

This lesson walks us through how to animate a text shadow as we move our mouse around the screen. It seems like a silly front-end exercise, but there are some great principles at work here.

First, in order to do this, we target the .hero and its child h1.

const hero = document.querySelector('.hero');
const text = hero.querySelector('h1');

Pretty straightforward. Or is it?

Hidden problems

Wes brings up an interesting issue. When logging the mousemove event on the X and Y coordinates, we assign them like this:

  let {offsetX: x, offsetY: y} = e;
  // notice the object destructuring here -- another ES2015 freebie

Here we see the x and y coordinates reset depending on whether we’re hovering over the .hero div or the h1. The top-left corner of the screen is obviously 0, 0. But so is the top-left corner of the h1. Hmmm.

Huge time-saver

This is something I never knew existed. I am thankful that Wes covered this. If I ever encountered this myself, I could see wasting a ton of time going in circles on this hidden issue. Walking into my next project armed with this info just increases my real-world productivity. I hope you find this helpful as well.

So anyway, we correct the issue by checking whether this (the hero) is not the event target (the h1), and if it isn’t, adding the h1’s offsets back in:

  if(this !== e.target){
      x = x + e.target.offsetLeft;
      y = y + e.target.offsetTop;
  }

It took me several passes on this one to wrap my head around the difference between this and e.target. Of course, console.log() helps, but initially that’s not what I expected.

Math time!

Now that the above issue is corrected, we proceed by assigning a text-shadow value based on where the mouse is in the window. Now, I enjoy solving math problems, but I am not the best at coming up with formulas like this on my own, so the math here also needed a second or third pass to fully understand.

First, we set a “walk,” or a min/max range. In this case, it is 100. This means the shadow can stretch at most 50px in one direction and -50px in the other, spanning 100 pixels (the walk) in total.

   const walk = 100;

Then we use the walk in conjunction with our x and y coordinates, rounding to the nearest whole number:

const xWalk = Math.round((x / width * walk) - (walk / 2));  // width and height here come from the hero's offsetWidth / offsetHeight
const yWalk = Math.round((y / height * walk) - (walk / 2));
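
To make the formula concrete, here is a quick worked example (the 500px width is just an illustrative number, not from the lesson):

  // assume the hero is 500px wide and the walk is 100
  // mouse at the left edge:  x = 0   ->  Math.round((0 / 500 * 100) - 50)   = -50
  // mouse dead center:       x = 250 ->  Math.round((250 / 500 * 100) - 50) =   0
  // mouse at the right edge: x = 500 ->  Math.round((500 / 500 * 100) - 50) =  50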

Then we apply the text-shadow with the previous math formula:

text.style.textShadow = `
   ${xWalk}px ${yWalk}px 0 rgba(255,0,255,0.7)`;

Then we go crazy with multiple text shadows:

text.style.textShadow = `
    ${xWalk}px ${yWalk}px 0 rgba(255,0,255,0.7),
    ${xWalk * -1}px ${yWalk}px 0 rgba(0,255,255,0.7),
    ${yWalk}px ${xWalk * -1}px 0 rgba(0,255,0,0.7),
    ${yWalk * -1}px ${yWalk}px 0 rgba(0,0,255,0.7)
    `;
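
Putting the pieces together, the whole mousemove handler looks roughly like this. This is just a sketch stitched from the snippets above (with width and height read off the hero), not necessarily Wes’s exact finished file:

  const { offsetWidth: width, offsetHeight: height } = hero;

  function shadow(e) {
    let { offsetX: x, offsetY: y } = e;

    // offsets are relative to whichever element fired the event,
    // so re-base them when the cursor is over the h1
    if (this !== e.target) {
      x = x + e.target.offsetLeft;
      y = y + e.target.offsetTop;
    }

    const xWalk = Math.round((x / width * walk) - (walk / 2));
    const yWalk = Math.round((y / height * walk) - (walk / 2));

    text.style.textShadow = `${xWalk}px ${yWalk}px 0 rgba(255,0,255,0.7)`;
  }

  hero.addEventListener('mousemove', shadow);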

Fun stuff!

See the full code here

See the Pen Mouse Move Shadow by Stephanie Denny (@stephanie_denny) on CodePen.


17 – Sort without Articles

More array practice! So the goal here is to take this list of bands and sort them alphabetically, ignoring any leading “A”, “The” or “An” in the name.

const bands = ['The Plot in You', 'The Devil Wears Prada', 'Pierce the Veil', 'Norma Jean', 'The Bled', 'Say Anything', 'The Midway State', 'We Came as Romans', 'Counterparts', 'Oh, Sleeper', 'A Skylit Drive', 'Anywhere But Here', 'An Old Dog'];

Then Wes gave us the option to code this ourselves before he showed us how it was done. I gave it a good effort and came really close. My solution ended up printing the first band name multiple times, but I was using a for loop when a .map() would have done the trick.

Again, I love the ternary operator and shortening the sortedBands assignment to a one-line ES2015 arrow function with an implicit return. Nice and clean.

Before:

  const sortedBands = bands.sort(function(a, b){
    if(strip(a) > strip(b)){
      return 1;
    } else {
      return -1;
    }
  });

After:

const sortedBands = bands.sort((a, b) => strip(a) > strip(b) ? 1 : -1 ) //squeaky clean!
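
For reference, the strip() helper both versions lean on just chops the leading article off the band name before comparing; it looks something like this:

  function strip(bandName) {
    return bandName.replace(/^(a |the |an )/i, '').trim();
  }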

Finally, we assign the new band list to list items in the unordered list element in the document:

  
document.querySelector('#bands').innerHTML =
  sortedBands
    .map(band => `<li>${band}</li>`)
    .join('');

See the working example

See the Pen Sort Without Articles by Stephanie Denny (@stephanie_denny) on CodePen.


18 – Adding Up Times with Reduce

I found this to be an interesting exercise: taking time stamps stored in data attributes, splitting them into minutes and seconds, converting those strings into numbers, and adding everything together to produce one total running time for all the videos.

First, we start out grabbing all the data-time attributes:

  const timeNodes = Array.from(document.querySelectorAll('[data-time]'));

Now we capture each time stamp, split it on the colon, convert the strings into numbers using .map(parseFloat), and reduce everything down to a total number of seconds:

  const seconds = timeNodes
    .map(node => node.dataset.time)
    .map(timeCode => {
      const [mins, secs] = timeCode.split(':')
        .map(parseFloat);
      return (mins * 60) + secs;
    })
    .reduce((total, vidSeconds) => total + vidSeconds);

Finally, we break the total seconds down into hours and minutes with Math.floor(), using the modulus operator to carry the remainder forward each time:

  let secondsLeft = seconds;
  const hours = Math.floor(secondsLeft / 3600);
  secondsLeft = secondsLeft % 3600;

  const mins = Math.floor(secondsLeft / 60);
  secondsLeft = secondsLeft % 60;
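
A quick worked example makes the modulus dance clearer. Say the videos add up to 17,938 seconds (an arbitrary number for illustration):

  // hours       = Math.floor(17938 / 3600)  ->  4
  // secondsLeft = 17938 % 3600              ->  3538
  // mins        = Math.floor(3538 / 60)     ->  58
  // secondsLeft = 3538 % 60                 ->  58
  // total running time: 4 : 58 : 58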

As extra credit, I took the total time and appended it to the top of the page for the user to see.

  document.querySelector('#timeTotal').innerHTML = `${hours} : ${mins} : ${secondsLeft}`;

Then I added a small tag inside each li with a class of .video-time:

  let singleVideoTime = document.querySelectorAll('.video-time');

Using forEach(), we read each small tag’s parent element dataset and drop the time stamp inside the small tag.

  singleVideoTime.forEach(node => {
    node.innerHTML = node.parentElement.dataset.time
  });

The time stamp was left as a string, although if I needed to convert it to a whole number and break it up into minutes/seconds, I could just follow the pattern already set here.
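
If I ever do need that, the conversion inside the forEach would look something like this (variable names here are mine, not from the lesson):

  const [mins, secs] = node.parentElement.dataset.time.split(':').map(parseFloat);
  const videoSeconds = (mins * 60) + secs;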

I also couldn’t stand to leave it unstyled, so I quickly threw in some Bootstrap and styled the ul and page titles.

My pretty example

See the Pen Adding up times with Reduce by Stephanie Denny (@stephanie_denny) on CodePen.


19 – Webcam Fun

Holy schnikes!

There is a lot going on here. I don’t have the space in this post to cover every detail Wes went over in this lesson, so I’ll briefly explain what each code block does, but I recommend you watch the course yourself and take in all the JS magic.

First, we grab our elements:

  const video = document.querySelector('.player');
  const canvas = document.querySelector('.photo');
  const ctx = canvas.getContext('2d');
  const strip = document.querySelector('.strip');
  const snap = document.querySelector('.snap');

Then we enable our webcam using promises:

  function getVideo() {
    navigator.mediaDevices.getUserMedia({ video: true, audio: false })
      .then(localMediaStream => {
        video.srcObject = localMediaStream; // modern browsers block createObjectURL(stream), so we hand the stream straight to the video element
        video.play();
      })
      .catch(err => {
        console.error(`OH NO!!!`, err);
      });
  }
  getVideo();

Next, we pipe in the video, repainting it to the HTML5 canvas element every 16 milliseconds (roughly 60 frames per second) like so (notice the unique event listener at the bottom):

  function paintToCanvas() {
    const width = video.videoWidth;
    const height = video.videoHeight;
    canvas.width = width;
    canvas.height = height;

    return setInterval(() => {
      ctx.drawImage(video, 0, 0, width, height);
      // take the pixels out
      let pixels = ctx.getImageData(0, 0, width, height);
      // mess with them
      pixels = rgbSplit(pixels)
      // put them back
      ctx.putImageData(pixels, 0, 0);
    }, 16);
  }

  video.addEventListener('canplay', paintToCanvas); // wait for the "canplay" event before automatically piping the video to the canvas

Then we capture the canvas contents as a base64 data URL, assign it to an image tag, and insert it at the top of the photo strip. This function is called directly from the button’s onclick() in the HTML:

  function takePhoto() {
    // played the sound - say cheese!
    snap.currentTime = 0;
    snap.play();

    // take the data out of the canvas
    const data = canvas.toDataURL('image/jpeg'); // you can use a png here too
    const link = document.createElement('a');
    link.href = data;
    link.setAttribute('download', 'handsome'); // or whatever attribute fits here 😉
    link.innerHTML = `<img src="${data}" alt="Handsome/Beautiful Person" />`;
    strip.insertBefore(link, strip.firstChild);
  }

The rest is setting up the green screen and RGB split filters, driven by the sliders. We use for loops to iterate over the raw pixel data and adjust the red (index 0), green (1), blue (2), and alpha (3) values based on where they fall in the array and the min/max slider values.
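
To make that concrete, here is a minimal sketch of an rgbSplit-style filter: it walks the flat pixel array four values at a time and writes each channel back at a shifted position, which smears the colors apart. The offsets are arbitrary illustrative values, not necessarily the ones Wes uses:

  function rgbSplit(pixels) {
    for (let i = 0; i < pixels.data.length; i += 4) {
      pixels.data[i - 150] = pixels.data[i + 0]; // shift the red channel
      pixels.data[i + 500] = pixels.data[i + 1]; // shift the green channel
      pixels.data[i - 550] = pixels.data[i + 2]; // shift the blue channel
    }
    return pixels;
  }

Out-of-range writes on the underlying typed array are silently ignored, so the shifted values near the edges simply get clipped.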

WOW! The possibilities here are endless. The application here could be as basic as a fun photobooth app or used to assign a profile photo to an account. You could go deeper into facial recognition as an extra security layer. Lots of real-world relevance here.

See it in action!

You will have to enable your webcam here. Secure link below:

Launch Webcam Fun Code Example


20 – Speech Detection

Lesson 20 was fascinating! I wasn’t aware speech recognition existed in the browser. I’m sure many of us are already using some form of speech recognition, such as through our smartphones or IoT home devices. But speech recognition in the browser opens up some interesting and potentially hilarious possibilities. Of course, Wes couldn’t resist converting the transcribed words into 💩 emojis.

First, we cover all our bases and make sure we pick up the browser’s speech recognition implementation, prefixed or not:

  window.SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;

  const recognition = new SpeechRecognition();
  recognition.interimResults = true;

  recognition.start();

Make sure you give the browser permission to use your microphone. Now, it’s time to start building our paragraph tags to hold our speech!

  let p = document.createElement('p');
  const words = document.querySelector('.words');
  words.appendChild(p);

The magic happens in the event listener. With speech recognition, the browser fires a “result” event, and when that happens we create an array from the results and join them into a single transcript string like this:

  recognition.addEventListener('result', e => {
    const transcript = Array.from(e.results)
      .map(result => result[0])
      .map(result => result.transcript)
      .join('');
    p.textContent = transcript; // drop the transcribed text into the current paragraph
  });
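
And the 💩 bit Wes mentions is just a string replace run on that same transcript inside the “result” listener, swapped in for the plain textContent assignment; roughly this, though the exact word list is a guess on my part:

  const poopScript = transcript.replace(/poop|poo|turd/gi, '💩');
  p.textContent = poopScript;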

But there are two problems. One, if we stop speaking and start again, the speech recognition function doesn’t fire again. So Wes pointed out that we can add a second event listener to check for the “end” event. From there we can tell the browser to listen again once the user completes or pauses speaking.

  recognition.addEventListener('end', recognition.start);

Ok, now that that is covered, we encounter another challenge. When the user starts speaking again, the browser overwrites the previous text. Gah! In this exercise, we want the text fully transcribed. So by inspecting the event.results object, nested wayyy down in there is an isFinal property. We check that condition, and if it’s true, we create a new paragraph tag and append it to the .words div (this lives inside the “result” event listener function, btw).

  if (e.results[0].isFinal) {
    p = document.createElement('p');
    words.appendChild(p);
  }

Pretty neat stuff. And all straight up vanilla Javascript, too.

Launch Speech Detection example

More to come!

I cannot believe how much we covered in just 20 lessons. Mind blowing stuff. I am super stoked about the next 10, wishing it could be more.

Stay tuned for Part 5 of the Javascript 30 challenge. Or head over to Wes Bos’ course and give it a try for yourself.
