IndyWatch Education Feed Archiver

IndyWatch Education Feed Today.

IndyWatch Education Feed was generated at Community Resources IndyWatch.

Saturday, 15 December

01:26

Want amazing free coding tutorials? Subscribe to these YouTube Channels. freeCodeCamp.org - Medium

Want excellent free coding tutorials? Subscribe to these YouTube channels.

There are so many great FREE software tutorials and courses on YouTube!

I run the freeCodeCamp ad-free YouTube channel. We have full video courses and tutorials on many popular programming languages and frameworks (including JavaScript, Python, Java, Ruby, C, C++, Angular, and more).

I've also come across many other YouTube channels that provide amazing free programming tutorials. The free tutorials on these channels are as good as or better than ones you would pay for.

In this article I list ten YouTube channels you should subscribe to if you want to improve your coding skills. These are in no particular order.

It can be very helpful to watch tutorials on the same topic from different creators. Learning from multiple perspectives can help you understand the concepts in a deeper way.

There are many great channels that I did not have room for on this list. Let people know in the comment section about other channels they should check out for free programming tutorials.

Coding Train

The Coding Train

It takes a lot of skill to record high-quality tutorials live with no editing. But that is exactly what Daniel Shiffman of Coding Train does. He teaches complicated topics in a fun way that is easy for beginners to understand. Once you see the channel's introduction video, you will know why you need to subscribe immediately.

Traversy Media

Traversy Media

Brad Traversy's passion for excellence really shows in his videos. His no-fluff style is friendly and down-to-earth. He seems to understand exactly what self-taught programmers need to know. His channel features tutorials on a wide variety of web development frameworks and languages.

Derek Banas

Derek Banas

Derek Banas is truly the jack-of-all-trades programmer. He has professional-level tutorials on almost all popular (and some less popular) programming languages. He currently has more subscribers than anyone else on this list, and it is completely deserved. His channel is a good fi...

Friday, 14 December

21:44

An introduction to high-dimensional hyper-parameter tuning freeCodeCamp.org - Medium

Best practices for optimizing ML models

If you ever struggled with tuning Machine Learning (ML) models, you are reading the right piece.

Hyper-parameter tuning refers to the problem of finding an optimal set of parameter values for a learning algorithm.

Usually, the process of choosing these values is a time-consuming task.

Even for simple algorithms like Linear Regression, finding the best set for the hyper-parameters can be tough. With Deep Learning, things get even worse.

Some of the parameters to tune when optimizing neural nets (NNs) include:

  • learning rate
  • momentum
  • regularization
  • dropout probability
  • batch normalization

In this short piece, we talk about best practices for optimizing ML models. These practices come in handy mainly when the number of parameters to tune exceeds two or three.

The problem with Grid Search

Grid Search is usually a good choice when we have a small number of parameters to optimize. For two or even three different parameters, it might be the way to go.

For each hyper-parameter, we define a set of candidate values to explore.

Then, the idea is to exhaustively try every possible combination of the values of the individual parameters.

For each combination, we train and evaluate a different model.

In the end, we keep the one with the smallest generalization error.

https://medium.com/media/d23d26fc0ab9fd97911c98b077b2a6bc/href
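The embedded code block did not survive the archive, so here is a minimal sketch of the kind of setup the following paragraphs describe: four hyper-parameters with four candidate values each, giving 4^4 = 256 combinations. The parameter names and values are illustrative assumptions, except learning_rate_search, whose values the article quotes further down.

```python
import itertools

# Candidate values for each hyper-parameter (four values each).
# Only learning_rate_search comes from the article; the rest are
# illustrative assumptions.
learning_rate_search = [0.1, 0.01, 0.001, 0.0001]
regularization_search = [0.0, 0.001, 0.01, 0.1]
momentum_search = [0.0, 0.5, 0.9, 0.99]
dropout_search = [0.2, 0.3, 0.4, 0.5]

# Grid Search exhaustively enumerates every combination:
# 4 * 4 * 4 * 4 = 256 models to train and evaluate.
combinations = list(itertools.product(
    learning_rate_search,
    regularization_search,
    momentum_search,
    dropout_search,
))

print(len(combinations))  # 256
```

Adding one more hyper-parameter with four candidate values would multiply this by four again, to 1024 runs, which is the exponential growth discussed below.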

The main problem with Grid Search is that it is an exponential time algorithm. Its cost grows exponentially with the number of parameters.

In other words, if we need to optimize p parameters and each one takes at most v values, it runs in O(v^p) time.

Also, Grid Search is not as effective in exploring the hyper-parameter space as we may think.

Take a look at the code above again. Using this setup, we are going to train a total of 256 different models. Note that if we decide to add one more parameter, the number of experiments would increase to 1024.

However, this setup only explores four different values for each hyper-parameter. That is, we train 256 models only to explore four values each of the learning rate, regularization, and so on.

Besides, Grid Search usually requires repetitive trials. Take the learning_rate_search values from the code above as an example.

learning_rate_search = [0.1, 0.01, 0.001, 0.0001]

Suppose that after our first run (256 model trials), we get the best model with a learning rate value of 0.01.

In this situation,...

17:19

How to make complex problems easier by decomposing and composing freeCodeCamp.org - Medium

Photo by rawpixel on Unsplash

Our natural way of dealing with complexity is to break it into smaller pieces and then put everything back together.

This is a two step process:

  • decompose the problem into smaller parts
  • compose the small parts to solve the problem

We decompose into smaller parts because they are easier to understand and implement. The smaller parts can also be developed in parallel.

The process of decomposition is about assigning responsibilities and giving names. This makes it easy to talk and reason about. Once we identify a responsibility, we can reuse it.

Composition is about combining the small parts together and establishing a relationship between them. We decide the way these pieces communicate, the order in which they execute, and how data flows between them.

Even a system split into smaller parts is hard to understand if there are many relations between those parts. To make a system easier to understand, we need to minimize the number of possible connections between its parts.

Object decomposition

Objects are more than state and behavior working together. Objects are things with responsibilities.

Decompose

In How to create a three layer application with React, I take a to-do list application and split the responsibilities between the following objects:

  • TodoDataService: responsible for communication with the server's Todo API
  • UserDataService: responsible for communication with the server's User API
  • TodoStore: the domain store for managing to-dos. It is the single source of truth regarding to-dos.
  • UserStore: the domain store for managing users.
  • TodoListContainer: the root container component displaying the list of to-dos.

As you can see, when decomposing, I assign responsibilities and give names.

Compose

Next, I compose them together in a single function. This is the place where all objects are created and dependencies injected. It is called Composition Root.

import React from "react";
import ReactDOM from 'react-dom';
import TodoDataService from "./dataaccess/TodoDataService";
import UserDataService from "./dataaccess/UserDataService";
import TodoStore from "./stores/TodoStore";
import UserStore from "./stores/UserStore";
import TodoContainer from "./components/TodoContainer.jsx";

(function startApplication(){
let userDataService = User...

02:13

Picture this: the best image format for the web in 2019 freeCodeCamp.org - Medium

JPEG, WEBP, HEIC, AVIF? This guide will help you choose.

After decades of the unrivalled dominance of JPEG, recent years have witnessed the appearance of new formats, WebP and HEIC, that challenge this position. They have only partial, but significant, support from major players among web browsers and mobile operating systems. Another new image format, AVIF, is expected to enter the scene in 2019 with the promise of sweeping through the whole web.

In this article, we'll start with a short review of the classic formats, followed by a description of WebP and HEIC/HEIF. We'll then move on to explore AVIF, and end with a summary putting all the main points together.

Comments on advantages and drawbacks draw both on a review of available authoritative reports and on first-hand observations made during the development and deployment of tools for image optimization pipelines in ecommerce workflows.

Classic image formats for the web with universal support

Let's remind ourselves, in chronological order, of the three most important classic formats for web images.

GIF

GIF supports LZW lossless compression and multiple frames that allow us to produce simple animations.

The major limitation of this format is that it is constrained to 256 colours. This was reasonable back when it was created in the late 80s, since the same limitation applied to existing displays. However, with the improvement of display technology, it became apparent that the format is unsuitable for reproducing smooth colour gradients, like those found in photographic images. We can easily spot the colour banding it produces.

However, GIF allows lightweight animation with universal support. This feature has kept the format alive until today in use cases not sensitive to quality issues, the most typical being small animated images with few or no colors.

JPEG

The king of the image formats for web was developed to support digital photography workflows.

With a usual 24 bit depth, it provides far more color resolution (not to be confused with range or gamut) than the human eye can discern. It supports lossy compression by exploiting known mechanisms of human vision.

Our eyes are more sensitive to medium scales than to fine details. Consequently, JPEG allows us to discard fine details (high spatial frequencies), by an amount controlled by a quality factor. Less quality means less detail is preserved. Besides, we are much more sensitive to details with high luminance contrast than details with only chromatic contrast.

So, JPEG internally recodes RGB (Red, Green, and Blue) images into one luminance and two chroma channels. This allows us to use chroma subsampling to discard more detail in the chroma channels only. It's worth noting that...
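As a rough illustration of the luma/chroma split and chroma subsampling described above (a NumPy sketch using the ITU-R BT.601 conversion coefficients, not JPEG's actual codec internals):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) RGB array into one luma (Y) and two chroma
    (Cb, Cr) channels, using the ITU-R BT.601 coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def subsample_420(channel):
    """4:2:0 chroma subsampling: keep one sample per 2x2 block,
    discarding detail our eyes barely notice in the chroma channels."""
    return channel[::2, ::2]

rgb = np.random.randint(0, 256, (64, 64, 3)).astype(np.float64)
y, cb, cr = rgb_to_ycbcr(rgb)
cb_small, cr_small = subsample_420(cb), subsample_420(cr)
print(y.shape, cb_small.shape)  # (64, 64) (32, 32)
```

The luma channel keeps its full resolution while each chroma channel keeps only a quarter of its samples, which is where much of JPEG's size saving comes from.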

01:40

Learn to build your first bot in Telegram with Python freeCodeCamp.org - Medium

Photo by Florencia Potter on Unsplash

Imagine this: a message bot that will send you a random cute dog image whenever you want. Sounds cool, right? Let's make one!

For this tutorial, we are going to use Python 3, python-telegram-bot, and the public RandomDog API.

At the end of this tutorial, you will have a stress-relieving bot that will send you cute dog images every time you need them, yay!

Getting started

Before we start to write the program, we need to generate a token for our bot (the token is needed to access the Telegram API) and install the necessary dependencies.

1. Create a new bot in BotFather

If you want to make a bot in Telegram, you have to register your bot first before using it. When we register our bot, we will get the token to access the Telegram API.

Go to BotFather (if you open it on desktop, make sure you have the Telegram app), then create a new bot by sending the /newbot command. Follow the steps until you get the username and token for your bot. You can go to your bot by accessing this URL: https://telegram.me/YOUR_BOT_USERNAME, and your token should look like this.

704418931:AAEtcZ*************

2. Install the library

Since we are going to use a library for this tutorial, install it using this command.

pip3 install python-telegram-bot

If the library is successfully installed, then we are good to go.

Write the program

Let's make our first bot. This bot should return a dog image when we send the /bop command. To be able to do this, we can use the public API from RandomDog to help us generate random dog images.

The workflow of our bot is as simple as this:

access the API -> get the image URL -> send the image

1. Import the libraries

First, import all the libraries we need.

from telegram.ext import Updater, CommandHandler
import requests
import re

2. Access the API and get the image URL

Let's create a function to get the URL. Using the requests library, we can access the API and get the JSON data.

contents = requests.get('https://random.dog/woof.json&...
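The snippet above is cut off in the archive. Under the assumption that the API responds with a JSON object containing a url field, such a helper might be sketched like this (extract_url is an illustrative name, not from the article):

```python
def extract_url(contents):
    # Assumption: RandomDog's woof.json responds with JSON along the
    # lines of {"fileSizeBytes": ..., "url": "https://random.dog/..."}.
    return contents['url']

def get_url():
    # Network call: fetch a random dog image record from the API.
    import requests  # third-party dependency, installed earlier with pip3
    contents = requests.get('https://random.dog/woof.json').json()
    return extract_url(contents)
```

The returned URL can then be handed to the bot's send-photo step in the workflow sketched earlier.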

01:27

What is website accessibility? freeCodeCamp.org - Medium

Web accessibility doesn't have to be intimidating.

Web accessibility is getting a lot of attention these days, but it can be intimidating. Here's a simple introduction to web accessibility: what it is, why it's important, and the benefits that come along with it.

At the most basic level, web accessibility means building websites that are usable by as many people as possible.

In the US alone, 57 million people report having a disability. That's one in every five people, equivalent to the entire populations of New York and California combined. And around 30 million of those people report having a severe disability.

How can web developers make sure their sites are accessible to as many users as possible?

What makes a site inaccessible?

There are many ways that users might find a website to be inaccessible.

Some people may not be able to use a mouse. They may need to be able to scroll, click, navigate and interact with all parts of a website using only a keyboard or other device.

Others may have some form of color-blindness, and so may have difficulty discerning links and buttons from other text content.

Dyslexia can cause some people to struggle to understand the content of a site.

For people with severe visual impairments, it is necessary for all content and interactivity on a page to be understandable to a screen reader. This is a program that reads the contents of a webpage to the user and lets them interact with the page.

There are even machines that will provide braille output from webpages.

Accessibility is a Web Standard

I've barely scratched the surface of the accessibility challenges people can face on the web. It is impossible for the average web team to keep up with all the different situations that can prevent people from using and enjoying websites.

That is why the World Wide Web Consortium first drafted standards for developing accessible websites back in 1999.

This set of standards makes it easier for development teams to ensure their work is accessible to all. These standards are what you may have heard referenced as WCAG (sometimes pronounced wee-kag). It stands for Web Content Accessibility Guidelines.

These guidelines provide a detailed look at common patterns and areas that can cause usability issues in different situations. At a higher level though, they outline the four broad guidelines of web accessibility:

  • Perceivable: can all peo...

01:19

How to make your fancy SVG button accessible freeCodeCamp.org - Medium

Photo by Saketh Garuda on Unsplash

You may very well find yourself one day having to build some crazy button a designer dreamed up. You might start reaching for that good old <div>, but easy there, big shifter, let's try to use that <button> element you're avoiding.

We'll start by simply grabbing the code for an SVG icon that we want to use. I quickly made a Chemex icon you can use here (I love me some coffee). Paste that inside a <button> tag in your HTML like so (the SVG code will be pretty lengthy).

Initial <button> with SVG code inside

We want this button stripped of its default styling, so let's give the button an id and we'll target it with some CSS.

Strip the default styling of the <button> so we can make it better 

Give the button a good width/height that is larger than our SVG; this will help the visibility of the outline. Speaking of which, make sure the contrast ratio between your outline color and the background color passes this. Get rid of that pesky border and background, and make sure the cursor is set to pointer.

At this point, you have a clickable button that, when clicked, shows the default outline your browser has chosen for focus states. Let's change that and make it better.

Giving the button some focus 

Now when we click or tab to our button, we get a cool little dashed outline that lets us know where we're focused.

We also want to ensure that the SVG itself does not get an outline if clicked. And we want to make certain Firefox doesn't add its default dotted outline. While we're at it, we can give the SVG a little hover effect.

Adding our flavorful hover effect 

Now we can get to the cool parts! We don't want to annoy or confuse our screen reader users with our button, so we need a good short description of what to expect. You would also typically want visual users to have an idea of what it is they're clicking on as well; for now let's kee...

Thursday, 13 December

21:25

After the cuts what? IN DEFENCE OF YOUTH WORK

No surprises perhaps, though lots to agonise over: a new Unison report provides updated evidence on nearly ten years of decimation of local Youth Services. Two previous reports had revealed that between 2010 and 2016 budgets were reduced by a total of £387 million, and that, between 2012 and 2016, 3,652 youth work jobs had been lost, 603 centres closed, and nearly 139,900 places for young people removed.

No doubt because there is now so little left to cut, the latest numbers are much smaller. Responses from 101 local authorities indicate that Youth Service budgets fell by £4 million in 2016-17, by £6 million in 2017-18, and by a predicted £3 million in the current financial year, lifting the total loss since 2010-11 to £400 million. Additional evidence on the size and extent of the cuts came in a YMCA report in May, which put the reduction in spending on Youth Services by English and Welsh councils between 2010/11 and 2016/17 at £750 millio...

16:54

How to build an age and gender multi-task predictor with deep learning in TensorFlow freeCodeCamp.org - Medium

Source: https://www.governmentciomedia.com/ai-takes-face-recognition-new-frontiers

In my last tutorial, you learned how to combine a convolutional neural network and Long Short-Term Memory (LSTM) to create captions for a given image. In this tutorial, you'll learn how to build and train a multi-task machine learning model to predict the age and gender of a subject in an image.

Overview

  • Introduction to age and gender model
  • Building a Multi-task Tensorflow Estimator
  • Training

Prerequisites

  • basic understanding of convolutional neural networks (CNN)
  • basic understanding of TensorFlow
  • GPU (optional)

Introduction to Age and Gender Model

In 2015, researchers from the Computer Vision Lab, D-ITET, published the DEX paper and made public their IMDB-WIKI dataset, consisting of 500K+ face images with age and gender labels.

IMDB-WIKI Dataset source: https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/

DEX outlines a neural network architecture involving a VGG16 model pretrained on ImageNet that estimates the apparent age in face images. DEX placed first in ChaLearn LAP 2015, a competition that deals with recognizing people in an image, outperforming the human reference.

Age as a classification problem

A conventional way of tackling an age estimation problem with an image as input would be using a regression-based model with mean-squared error as the loss function. DEX models this problem as a classification task, using a softmax classifier with each age represented as a unique class ranging from 1 to 101, and cross-entropy as the loss function.
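A small NumPy sketch of this idea, treating the logits as illustrative random values: the softmax turns per-class scores into a probability distribution over ages, from which we can take either the most likely class or a softmax-weighted expected age (a refinement along the lines of the DEX paper).

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the age classes.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# One class per age, as described above: 101 classes for ages 1..101.
ages = np.arange(1, 102)
logits = np.random.randn(101)   # illustrative stand-in for network outputs
probs = softmax(logits)

predicted_class = int(ages[np.argmax(probs)])   # hard prediction
expected_age = float((probs * ages).sum())      # softmax-weighted expectation
```

During training, the cross-entropy loss would be computed between probs and a one-hot label for the true age class.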

Multi-task learning

Multi-task learning is a technique of training on multiple tasks through a shared architecture. Layers at the beginning of the network will learn a joint generalized representation, preventing overfitting to a specific task that may contain noise.

By training with a multi-task network, the network can be trained in parallel on both tasks. This reduces the infrastructure complexity to only one training pipeline. Additionally, the computation required for...

11:03

Highlights from Chrome Dev Summit 2018 freeCodeCamp.org - Medium

Have you heard of the Google Chrome Dev Summit? If you haven't heard of it and the awesome, cool things Chrome engineers have been working on lately, this article is for you.

I'm a front-end engineer working on an application that serves millions of users. I also use Chrome DevTools every day to debug and monitor performance. So I found it imperative to learn about the tools and technologies that will help me optimize my applications and contribute to building a better web. Debugging and optimization become easier when you are aware of the tools to take advantage of, and the metrics to look out for.

Chrome Dev Summit offered me the opportunity to hear about updates on these tools and technologies, and showed me avenues to contribute toward making these tools better. I learned a lot from Chrome engineers during the summit, and I would like you to benefit from that knowledge so we can build an awesome web experience together.

Chrome Dev Summit is an opportunity for Google Chrome engineers and leading web developers to celebrate the web platform, provide updates on their latest work, and get feedback from the community.

This year, developers from across the globe converged at the Yerba Buena Center for the Arts in San Francisco, California, for a two-day (12th and 13th November) exploration of modern web experiences. It was celebrated in style as Chrome engineers marked the 10-year anniversary of shipping Google Chrome, the most used web browser.

The event focused on what it means to build a fast, high-quality web experience using modern web technologies and best practices, as well as looking at new and exciting capabilities coming to the web platform. The major highlights are summarized below.

Performance Budgets

An increasing number of features in web applications today are being accessed using low-end devices on high-latency networks. Because of this, JavaScript becomes expensive, thereby requiring performance budgeting.

A Performance Budget is a framework that allows you to determine what changes represent progress and what changes represent regression, taking into account a set of shared metrics, with budgets for each made actionable.

However, we need to have metrics in place to measure before we can improve on them, as it is impossible to measure what we do not track. When we care about exceptional user experience irrespective of device or network conditions, building a PWA with performance in mind becomes a priority.

To build a high-quality web e...

06:46

What to keep in mind when architecting a system freeCodeCamp.org - Medium

6 Things to keep in mind when architecting a system

Architecture may sound like a scary or overwhelming subject but, actually, applying logic and approaching the problem methodically simplifies the process drastically.

When you architect a system, service, or feature, you actually design a solution to a problem in a specific context. The solution should answer a real need and solve the problem at hand.

Throughout the text, I'll be using "solution" in order to emphasize that the systems, services, and features we build are part of a bigger flow.

When designing a solution, think about the entire environment and flow you affect.

  • Think about what happens before the data reaches your code
  • What triggers your feature or service
  • Who sends it?
  • Is it something automatic?
  • Is it a user?

This will also help you think about tests and edge cases you want to address, what happens afterwards, and who would use it and how.

1. Understand the problem

Start by understanding the problem at hand and your boundaries. Don't optimize for an unknown future; focus on the current situation and, most importantly, don't make assumptions. Don't limit yourself with requirements that are not there.

Make sure you have all the information you need to understand the problem, and don't be afraid to do research. Google is your friend ;)
Photo by rawpixel on Unsplash

2. Understand your boundaries and set priorities

Solution architecture is always a trade-off between concerns such as resilience, security, data integrity, throughput, scalability, performance and, of course, cost.

Think about value vs friction

Understand your constraints. What are your must-haves? If you have a product team, work with them in order to understand the business need, impact, and SLAs. This will help you understand your considerations and limitations better.

Use data to set priorities; avoid assumptions when possible and be data-driven.

  • How many users?
  • Number of requests?
  • Size of requests?

Test your service in order to estimate the resources that are needed.

Make sure you address the maximum rate you want to support, not only the average (look at percentiles rather than the average).

Think about solving the probl...

05:01

Rage against the Machine Learning: my battle with recommendation engines freeCodeCamp.org - Medium

The endless war that I'm losing

A close up of the production facility at the Bristol Robotics Laboratory. Photo Credit: Louis Reed.

It recently came to my attention that I was waging a war across multiple fronts, and fatigue had struck: they were winning. For months I had battled, fighting their persistence with my propensity to click x. I refused to fall into their traps, perfectly crafted to nullify my thoughts and reduce my resistance further. They assaulted me on every front, shoving their weapons in front of me wherever I looked. But I'm a stubborn person, and in this war of attrition, I'll cling on to the bitter end.

Yes, it's a war. A war against recommendations and the engines that power them.

YouTube is forcing repeats of The Big Bang Theory on me after I went on a small binge. Geeky physicists are funny, but their jokes get old. Spotify is still recommending calming songs to me after I played some meditation music six months ago. Amazon is trying to force the same products down my throat despite my buying them only weeks before. Seriously, if I buy a toilet seat from you, don't keep trying to sell me toilet seats.

It's frustrating to see the same content over and over again, and it makes it difficult to find the content I want to see. Bach was a mastermind, but even his variations on a theme can get a little boring. On top of this, the algorithms play to my weaknesses, and for that, I hate these recommendation algorithms.

In a perfect world, recommendation algorithms would introduce me to new products and content that I would love. They would help to create new ideas in me. They would inspire me. The algorithms would be like my best friend, telling me their new favourite things.

White robot human features. Photo Credit: Alex Knight.

Instead, many recommendation algorithms lean towards showing the most popular items. They focus on those that will pull in the most clicks. When algorithms are optimised for more time on-site or other similarly shallow metrics, this is bound to happen. It's not a fault of the program, but of the programmer. Creators must think more about the user experience and what the user really wants.

Just because I click something, it doesn't mean I actually like the content; I was braindead at the time. Just b...

04:32

How a StackOverflow account can secure you a seat at the recognised developer table freeCodeCamp.org - Medium

The screenshot was taken from StackExchange.com

I have never met a developer who hasn't heard of StackOverflow. This is where most of us mere mortals go when we are stuck trying to solve a programming problem. Sometimes the problem is just a pure lack of documentation in the open source software we are implementing.

But from my years of experience, what I've learned is that not all developers know the value of a strong StackOverflow account.

Personal StackOverflow account (top 7% this year)

Above is my personal StackOverflow account. I have given 156 answers and in turn have reached around 2 million developers, putting me in the top 7% of all the users on StackOverflow.

This has not been an easy task; as of this writing (December 1, 2018), there are around 9.7 million users, 17 million questions, and 26 million answers.

If you have tried submitting an answer on StackOverflow, you soon realize it is not a simple task; you can't just answer random questions with half-cooked solutions. The forum works in a way where people vote for answers that are actually relevant and have helped them with the problem they are working on.

With 9.7 million users, it's quite a challenge to ensure that your answer will be of any help to anyone, really. As soon as a question is posted, dozens of developers are on the prowl to answer it in hopes of getting votes and, in turn, bolstering their respective profiles. However, this wild-west style of answering can also be counterproductive, as users have the option to downvote any answers that are of poor quality.

What are the perks and why bother?

Imagine everyone is applying to a specific company. It doesn't have to be a large and well-known company; it could easily be just an exemplary workplace nearby. Everyone wants to apply there.

Let's say, hypothetically, the company gives out stock options, is flexible with work arrangements, and its office facilities include the infamous pool table, bean bags, and free food. The typical ideal tech office!

The recruiter searches for your name and finds that you are in the top 10% of all the engineers on StackOverflow. As most recruiters today are aware of online communities like StackOverflow, who do you think will have their foot in the door? A strong online presence is icing on the cake, and more often than not it earns you an interview.

Of course, I am not saying that all the developers that have a good scoring on the online forum are of high-ca...

04:15

How to create a responsive Fixed-Data-Table with React Hooks freeCodeCamp.org - Medium

Hooks on the main board by Rphillip3418.

One of my projects uses a library called Fixed-Data-Table-2 (FDT2), and it's great for efficiently rendering tons of rows of data.

Their documentation demonstrates a responsive table that resizes based on the browser's width and height.

I thought it'd be cool to share this example using React Hooks.

What are React Hooks?

They're functions that give you React features like state and lifecycle behavior without ES6 classes.

Some benefits are:

  • isolating stateful logic, making it easier to test
  • sharing stateful logic without render props or higher-order components
  • separating your apps concerns based on logic, not lifecycle hooks
  • avoiding ES6 classes, because they're quirky, not actually classes, and trip up even experienced JavaScript developers

For more detail, see React's official Hooks intro.
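To build intuition for how a hook can "remember" state between renders without a class, here is a toy, closure-based model of useState. This is an illustration only, not React's actual internals: real React tracks hooks per component and schedules re-renders, while the render helper below is just a stand-in.

```javascript
// A deliberately tiny model of how useState can hold state across calls.
const hooks = [];
let cursor = 0;

function useState(initialValue) {
  const i = cursor++; // each hook call claims the next slot
  if (hooks[i] === undefined) hooks[i] = initialValue;
  const setState = (next) => { hooks[i] = next; };
  return [hooks[i], setState];
}

function render(component) {
  cursor = 0; // hooks must run in the same order on every render
  return component();
}

// A "component" using the toy hook
function Counter() {
  const [count, setCount] = useState(0);
  return { count, increment: () => setCount(count + 1) };
}

let ui = render(Counter);
ui.increment();
ui = render(Counter);
console.log(ui.count); // 1
```

The cursor reset in render is why real Hooks may not be called conditionally: the state slots are matched to hook calls purely by order.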

WARNING: Don't use in production!

At the time of this writing, Hooks are in alpha. Their API can change at any time.

I recommend you experiment, have fun, and use Hooks in your side projects, but not in production code until they're stable.

The goal

We'll be building a responsive Fixed-Data-Table. It won't be too narrow or too wide for our page; it'll fit just right!

Setup

Here are the GitHub and CodeSandbox links.

git clone https://github.com/yazeedb/Responsive-FDT2-Hooks/
cd Responsive-FDT2-Hooks
npm install

The master branch has the finished project, so check out the start branch if you wish to follow along.

git checkout start

And run the project.

npm start

The app should be running on localhost:3000. Let's start coding.

Importing table styles

First you'll want to import FDT2's stylesheet in index.js, so your table doesn't look wacky.
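Assuming FDT2 was installed from npm as fixed-data-table-2, the import at the top of index.js would look something like this (the stylesheet ships in the package's dist folder):

```js
// index.js — pull in FDT2's base styles before rendering any table
import 'fixed-data-table-2/dist/fixed-data-table.css';
```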

Generating fake data

Our table needs data, right? Create a file in the src folder called getData.js.

We'll use the awesome...
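The library's name is cut off above, so as a stand-in, here is a dependency-free getData.js sketch producing rows of the shape a data table typically consumes. All field names and sample values here are illustrative, not taken from the article.

```javascript
// getData.js — hand-rolled fake rows, standing in for whatever
// faker-style library the article actually uses.
const FIRST = ['Ada', 'Grace', 'Alan', 'Edsger', 'Barbara'];
const LAST = ['Lovelace', 'Hopper', 'Turing', 'Dijkstra', 'Liskov'];

const pick = (arr) => arr[Math.floor(Math.random() * arr.length)];

function getData(rows = 2000) {
  return Array.from({ length: rows }, (_, id) => ({
    id,
    firstName: pick(FIRST),
    lastName: pick(LAST),
    email: `user${id}@example.com`,
  }));
}

module.exports = getData;
```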

03:16

JavaScript Inheritance and the Prototype Chain freeCodeCamp.org - Medium

This post is designed to be read after you read JavaScript Private and Public Class Fields.

If you prefer to watch a video instead:

https://medium.com/media/3c8ff0a29c1f60dfe13cfe64baa87676/href

Previously, we learned how to create an Animal class in both ES5 and ES6. We also learned how to share methods across instances of those classes using JavaScript's prototype. To review, here's the code we saw in an earlier post.

// ES5
function Animal (name, energy) {
  this.name = name
  this.energy = energy
}

Animal.prototype.eat = function (amount) {
  console.log(`${this.name} is eating.`)
  this.energy += amount
}

Animal.prototype.sleep = function (length) {
  console.log(`${this.name} is sleeping.`)
  this.energy += length
}

Animal.prototype.play = function (length) {
  console.log(`${this.name} is playing.`)
  this.energy -= length
}

const leo = new Animal('Leo', 7)

// ES6
class Animal {
  constructor(name, energy) {
    this.name = name
    this.energy = energy
  }
  eat(amount) {
    console.log(`${this.name} is eating.`)
    this.energy += amount
  }
  sleep(length) {
    console.log(`${this.name} is sleeping.`)
    this.energy += length
  }
  play(length) {
    console.log(`${this.name} is playing.`)
    this.energy -= length
  }
}

const leo = new Animal('Leo', 7)

Now let's say we wanted to start making classes for specific animals. For example, what if we wanted to make a bunch of dog instances? What properties and methods will these dogs have?

Well, similar to our Animal class, we could give each dog a name, an energy level, and the ability to eat, sleep, and play. Unique to our Dog class, we could also give them a breed property as well as the ability to bark. In ES5, our Dog class could look something like this:

function Dog (name, energy, breed) {
  this.name = name
  this.energy = energy
  this.breed = breed
}

Dog.prototype.eat = function (amount) {
  console.log(`${this.name} is eating.`)
  this.energy += amount
}

Dog.prototype.sleep = function (length)...
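The excerpt cuts off here, but re-implementing eat, sleep, and play on Dog defeats the point of sharing methods. A sketch of where this pattern usually heads: delegating to Animal through the prototype chain with Object.create. The method bodies mirror the Animal code shown earlier; the bark details are illustrative.

```javascript
// Self-contained sketch: Dog delegates failed property lookups to Animal.
function Animal (name, energy) {
  this.name = name
  this.energy = energy
}

Animal.prototype.eat = function (amount) {
  console.log(`${this.name} is eating.`)
  this.energy += amount
}

function Dog (name, energy, breed) {
  Animal.call(this, name, energy) // borrow Animal's constructor
  this.breed = breed
}

// Dog instances look up missing methods on Animal.prototype
Dog.prototype = Object.create(Animal.prototype)
Dog.prototype.constructor = Dog

Dog.prototype.bark = function () {
  console.log('Woof Woof!')
  this.energy -= 0.1
}

const charlie = new Dog('Charlie', 10, 'Goldendoodle')
charlie.eat(5)                          // found via the prototype chain
console.log(charlie.energy)             // 15
console.log(charlie instanceof Animal)  // true
```

Resetting Dog.prototype.constructor matters because Object.create wipes the default constructor reference, which would otherwise point back at Animal.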

03:00

End of the Year Round-Up 2018 Coursera Blog

 

By Shravan Goli, Chief Product Officer, Coursera

In a world where 50% of jobs are at risk of automation, and with more than 300M people entering the workforce in the next 10 years, Coursera's goal is to provide learners with access to high-quality content and skills that enable them to make a smooth transition into whatever is next, whether that's learning something new, brushing up on skills for a new job, or entering a new industry altogether.

As a part of that mission, we added 675 new courses to the platform this year, giving our 37 million learners access to more than 3,100 courses and 300+ Specializations. In 2018, we also doubled down on the number of degrees available on Coursera, giving learners a re-imagined degree experience that is highly flexible and scalable, including the launch of our first-ever Bachelor's degree. Our work with industry partners reached new heights, from collaborations with Google on the IT Support Professional Certificate, a program designed to help anyone become an IT support specialist in less than a year, to partnering with Amazon Web Services to close the IT skills gap.

With all of the advancement and change that we've seen across industries and job functions in 2018, we took a look back at the most popular skills and courses that our learners are seeking. This mix of courses represents the diversity of our learners. Some come to Coursera seeking career advancement, some life enhancement; others just have a passion for learning about the next big thing. We're excited to continue providing the highest quality education from the best institutions and companies around the world to our enthusiastic learners in 2019 and beyond!

...

03:00

Here are the skills you'll need in 2019 Coursera Blog

The future-of-work conversation, catalyzed by the rapid advancement of technologies like machine learning and blockchain, and by demand for skills in these areas, has dominated headlines in 2018. Rightfully so: these areas are seeing enormous demand from our 37 million global learners. Upon examination of our learner data, a few interesting trends are emerging. As we look to the year ahead, it's important to pause and reflect on the key skills that are in demand heading into next year.

Artificial Intelligence  

Today, 72% of CEOs regard AI competencies as the most important asset of a company. In turn, the demand for talent with advanced AI skills like machine learning and deep learning continues to increase. But as more engineers gain this technical knowledge, a new skills gap is forming. Now, we're seeing demand from non-technical company leaders across industries to understand how they can align their long-term business strategies with today's AI capabilities. This foundational knowledge will only become more important to executives next year, as AI continues to dominate business plans around the globe. We predict Andrew's latest, AI for Everyone, designed with this challenge top of mind, will join deep technical AI knowledge on next year's list.

Data Science

As web and mobile platforms remain ubiquitous, data science persists as a lucrative skill set across industries. McKinsey forecasts that in the coming year, the U.S. will experience a shortage of 1.5 million managers and analysts who can use big data to make effective decisions. A key lever to accessing those roles is understanding Python as it relates to data science. Predicted to be one of the fastest growing programming languages of 2019, Python is critical for both traditional data roles as well as computer and data science-related jobs in emerging technology sectors like cryptocurrency, ...


Resource generated at IndyWatch using aliasfeed and rawdog