TypeScript Tutorial

JavaScript’s universal browser support and dynamically typed nature make it an ideal universal web language. However, as any developer who comes from an object-oriented background knows, JavaScript’s flexibility can become a liability as applications grow. That’s why Microsoft created TypeScript: to help developers produce better JavaScript using the principles of object-oriented programming.

In this article, we’re going to go over what exactly TypeScript is as well as show you how to get started in using it.

  • What Is TypeScript?
  • The Benefits of TypeScript
  • Part 1) Installation and Set-up
  • Part 2) Compiling to JavaScript
  • Part 3) Static Typing
  • Part 4) Arrays
  • Part 5) Interfaces
  • Part 6) Classes
  • Part 7) Generics
  • Part 8) Modules and Namespaces
  • Part 9) Third-party Declaration Files


What Is TypeScript?

TypeScript is what’s called a superset of JavaScript. It’s not a replacement for JavaScript, nor does it change how your code behaves at runtime. Instead, TypeScript allows programmers to use object-oriented constructs in their code, which is then translated to JavaScript. It also includes handy features like type safety and compile-time type checking. Best of all, it’s completely free and open-source.

TypeScript 2.3 is the latest version of the language as of mid-2017. If you’re already familiar with the superset but have been out of the loop, TypeScript 2.0 introduced some major improvements including more comprehensive error-catching and the ability to obtain declaration files directly via npm install.

Although TypeScript was developed by Microsoft and comes standard with Visual Studio (an IDE software), it can be used in any environment. New and veteran coders alike can benefit from utilizing the superset. This TypeScript tutorial will give you the tools you need to start churning out better JavaScript for your web projects.

The Benefits of TypeScript

There are various benefits to using TypeScript; a few of them include:

  • Thanks to static typing, TypeScript code is more predictable and easier to debug than JavaScript.
  • Object-oriented features like modules and namespaces make organizing large code bases more manageable.
  • The compilation step catches errors before they reach runtime.
  • The popular framework Angular is written in TypeScript. Although you can also use regular JavaScript with Angular, most tutorials you’ll find for the framework are written in TypeScript. Anyone who wants to take full advantage of Angular and similar development platforms will be better off knowing TypeScript.
  • TypeScript is similar to CoffeeScript, another language that compiles to JavaScript, but the former is more flexible than the latter thanks to static typing.

Part 1) Installation and Set-up

Visual Studio 2017 already comes equipped with the TypeScript plugin, and it is included in update 3 for Visual Studio 2015. If you’re using an older version of Visual Studio or a different environment, you can get the TypeScript source code and install it as a Node.js package:

npm install -g typescript

Once installed, you can start making TypeScript files and adding them to existing applications. TypeScript files can be identified by the *.ts extension. Whenever you save a TypeScript file, the Visual Studio plugin automatically generates a corresponding JavaScript file with the same name that’s ready for use. Every time you create a new TypeScript project, you’ll notice that an app.ts file containing the default code implementation is also generated.

Visual Studio offers a wonderful side-by-side view for corresponding TypeScript and JavaScript files. Whenever you save your TypeScript, you can immediately see the changes in your JavaScript.

You get a similar experience with codepen.io. With CodePen, you can write your TypeScript code and then view the compiled JavaScript. Below is a side-by-side comparison of some uncompiled TypeScript and compiled JavaScript code in CodePen.

The examples in this TypeScript tutorial will assume that you’re using Visual Studio, but several other IDEs and text editors also offer TypeScript support including auto-complete suggestions, error catching and built-in compilers. WebStorm, Vim, Atom, Emacs and Sublime Text all have either built-in support or plugins for TypeScript.

Part 2) Compiling to JavaScript

Since .ts files cannot be directly used in browsers, they must be compiled to regular JavaScript, which can be accomplished in a few ways. Aside from using an IDE or an automated task runner like Gulp, the simplest way is to use the command line tool tsc as follows:

tsc index.ts

The above command will give you a file named index.js. If a .js file with that name already exists, it will be overwritten.

It’s also possible to compile more than one file at once by simply listing them:

tsc index.ts main.ts

You can compile all of the .ts files in the current folder with the following command, but keep in mind that it doesn’t work recursively:

tsc *.ts

To automatically compile whenever changes are made to a file, you can set up a watcher process:

tsc index.ts --watch

If you’re working on a large project with many .ts files, it may be helpful to create a tsconfig.json file. You can read more about configuration files in the TypeScript docs.
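For illustration, a minimal tsconfig.json might look something like this (the specific option values are assumptions and will vary by project):

```json
{
  "compilerOptions": {
    "target": "es5",
    "outDir": "build",
    "sourceMap": true
  },
  "include": ["src/**/*.ts"]
}
```

With a tsconfig.json in place, running tsc with no arguments compiles the whole project according to these settings.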

Part 3) Static Typing

The defining feature of TypeScript that separates it from JavaScript and CoffeeScript is static typing support, which allows you to declare variable types. The compiler makes sure that variables are assigned the correct types of values, and it can even make inferences if type declarations are omitted.

In addition to several primitive types like “number,” “boolean” and “string,” you can also use a dynamic type called “any.” “Any” is similar to the “dynamic” keyword in C# in that it allows you to assign any type of value to the variable. For this reason, TypeScript doesn’t flag type errors for “any” variables.

Variables are declared in TypeScript the same way they are in JavaScript. You can declare a type by adding a colon and the type name:

var num: number = 45;

In the above example, the variable num has been assigned the type “number.” You can read about all of the available data types in the TypeScript documents.
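As a quick sketch of how explicit annotations, inference, and “any” interact (the variable names here are just for illustration):

```typescript
// Explicit annotation: num may only ever hold numbers.
var num: number = 45;

// Inference: greeting is inferred as a string from its initializer,
// so assigning a number to it later would be a compile-time error.
var greeting = "hello";

// "any" opts out of type checking entirely, so reassigning a
// different type of value is allowed.
var anything: any = 45;
anything = "now a string";

console.log(typeof num, typeof greeting, typeof anything);
```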

Part 4) Arrays

With TypeScript, you can create typed arrays using bracket notation:

var array: string[] = ['dog', 'cat'];
var first: string = array[0];

The above TypeScript would give you the following JavaScript:

var array = ['dog', 'cat'];
var first = array[0];

As you can see, arrays in TypeScript are accessed with a zero-based index. You can also build complex variables using primitive types:

var person = { firstName: 'Steve', lastName: 'Jobs' };

Even if you don’t explicitly assign a type to the variable, as in the above example, TypeScript infers that “person” is a complex object with string properties. If you were to later assign anything other than a string to either of those properties, you would get a design-time error.

Part 5) Interfaces

Defining an interface allows you to type-check combinations of variables to make sure they go together. Interfaces do not translate to JavaScript; their sole purpose is to help developers. For example, you can define an interface with its properties and types as follows:

interface Food {
    name: string;
    calories: number;
}

You can then tell a function to expect an object that fits your “Food” interface to ensure that the right properties will always be available:

function speak(food: Food): void {
    console.log("This " + food.name + " contains " + food.calories + " calories.");
}

Now, when you define an object that has all of the properties your “Food” interface expects, types will automatically be inferred. If TypeScript suspects you’ve made a mistake, the compiler will let you know. For example, take the following code:

var cherry_pie = {
    name: "cherry pie",
    caloreis: 500
};
speak(cherry_pie);

In the above example, one of our properties is misspelled, so you should expect an error message like this:

main.ts(16,7): error TS2345: Argument of type '{ name: string; caloreis: number; }
is not assignable to parameter of type 'Food'.
Property 'calories' is missing in type '{ name: string; caloreis: number; }'.

Property order doesn’t matter so long as the required properties are present and of the correct type; if that’s not the case, you’ll receive a warning from the compiler like the one above.
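Interfaces can also mark properties as optional by adding a question mark to the property name. The sketch below adapts the “Food” interface (the describe function is purely illustrative):

```typescript
interface Food {
    name: string;
    calories?: number; // the ? makes calories optional
}

function describe(food: Food): string {
    // calories may be undefined when it's omitted, so guard before using it
    if (food.calories === undefined) {
        return food.name + " (calories unknown)";
    }
    return food.name + " contains " + food.calories + " calories.";
}

var apple: Food = { name: "apple" };                   // OK: calories omitted
var pie: Food = { name: "cherry pie", calories: 500 };

console.log(describe(apple));
console.log(describe(pie));
```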

Part 6) Classes

Classes in TypeScript work mostly the same as classes in other object-oriented languages. Since the ECMAScript 2015 update was released, classes are also now native to JavaScript, but the rules for classes are a little stricter in TypeScript.

You can create a simple TypeScript class as follows:

class Menu {
    items: Array<string>;
    pages: number;
}

Properties are public by default, but they can be made private. Once you establish some classes, you can then set up constructors to simplify creating new objects.
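For example, a hypothetical constructor for the Menu class might look like this:

```typescript
class Menu {
    items: Array<string>;
    pages: number;

    // The constructor assigns both properties when a Menu is created.
    constructor(items: Array<string>, pages: number) {
        this.items = items;
        this.pages = pages;
    }
}

var lunch = new Menu(["soup", "salad"], 2);
console.log(lunch.items.length + " items across " + lunch.pages + " pages");
```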

Instead of giving each class its own file, you can also combine short classes that go together, such as Point, Size, and Rectangle, into one file. Try to keep such combined files under 150 lines of code so they remain easy to navigate.

Part 7) Generics

When working with variables of different types, it may be helpful to set up a generic. Generics are reusable templates that allow single functions to accept arguments of different types. This technique is preferable to overusing “any” type variables since generics preserve types.

Take a look at the following script, which receives an argument and returns an array containing that same argument. The “T” following the function name indicates a generic. When the function is called, all instances of “T” will be replaced by the provided types:

function genericFunc<T>(argument: T): T[] {
    var arrayOfT: T[] = [];
    arrayOfT.push(argument);
    return arrayOfT;
}

var arrayFromString = genericFunc<string>("beep");
console.log(typeof arrayFromString[0]); // "string"

var arrayFromNumber = genericFunc(45);
console.log(typeof arrayFromNumber[0]); // "number"

In the above example, the type is manually set to string the first time the function is called. That step isn’t strictly required, because the compiler can infer the type from the argument that’s passed, as it does in the second call.

Even though it’s not a necessity, it’s good to get into the habit of providing the type explicitly, since inference can become harder to follow as your code grows more complex. Combining generics with interfaces goes a long way toward keeping type errors out of the resulting JavaScript.
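As a sketch of combining the two, a generic function can constrain its type parameter to an interface; the names below are illustrative:

```typescript
interface Named {
    name: string;
}

// The constraint "T extends Named" guarantees every argument has a
// string name property, while the rest of its shape is preserved.
function wrapInArray<T extends Named>(item: T): T[] {
    var result: T[] = [];
    result.push(item);
    return result;
}

var foods = wrapInArray({ name: "cherry pie", calories: 500 });
console.log(foods[0].name, foods[0].calories); // both properties survive
```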

Part 8) Modules and Namespaces

Modules provide yet another way to organize and consolidate code. If used effectively, splitting your code into reusable components can cut down on your project’s number of files thus making maintenance much easier. In TypeScript, internal modules are referred to as “namespaces” while external modules are just called modules.

TypeScript has a special syntax for exporting and importing modules; however, the language can’t directly handle the wiring between files, so we need some third party libraries to facilitate the use of external modules. Specifically, we use either RequireJS for browser apps or CommonJS for Node.js.

Imagine you’re working on a browser app, and you have two .ts files: one that exports a function, and another that imports and calls it. They might look as follows:

// exporter.ts
var sayHi = function(): void {
    console.log("Hi!");
};

export = sayHi;

// importer.ts
import sayHi = require('./exporter');

sayHi();
Now, to implement your module, you first need to download require.js and include it via a script tag. Then, you must pass an extra compiler flag to let TypeScript know that you’re using require.js, which is done as follows:

tsc --module amd *.ts

Learning how to use modules in TypeScript takes time, yet the payoff is immense. You can read more about them in the TypeScript docs on modules.

Part 9) Third-party Declaration Files

When you need to use a library originally intended for regular JavaScript, you must apply a declaration file to make it TypeScript compatible. Declaration files, which have the extension .d.ts, hold information about libraries and their APIs. You could write your own declaration files by hand, but you can usually find a ready-made .d.ts file for any library you need.

The best place to look is DefinitelyTyped, a massive public repository with literally thousands of libraries. While you’re there, you can also pick up Typings, a handy Node.js module for managing your TypeScript definitions.

TypeScript Tutorial In Summary

Using TypeScript is almost always preferable to directly writing JavaScript. Even if you’re completely comfortable with JavaScript, taking some time to learn TypeScript will make you a faster and more efficient web developer. TypeScript, however, isn’t the only “alternative” JavaScript language. Other popular choices include CoffeeScript (as previously mentioned), Babel, and Elm.

Judging by Google Trends, the other choices are fairly popular as well. However, TypeScript seems to be rising in popularity, which is all the more reason to learn it.

Essential Application Performance Management Tips and Tools

As an application’s architecture grows, so does the risk of a component malfunctioning. Working with a massive team of developers and administrators should theoretically ensure a well-oiled machine, but people make mistakes too, so it’s helpful to have automated tools that continually monitor all of the moving parts. That’s why we have application performance management tools.

  • What Is Application Performance Management?
  • The Evolution of Application Performance Management
  • The Basics of Application Performance Management
  • Defining “Performance”
  • The 5-Step Application Performance Management Plan
    • Step 1: Start With the End-User Experience
    • Step 2: Model the Application’s Architecture
    • Step 3: Trace Your User Transactions
    • Step 4: Do Some Deep-Diving
    • Step 5: Analyze the Data
  • Application Performance Management Monitoring Tools
  • Application Performance Management Tools for Writing Code
  • More Application Performance Management Tips
    • 1. Pick Your Priorities
    • 2. Be Proactive
    • 3. Set Up Alerts and Delegate Responsibilities
    • 4. Look for Geographical Response Time Discrepancies
    • 5. Centralize Your IT Response Procedures
    • 6. Share Your SLA
    • 7. Regularly Review Your Test Results
    • 8. Issue Data Reports for Everyone
    • 9. Ensure Quality at Every Step
    • 10. Choose Your Third Parties Wisely
  • The Future of Application Performance Management: APM vs UXM
  • Learn More About Application Performance Management


What Is Application Performance Management?

In web development, consistency is key. Application performance management (APM) is what developers do to make sure every user has a consistently positive experience. The principles of APM can be applied to a single program or a whole collection of applications that a business needs to operate.

Application performance management is an art, a field of study and a massive industry.
There’s a plethora of APM tools that monitor the speed of transactions between end-users and network infrastructure, which gives developers an end-to-end picture of where bottlenecks and service interruptions might occur. Such software tools conduct ongoing tests that track performance metrics and diagnose problems as they arise to ensure an optimal level of service at all times. Since there are many metrics to monitor, a complete application performance management plan usually requires a suite of APM tools.

You may see the acronym APM used to mean “application performance management” or “application performance monitoring.” The terms are basically synonymous, but a proper APM strategy emphasizes both monitoring and management. This application performance management guide will focus on the various methodologies and software tools that developers use to meet their quality-assurance goals.

The Evolution of Application Performance Management

APM used to be considered a niche field that system admins worried about after an application was already up and running. Now, APM is often integrated throughout an application’s lifecycle, including the pre-deployment and post-deployment stages. Therefore, the concept has become increasingly relevant to developers, testers and business teams.

For example, using APM software tools during the development process can help coders more efficiently structure their code. At the same time, ops personnel can conduct synthetic testing across different platforms, browsers, and APIs to detect small performance issues before they become bigger ones down the line. Although business teams are primarily concerned with the bottom line, they may also use APM solutions to resolve issues related to online financial transactions. Ensuring consistent performance at all stages of development and anticipating bottlenecks lessens the probability of post-launch problems.

The Basics of Application Performance Management

Sluggish load times can stem from a number of issues related to APIs, servers, cloud-hosting services, databases, mobile carriers or the user’s browser. Multiple problems can lead to the same result: a frustrated user. A combination of load-testing, real-user monitoring, root-cause analysis and other APM techniques enables developers to pinpoint the exact problem.

Software that conducts basic availability monitoring, which involves testing IP protocols and network services, is an essential part of APM, but you also need to establish some optimal performance thresholds and set up real-time alerts that let you know when functionality dips below them. A comprehensive APM strategy must take into account dozens of factors that affect speed and reliability.

Below are some examples of components that need to be monitored and managed:

  • Basic server metrics such as CPU and memory
  • Performance of individual web requests
  • Performance of application dependencies including caching, databases and other web services
  • Code level performance profiling with thorough transaction traces
  • Application log data and errors
  • Application framework metrics such as performance counters and JMX MBeans
  • Other vital applications metrics as determined by the development team


Defining “Performance”

Application downtime and unavailability can certainly hurt a company’s bottom line, but mundane slowness is actually the biggest hindrance to user retention. That’s because potential customers are 10 times more likely to experience a slowdown than an outage.

As stated earlier, first impressions are of the utmost importance, so the two metrics you should be most concerned with are your time to first byte and how long it takes for all above-the-fold content to be displayed. APM tools can emulate and monitor all types of devices and browsers to determine how fast pages render for users from all around the globe. Let these “visual user experiences” guide your optimization efforts, and everything else will fall into place.
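Time to first byte is simple arithmetic over navigation timestamps; the helper below is a hypothetical sketch that works on plain millisecond values (in a real page they would come from the browser’s Navigation Timing API):

```typescript
// Hypothetical helper: both timestamps are in milliseconds,
// as reported by performance.timing in the browser.
function timeToFirstByte(navigationStart: number, responseStart: number): number {
    // TTFB is the gap between starting navigation and receiving
    // the first byte of the server's response.
    return responseStart - navigationStart;
}

var ttfb = timeToFirstByte(1000, 1350);
console.log(ttfb + " ms to first byte");
```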

The 5-Step Application Performance Management Plan

Tech research firm Gartner created a handy guide to help individuals and development teams make an APM plan for their projects. Such a detailed plan may not be necessary for smaller applications, but following these five steps will ensure that all of your bases are covered.

Step 1: Start With the End-User Experience

Numbers only tell you so much, so you must see for yourself how users experience the application. Monitoring the end-user experience will help you identify how performance issues directly impact your audience.

Step 2: Model the Application’s Architecture

Make a visual representation of your application’s runtime architecture so that you can identify all of the different components and how they interact with one another.

Step 3: Trace Your User Transactions

Trace how transactions flow down possible paths of your architecture model to determine which nodes could be potential sources of problems for users.

Step 4: Do Some Deep-Diving

Set up deep-dive monitoring for each component that impacts user transactions.

Step 5: Analyze the Data

Conduct IT operations analytics on the data you collected from monitoring to identify weak links and anticipate potential end-user problems.

Professional development teams use a combination of different automated monitoring tools from various vendors to meet their application performance goals. Many APM solutions boast big data analytics; however, if you don’t consider the end-user experience, then there’s no point in collecting data. If you take anything away from Gartner’s 5-step plan, remember to put users first and save data analysis for last.

Application Performance Management Monitoring Tools

A comprehensive overview of every APM tool could fill a book. ProfitBricks has compiled an impressive list of their top 40 APM tools. Stackify also has a thorough guide that compares some of the most popular APM solutions. Below is a collection of top-notch monitoring tools along with the languages they support:

  • New Relic APM (.NET, Java)
  • AppDynamics (.NET, Java, PHP, C++, Python, Node.js)
  • Stackify Retrace (.NET, .NET Core, Java)
  • DynaTrace (.NET, Java)
  • Scout (Ruby on Rails)
  • TraceView (.NET, Java, PHP, Python, Ruby on Rails, Node.js, Go)
  • ManageEngine Applications Manager (.NET, Java, Ruby on Rails)


Application Performance Management Tools for Writing Code

In addition to APM tools that run on servers, there are also tools that developers can use right from their workstations while writing and testing code. Below are a few popular tools for specific languages:

  • Glimpse (.NET)
  • XRebel (Java)
  • Zend Z-Ray (PHP)
  • Scout Devtrace (Ruby)
  • Stackify Prefix (.NET, Java)
  • MiniProfiler (.NET, Ruby, Go, Node.js)


More Application Performance Management Tips

No matter which APM tools you employ, you should use the following best practices to steer your overall performance management strategy. Many of these tips apply to large companies that must manage many applications at once, but some may also be useful to individual developers:

1. Pick Your Priorities

We all strive for perfection, but when resources are limited, you have to know your priorities. Businesses should determine which applications or components are most crucial to their operation and make sure they are always being monitored. Some transactions should be monitored more heavily than others, so you may want to establish different polling frequencies for different transactions.

2. Be Proactive

As a rule of thumb, if more than 35 percent of your performance issues are identified by users before the IT team notices them, then you need to do a better job of identifying potential bottlenecks, errors and constraints.

3. Set Up Alerts and Delegate Responsibilities

Who is in charge of fixing which problems? If you’re part of a big team, make sure you know who should be alerted when there are performance dips. To avoid “false alarms,” you may want to require a certain number of response-time violations before an alert is triggered.
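As a sketch of that idea, an alert rule might fire only after several consecutive response-time violations (the function name and threshold values are hypothetical):

```typescript
// Fire only after maxViolations consecutive samples exceed thresholdMs,
// so a single slow response doesn't page anyone.
function shouldAlert(samplesMs: number[], thresholdMs: number, maxViolations: number): boolean {
    var consecutive = 0;
    for (var i = 0; i < samplesMs.length; i++) {
        if (samplesMs[i] > thresholdMs) {
            consecutive++;
            if (consecutive >= maxViolations) {
                return true;
            }
        } else {
            consecutive = 0; // a fast response resets the streak
        }
    }
    return false;
}

console.log(shouldAlert([120, 900, 130], 500, 3));       // one slow sample: no alert
console.log(shouldAlert([900, 950, 1200, 130], 500, 3)); // three in a row: alert
```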

4. Look for Geographical Response Time Discrepancies

For big businesses with offices spread over multiple locations, employees in remote geographic areas may experience slower response times. Therefore, it’s helpful to compare availability and response times across all locations. This is one of the major reasons why using a CDN is helpful.

5. Centralize Your IT Response Procedures

Large companies may depend on literally hundreds of applications, so purchasing and maintaining multiple monitoring products for each one may be impractical. Also, a lack of integration across monitoring tools makes it difficult to troubleshoot and draw conclusions from the data you collect. Therefore, if you’re handling APM for multiple applications, seek solutions that allow you to monitor as many metrics as you can at once.

6. Share Your SLA

Share your service level agreement reports with users and stakeholders. If you explicitly state your goals, then others can actually hold you accountable.

7. Regularly Review Your Test Results

While you’ll naturally fix problems as they arise, keep the big picture in mind and periodically review all of your data to identify areas that need improvement. Mapping data over time can help you make important business decisions, such as whether or not it’s worth investing in performance optimization.

8. Issue Data Reports for Everyone

Since different stakeholders care about different metrics, tailor data reports to each team and make sure everyone receives them on a regular basis.

9. Ensure Quality at Every Step

Quality control shouldn’t be a final step; load testing, functional testing, regression testing and performance testing should be conducted throughout the development cycle. This practice will greatly streamline APM down the road as it allows you to reuse your test scripts for production monitoring.

10. Choose Your Third Parties Wisely

While it’s tempting to assume that relying too heavily upon third parties leaves your application more vulnerable to performance problems, that’s not always the case. Selecting reliable third parties to support your app can be better than trying to handle too many things in-house.

The Future of Application Performance Management: APM vs UXM

We may soon see the term APM replaced with a new concept: User Experience Management, or UXM. UXM basically means the same as APM, but it puts extra emphasis on the user rather than metrics. Whereas traditional performance management requires an assortment of APM tools from different vendors, UXM tools are increasingly integrated into single-vendor solutions.

You can think of UXM as a more comprehensive approach that incorporates different APM strategies. For example, SmartBear’s AlertSite can be considered a UXM solution. It contains about a dozen toolsets for load testing, real user monitoring, transaction tracing, root-cause analysis, real browser recording, core synthetic monitoring and more.

There’s a tendency in corporate culture to obsess over data, which is understandable when there are investors and deadlines to worry about. Nonetheless, when it comes to application performance management, perceived performance is far more important than numbers. UXM aspires to integrate and unify APM solutions that prioritize user experience over data analytics.

Learn More About Application Performance Management

Every developer should have some understanding of APM. When you’re part of a huge development team, it’s easy for individuals and departments to lose sight of the bigger picture. A comprehensive APM strategy allows everyone to share information throughout the development process so that potential problems get addressed before end-users ever experience them.

Using Google Lighthouse To Enhance Progressive Web Apps

As part of its effort to support progressive web apps, which offer a growing number of complex features, Google created Google Lighthouse to establish standards for today’s web. With innovation accelerating at an ever increasing pace, new developers sometimes feel as though they are drifting in the dark when it comes to optimizing their websites. Google Lighthouse hopes to serve as a beacon to guide developers in a direction that is best for all users.

What Is Google Lighthouse?

Google Lighthouse is an open-source auditing tool that helps developers improve the performance and accessibility of their web projects. Anyone can use it for free to see how their website stacks up against Google’s high standards. If SEO value is of importance to you, which it should be, following Google’s guidelines can significantly raise your standing in the search results.

How to Use Google Lighthouse

The easiest way to get started is to install the Google Lighthouse extension for Chrome.

Alternatively, you can run Lighthouse from the Node CLI. This requires Node 6 or later, and Lighthouse can be installed using the following command:

npm install -g lighthouse

Once installed, run Lighthouse with the following command and replace yourwebsite.com with the actual website you want to test.

lighthouse https://yourwebsite.com/

If you prefer using the Chrome extension, simply navigate to a particular page within your browser and click the “Generate Report” button from the Lighthouse extension. Lighthouse will then tell you how the website measures up to Google’s standards. The report will explain your website’s strengths/weaknesses, and it will suggest ways to boost your score.

How Does Google Lighthouse Work?

Lighthouse works by running tests to simulate different situations that impact user experience. Upon running a Lighthouse report, you’ll receive four different grades for:

  1. Progressive Web App
  2. Performance
  3. Accessibility
  4. Best Practices
(Image: results from running a Lighthouse report on the simple PWA at https://airhorner.com/)

Of course, numbers can only tell you so much, so you must see for yourself how users experience your website and make judgments about how to prioritize which bottlenecks you should focus on.

If Lighthouse gives your website a low grade at first, don’t be disappointed; take it as a challenge to improve your optimization skills. Following the pointers that Google gives you and using this guide as a reference will help get your project up to standard. Once you create your own set of best practices, you can go back and optimize your older web projects, and your future projects will be that much better.

Google Lighthouse 1.3 and the “Do Better Web”

Because it serves as the starting point for much of the web’s traffic, Google has an interest in making sure the websites in its top search results are truly of the highest caliber. That’s why the company launched the “Do Better Web” initiative: to create web-wide standards and best practices.

As of mid-2017, Lighthouse 1.3 is the latest incarnation. It includes over 20 new suggestions for modernizing CSS and JavaScript features in addition to offering other specific recommendations that can improve overall website performance.

Another new feature is the Lighthouse web viewer, which lets you drag and drop your Lighthouse files to generate HTML reports that you can share with others via GitHub. After that, you can access and edit those reports by simply adding ?gist=GIST_ID to the URL. This feature is especially helpful for teams working together on larger projects.

What Does Google Lighthouse Look For?

According to the official docs, Lighthouse grades your website on the following four criteria:

  1. Security: Is it being served from a secure origin?
  2. Accessibility: Does it work for all users?
  3. Perceptual Speed: Do users perceive it as fast?
  4. Offline Connectivity: Will it load offline or in unreliable network conditions?

Those questions are incredibly broad, so let’s break down some specific things that Lighthouse looks for:

1. Security

Google is cracking down on unencrypted websites in an effort to raise security standards for the overall web. Therefore, HTTPS is mandatory to protect your users from potential hackers. If you need a free SSL certificate for your CDN domain, consider using Let’s Encrypt. Here are some general tips to help improve the security of your site:

  • Review your Content Security Policy and migrate from HTTP to HTTPS. Make sure all of your links are properly updated.
  • Only depend on third parties that also use HTTPS.
  • Serve pages using HTTP Strict Transport Security, or HSTS, headers.
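For example, an HSTS header is just a response header whose value is built from a max-age directive; the helper below is a hypothetical sketch:

```typescript
// Hypothetical helper that builds a Strict-Transport-Security header value.
// maxAgeSeconds is how long browsers should remember to use HTTPS;
// includeSubDomains extends the policy to all subdomains.
function hstsHeader(maxAgeSeconds: number, includeSubDomains: boolean): string {
    var value = "max-age=" + maxAgeSeconds;
    if (includeSubDomains) {
        value += "; includeSubDomains";
    }
    return value;
}

// e.g. res.setHeader("Strict-Transport-Security", hstsHeader(31536000, true));
console.log(hstsHeader(31536000, true));
```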


2. Accessibility

Ensuring accessibility means more than just making sure your app works in every browser. You should also look for ways to encourage user engagement. A major goal behind the concept of progressive web apps is making websites feel more like self-contained applications, which is why Lighthouse encourages developers to include an “add to homescreen” prompt. Pinning a favicon to the user’s desktop requires a web application manifest, and tools such as Real Favicon Generator can help you optimize your favicons for every browser, OS and platform.

If you inspect your web app manifest from the Chrome DevTools application panel, you’ll see all of your favicons along with their start URL and theme colors. You can also set up a splash screen that draws from the background_color, name and icons in your web app manifest. Google recommends shipping a 192px homescreen icon and a 512px splash-screen icon. These considerations are very important if you plan on releasing a project through an app store, since Google uses the meta information in your manifest to categorize your app in its directories.
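A web app manifest covering the fields mentioned above might look like the following. The names, colors, and icon paths are illustrative placeholders for your own app:

```json
{
  "name": "My Progressive Web App",
  "short_name": "MyPWA",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#317efb",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

You then link the file from each page with `<link rel="manifest" href="/manifest.json">` so the browser can find it.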

Once you set up your web app manifest, you can use app installer banners to prompt users whenever you deem appropriate. While it’s tempting to prompt every user upon their initial visit, many people are put off by such suggestions when they visit a website for the first time. You may get better results if you delay the prompt until after the user has engaged meaningfully with your app. For example, certain websites wait until a user reaches their order confirmation page before asking them to install a homescreen shortcut.
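The "delay the prompt until the user has engaged" idea can be sketched as a small decision helper. The thresholds below (three page views or one completed action) are invented for illustration, not Google recommendations:

```typescript
// Track a few engagement signals and decide when the install
// prompt is worth showing. All field names here are illustrative.
interface EngagementState {
  pageViews: number;
  completedAction: boolean; // e.g. reached an order confirmation page
  promptAlreadyShown: boolean;
}

function shouldOfferInstallPrompt(state: EngagementState): boolean {
  // Never nag: one prompt per user at most.
  if (state.promptAlreadyShown) return false;
  // Offer after a meaningful action, or after repeated visits.
  return state.completedAction || state.pageViews >= 3;
}
```

In the browser you would pair this with the `beforeinstallprompt` event: call `event.preventDefault()` to suppress the automatic banner, stash the event, and invoke its `prompt()` method once this helper returns true.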

3. Perceptual Speed

It’s been said that numbers can only tell you so much, but programmers are getting better at quantifying subjective user experiences. In addition to the standard metrics such as transaction speed, Lighthouse also looks at factors that affect the user’s perception of speed using the Perceptual Speed Index.

How your content loads is just as important as how fast it loads. For example, if your fonts and images jitter as they load, users may assume your website isn’t working and hit the back button. Such decisions are made within mere milliseconds, so these little things can make a big difference in user retention.

“Content jumps” also have a major impact on perceptual speed. Returning users know where the content they want to see is located, so they often start scrolling while the page is still loading. If you haven’t specified pre-defined heights for your containing elements in CSS, such scrolling can send users up and down the page randomly as each element renders. Therefore, the user gets frustrated and loses confidence in your website’s performance.

Figuring out exactly how much physical space each element takes up can be tedious, but simply specifying a minimum height for your containing elements can go a long way in reducing content jumps. As to what your containing elements should look like while they are awaiting content, studies suggest that skeleton screens are preferable to progress bars and spinning wheels.
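In CSS, reserving that space is as simple as a min-height rule on the containers that load late. The class names and sizes below are illustrative:

```css
/* Reserve space for late-loading content so the page doesn't jump.
   Pick values close to each element's eventual rendered height. */
.article-hero-image {
  min-height: 320px;
}
.comment-list {
  min-height: 600px;
}
/* A simple skeleton placeholder shown while content loads. */
.skeleton {
  background: #eee;
  border-radius: 4px;
}
```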

Other key indicators of perceptual speed include the “first meaningful paint” and “time to interact”. The first paint, or the time it takes for the user to see the first pixel of content, has long been a standard for measuring webpage performance; however, when we talk about the first “meaningful” paint, we refer to how long it takes for your unique primary content to load rather than just nav bars and other things users are used to seeing. The first meaningful paint assures the visitor that they’ve come to the right place.

Time to interact, or TTI, is the amount of time it takes for your app to start responding to user inputs. Like most performance related matters, improving the TTI and first meaningful paint is best accomplished by making sure your web app is sending as little data as possible. If you set up a service worker, your site can load even faster for returning visitors.

4. Offline Connectivity

Speaking of service workers, the ability to function without a stable internet connection is perhaps the greatest achievement of progressive web apps. Using service workers, you can set it up so that some of your website’s files get saved by the user’s browser. That way, users can keep using the latest cached version of your website even if they get disconnected because much of the data is stored locally. Your site will also load much faster whenever the user returns since some resources are already stored on their own device.
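A minimal cache-first service worker following the pattern described above might look like this. It is a browser-only sketch (service workers cannot run in plain Node.js), and the cache name and file list are placeholders for your own assets:

```javascript
// sw.js – a minimal cache-first service worker sketch.
const CACHE_NAME = "site-cache-v1";
const PRECACHE_URLS = ["/", "/index.html", "/styles.css", "/app.js"];

self.addEventListener("install", (event) => {
  // Save a copy of the core files when the worker is installed.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});

self.addEventListener("fetch", (event) => {
  // Serve from the cache when possible, falling back to the network.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```

Your page opts in with `navigator.serviceWorker.register("/sw.js")`; after the first visit, the precached files are served locally even when the network is unreliable.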

Google offers a comprehensive explanation of service workers. If you’re new to the concept, IndexedDB is a popular client-side storage API to start off with. There are also a few Webpack plugins that support service worker caching.

General Google Lighthouse Tips

Similar to Google PageSpeed Insights, getting your Google Lighthouse score to 100 may take a fair amount of work. Don’t be discouraged if Google knocks you for dozens of things that you didn’t think were important. It’s up to you to decide which metrics matter for your website. Take each suggestion as a learning opportunity and a challenge to drive your next move. Here are a few things to keep in mind:

  1. Since Lighthouse is open-source, there’s a thriving community of contributors and a robust repository to draw from. The issues tracker is an excellent resource for trading tips with other developers about audit metrics and other Lighthouse related issues.
  2. If you want a very thorough walkthrough of what all goes into meeting Google’s standards, Medium.com has a step-by-step tutorial for creating a simple progressive web app with a 90+ score in Google Lighthouse using Ionic 2.
  3. One easy way to limit data requests is to reduce the size of your images where possible. You should inspect your bundle to see if you’re using any third-party code that you don’t need, and make sure you compress as much as you can. Webpack is a helpful tool for managing your dependencies, which is also essential for optimizing your project’s perceptual speed.
  4. Speaking of accessibility, another core principle of progressive web design is catering to mobile users, which is why Lighthouse will tell you to include a meta viewport tag in the <head> of your documents. Doing so will ensure that your app works across the immense variety of browsers and devices.
  5. Be mindful that not all browsers currently support service workers, and make sure your site works for users who don’t have JavaScript enabled. Using the <noscript> tag will allow essential content to be displayed even without JavaScript.
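The viewport and <noscript> tips above amount to two small snippets in your HTML; the fallback text here is only an example:

```html
<head>
  <!-- Lets the layout adapt to the device's screen width -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
  <noscript>
    <!-- Essential content shown when JavaScript is disabled -->
    <p>This site works best with JavaScript enabled, but the content
       below remains readable without it.</p>
  </noscript>
</body>
```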


As you add more content and features to your websites, make Google Lighthouse audits a routine part of your maintenance plan. Lighthouse will keep being updated, and it is likely to remain a staple of the web development community for many years to come, so everyone should get up to speed on Google’s standards as soon as possible.

Content Delivery Networks Creating Connected Classrooms

Connected Classrooms are not a pipe dream, nor a thing from the future. They exist here and now and are transforming education the world over, both online and in the real world.

What is a Connected Classroom?
Connected Classrooms offer learning experiences that help improve students’ classroom practice. This is achieved via a global network that allows students to develop their ideas together with peers and experts worldwide.

One example of such a classroom is “Connecting Classrooms” by the British Council. The idea behind such classrooms is to help students improve their knowledge and skills in today’s globalized economy.

How is This Achieved?
The backbone of these global classrooms is the Content Delivery Network (CDN). CDNs are the vessels of content delivery, and we all interact with them every day – they serve articles on news sites, online shopping pages, and social media feeds.

Using a CDN allows you to distribute your content from several places at the same time. This improves your coverage and benefits end consumers by giving them faster, more reliable access to your content.

Local PoPs facilitate this information sharing. A PoP (point of presence) is a local cluster of CDN servers; when someone in Paris accesses a US-hosted website through a CDN, the content is served from a nearby PoP. This is much quicker than having the visitor’s requests and your responses travel back and forth across the ocean.

Tools of Connected Classrooms
Do you use Skype? You most likely do. The good news is that Skype upends the notion of the traditional classroom by enabling students to listen and ask questions in real time – it’s interactive. Students can share their progress with their teachers in a way that was never possible before.

OneNote helps teachers connect. Information gets shared through one source from any device; teachers can assign tasks and share notes as well as track progress. OneNote also benefits students by providing them with their own sections for sharing content like videos and presentations.

Even a video game can be a connected classroom. Children already love Minecraft. For example, art students can create sculptures within the game that relate to a theme, then discuss their creations and leave feedback on a website or via OneNote.

The Advantages of Connected Classrooms
Connected Classrooms are beneficial because they offer global collaboration, allowing teaching bodies to connect learners with anyone, anytime, anywhere. They also offer flexible options through the benefits of CDNs, which provide efficient solutions such as VoIP, internet access, media distribution, social media, and cloud connectivity.

The costs of teaching are also lower since using a single carrier is a far more efficient and affordable way of using bandwidth.

To conclude, CDNs enable virtual classrooms in which students no longer must attend a local place of learning – virtual classrooms where enormous volumes of data are exchanged. The goal is to enhance experiential learning, which forms the core of research and education environments today.

Using Big Data to Determine the Success of Your Business

The term “big data” refers to data sets so large that they must be analyzed computationally, and such data is an asset in establishing the foundation of any business. A business, irrespective of its size, is more likely to prosper if it relies upon a logical, scientific base of data. In business terms, big data proves very advantageous in reaching predetermined goals, and it would be a serious mistake for a business owner to regard big data as irrelevant to their company. In fact, big data can do unexpected wonders in the corporate world.

Big data can be used to determine the success of your business in the following ways:

Big Data Provides Financial Services for Any Business.
Big data plays a very important role in providing businesses with financial capabilities. It powers various financial services, including compliance analytics and customer analytics. Credit card companies, banks, and wealth management advisories all use big data to deliver these services.

Role of Big Data in Business Communications.
A business cannot function without communication, and big data acts as a mediator between the business and its communications. Computational analysis of big data improves how communication is received, lowering the risk of miscommunication. Communication is a must for any successful business, and big data helps lay its foundation.

Linking Up of Depths of Knowledge in Business.
Big data is enriched with a wealth of information, and the depth of knowledge it provides is necessary to help a business reach new levels of success.

Big Data Provides Better Understandings of the Business.
While there are numerous tools available for comprehending information, big data is the most reliable. Huge sets of information can be collected through big data, which is very useful in business. In addition, big data provides a logical and rational basis for a business’s communication. Thus, the work big data does in a business is irreplaceable.

Whatever business you are in, the use of big data is a must. It provides logical and reasonable information, making it essential for a business to grow and prosper.

Using big data to facilitate the success of a business is a growing career that provides ample opportunity for achievement. Acquiring an online Master’s in Business Data Analytics can position you for amazing opportunities in a corporate world that hinges on big data.