Exploring HTMLPortalElement with React

HTMLPortalElement is a draft of a new HTML element, very similar to an iframe, with the big difference that it allows you to navigate to the content of the “iframe” with a page transition.


To learn more about it, I recommend reading these references:

In this article, I will explain how to use this future feature to do a “Hello world” demo with React.

Getting started

First of all, to use this draft feature you’ll need Chrome Canary. Once you have it, enable the Portals flag:


Next, we’ll test portals. Remember that portals need to be at the top level of our app (unlike what happens with iframes).


Hello world with HTMLPortalElement and React:

import React from 'react';
import { render } from 'react-dom';

function PortalExample() {
  if (!window.HTMLPortalElement) {
    return 'HTMLPortalElement is not supported in your browser.';
  }

  return (
    <portal
      src="https://example.com" // placeholder: any URL you want to preview
    />
  );
}

render(<PortalExample />, document.getElementById('root'));


We get a similar result to using an iframe:


Nevertheless, we want a beautiful transition when navigating to the content of this page. How can we achieve it?

Navigating to a portal

As I said, there is a significant difference between portals and iframes: with portals we can navigate to the content. To do that, the element has an activate function to go to the page.

   // navigate to content
  onClick={({ target }) => target.activate()} 

Now we can navigate to the content, although without any transition… yet:


Adding a page transition

Instead of calling the activate function in the onClick event, we are going to use the onClick event to add an extra CSS class with the transition. Then, we’ll use the onTransitionEnd event to detect when the CSS transition has finished. After that, we’ll call the activate function.

Therefore, our CSS transition is going to scale the portal until it covers the whole page (width and height 100%).

React code:

import React, { useState } from 'react';
import { render } from 'react-dom';

import './style.css';

function PortalExample() {
  const [transition, setTransition] = useState(false);

  if (!window.HTMLPortalElement) {
    return 'HTMLPortalElement is not supported in your browser.';
  }

  return (
    <portal
      src="https://example.com" // placeholder: any URL you want to preview
      className={`portal ${transition ? 'portal-reveal' : ''}`}
      onClick={() => setTransition(true)}
      onTransitionEnd={(e) => e.propertyName === 'transform' && e.target.activate()}
    />
  );
}

render(<PortalExample />, document.getElementById('root'));



body {
  background-color: #212121;
}

.portal {
  position: fixed;
  width: 100%;
  height: 100%;
  cursor: pointer;
  transition: transform 0.4s;
  box-shadow: 0 0 20px 10px #999;
  transform: scale(0.4);
}

.portal.portal-reveal {
  transform: scale(1);
}

Finally, we get the page transition in our portal:


Code: https://github.com/aralroca/HTMLPortalElement-react-example

Benefits of portals

Portals are a new proposal to load pages like an iframe does, allowing navigation to the content with a beautiful transition and improving the user experience.

They can be useful for previews of videos / audio, so you can navigate to the content page without stopping watching / listening to the media at any moment.


Of course, here we are using a different origin (YouTube). Nevertheless, if we use the same origin, we can communicate with the portal at any moment and do things like displaying a nicer preview or loading the rest of the content after the portal is activated.


Portals are still a proposal, and maybe they are something we won’t see in the future. However, if they finally exist, they are going to be useful to preview content, especially media.


Don’t control everything! React forms

Forms are a crucial part of almost all applications; at least one is usually necessary: the “Sign in” page. In this article, we are going to explain the benefits of uncontrolled forms in React and how to keep them as simple as possible so they can be reused in every form. We are going to use the classic “Sign in” page as an example.


Difference between controlled and uncontrolled

To understand what “uncontrolled” means, first, we’ll see the meaning of “controlled”.

A common mistake in React is to try to control every single field of a form using a state and an onChange method. This way is usually chosen to allow the use of this state inside the onSubmit method, although it’s neither the only nor the best way to get the fields.

controlled fields

<form onSubmit={onSignIn}>
  <input
    name="username"
    onChange={e => this.setState({ username: e.target.value })}
  />
  <input
    name="password"
    type="password"
    onChange={e => this.setState({ password: e.target.value })}
  />
  <button type="submit">
    Sign In
  </button>
</form>
Then we can use the state directly in the onSignIn method.

onSignIn = () => {
  const { username, password } = this.state;
  // ...
};

These fields are controlled because every time the state changes, the text rendered inside the input changes. Moreover, every time the user types, the onChange event is fired to save the new state. If we type a username of 15 characters and a password of 8, 24 React renders happen under the hood (one for each character, plus one extra for the first render).
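As a sanity check, that render count can be expressed as a tiny, hypothetical helper: one render per keystroke across all fields (each onChange triggers a setState), plus the initial render.

```javascript
// Hypothetical helper: renders of a fully controlled form =
// one render per keystroke + the initial render.
function countControlledRenders(fieldLengths) {
  const keystrokes = fieldLengths.reduce((sum, length) => sum + length, 0);
  return keystrokes + 1;
}

console.log(countControlledRenders([15, 8])); // 24 renders
```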

This controlled behavior is useful if the state is used before submitting the form, for example, to validate the fields dynamically. Otherwise, if we only want to use the fields after submitting the form, it’s more useful to do it uncontrolled.

Uncontrolled fields are the natural way to write without a React state:

uncontrolled input

<form onSubmit={onSignIn}>
  <input name="username" />
  <input name="password" type="password" />
  <button type="submit">
    Sign In
  </button>
</form>

In this case, the state is not necessary. We need these fields in the onSubmit event, but it’s not necessary to store them on every change in the React state because we already have them in the event. This means we only do one simple render of this component: the first render.

On the onSignIn function, we can find the username and password fields inside event.target.

onSignIn = (event) => {
  const [username, password] = Array.prototype
    .slice.call(event.target)
    .filter(field => field.name)
    .map(field => field.value);

  // ...
};

However, although we simplified it a little, it’s still quite ugly to repeat this Array.prototype.slice.call in every single form submit. Let’s see how to improve it.
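To see the pattern in isolation, here is a sketch of the extraction logic as a plain function. It uses plain objects standing in for real form elements, so it can run outside the browser:

```javascript
// Sketch: extract { name: value } pairs from an array-like of "fields".
// Fields without a name (e.g. the submit button) are ignored,
// and string values are trimmed.
function extractFields(elements) {
  return Array.prototype.slice.call(elements)
    .filter(field => field.name)
    .reduce((form, { name, value }) => ({
      ...form,
      [name]: typeof value === 'string' ? value.trim() : value,
    }), {});
}

const fields = extractFields([
  { name: 'username', value: '  aral  ' },
  { name: 'password', value: '1234' },
  { name: '', value: 'Sign In' }, // unnamed -> ignored
]);
console.log(fields); // { username: 'aral', password: '1234' }
```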

Improving the uncontrolled way

Our goal here is to simplify the logic of every “submit” event, avoiding the need to continuously dig the fields out of event.target. We want something more enjoyable like:

onSignIn = ({ username, password }) => {
  // ...
};

In this case, we get the fields directly as an argument: an object with all the fields of the form, where each key is the field’s name attribute.

To achieve our goal, we can replace the form tag with our own Form component:

<Form onSubmit={onSignIn}>
  {/* same fields as before */}
</Form>
Our reusable personal Form Component could be:

import React, { memo, useCallback } from 'react';

function Form({ children, onSubmit, ...restOfProps }) {
  const onSubmitAllFields = useCallback(event => {
    event.preventDefault();
    event.stopPropagation();

    const fields = Array.prototype.slice.call(event.target)
      .filter(field => field.name)
      .reduce((form, { name, value }) => ({
        ...form,
        [name]: typeof value === 'string'
          ? value.trim()
          : value,
      }), {});

    onSubmit(fields);
  }, [onSubmit]);

  return (
    <form {...restOfProps} onSubmit={onSubmitAllFields}>
      {children}
    </form>
  );
}

export default memo(Form);

Thus, we are extracting the repetitive code that we always write in our forms: preventDefault, stopPropagation, field extraction and trimming of string fields.

Now, we can use this new approach by only changing one character, from “form” to “Form”.

Note: I’m using the new hooks API (proposal), even though it can also be written as a class component.


Both approaches, controlled and uncontrolled forms, are great for different reasons. We have to know the difference to choose the best one for each occasion. My advice would be: normally use uncontrolled forms, unless you really need the state to do dynamic checks or to change the text of each input dynamically.

If you want to try the Form component, I published it on npm:

npm install react-form-uncontrolled --save


Repo: https://github.com/SylcatOfficial/react-form-uncontrolled

Grouping AJAX requests in a pool

In this article I would like to explain what I did to improve the speed of the communication between client and server. It’s important to understand that this is not a global solution for all AJAX requests. Instead, it can only be applied to some particular types of request, as we will see soon.

Note that in most projects other solutions could be more efficient.

What’s the initial problem?

I’m currently working on a complex React application where users can mount their own interactive widgets using React components. Some of these interactive widgets need to do some AJAX requests to load / insert data (or whatever) in componentDidMount, componentWillUnmount or more (as we will see soon).

As a first approach, we can make every interactive widget (React container) call POST /whatever in its componentDidMount method.


Image 1. In this example it’s POST /evaluate

In this implementation, each container is responsible for doing the corresponding POST /evaluate. Or, using Redux, each container is responsible for dispatching an action that, in turn, will do the request. After resolving each promise, each container decides what to do with the evaluation.

In this example, at least 5 requests are going to be emitted at the same tick of the clock. And, after resolving these requests, React is going to change the DOM at least 5 times, in different renders.

This implementation can be fast enough in some cases. However, remember that users can mount their own page with a large number of interactive widgets. This means that 20, 30 or more requests can be emitted at the same tick.

Unfortunately, there is a limit on how many requests we can emit at the same time, so the rest are added to a queue that increases the total time. Moreover, in this /evaluate we are evaluating the same things through different widgets (for example, the item “a” is evaluated 3 times in Image 1).

Our mission in this article is to improve the request time by grouping all these requests into one and removing duplicates.
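The grouping and deduplication can be sketched independently of React. The function below (names are hypothetical) merges the widgets’ individual payloads into one batch and remembers which batch indexes each widget needs:

```javascript
// Sketch: merge the widgets' payloads into one deduplicated batch,
// remembering for each widget which indexes of the batch belong to it.
function groupRequests(payloads) {
  const batch = [];
  const indexesPerWidget = payloads.map(items =>
    items.map((item) => {
      let index = batch.indexOf(item);
      if (index === -1) {
        index = batch.push(item) - 1; // new item -> append to the batch
      }
      return index;
    }));
  return { batch, indexesPerWidget };
}

const { batch, indexesPerWidget } = groupRequests([
  ['a', 'b'],
  ['a', 'c'],
  ['b', 'c', 'd'],
]);
console.log(batch); // ['a', 'b', 'c', 'd'] — duplicates removed
console.log(indexesPerWidget); // [[0, 1], [0, 2], [1, 2, 3]]
```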


Type of request to group

Before starting the implementation, the first important step is to know which requests we are targeting. We can’t group every type of request, at least not without modifying the behaviour on the back-end.

How should the request be?

  • It should accept an array as a parameter.
  • The response is an array in the same order.
  • If any item can’t be resolved, instead of using a 500 Internal Server Error, the status should be 200 OK. The error should be in the response array index.
  • Each item should take approximately the same time to be resolved. If the evaluation of “a” took 10 times longer than the evaluation of “f”, this wouldn’t be a good approach, because we prefer to load each interactive widget independently.
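As a sketch of that contract, a batched endpoint handler could look like this on the server side (evaluateItem is a hypothetical stand-in for the real per-item evaluation):

```javascript
// Sketch of the batched contract: the body carries an array, the response
// is an array in the same order, and errors stay at the item's index
// instead of failing the whole request with a 500.
function evaluateBatchHandler(items) {
  return items.map((item) => {
    try {
      return { ok: true, value: evaluateItem(item) };
    } catch (err) {
      return { ok: false, error: err.message }; // error kept at the item's index
    }
  });
}

// Hypothetical stand-in for the real per-item evaluation
function evaluateItem(item) {
  if (item == null) throw new Error('cannot evaluate');
  return `evaluated:${item}`;
}

console.log(evaluateBatchHandler(['a', null, 'b']));
```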

Grouping AJAX requests in a container

After analysing the initial problem, a common solution we can apply to improve the loading speed of the page is using a parent container to group all the requests while removing the duplicated items.

This parent container does the AJAX call in its componentDidMount method (or uses a Redux action to do it). Then, it distributes the results to its children (or, using Redux, each child container gets its results from the store).


This way, instead of emitting 20 or 30 requests at the same time, we group them all into one. Also, after resolving the promise of the request, React is going to render the new DOM for all the interactive widgets at the same time.

More problems on the way…

In the above example we only took care of the componentDidMount method. However, in reality, each interactive widget can have an “interval” property in its configuration. These widgets are able to send different requests on each “interval” tick.


In this case it’s harder to group, in the parent container, all the requests emitted in each tick of the clock. However, it’s possible. To fix the problem, we can create a common interval in the parent container with the greatest common divisor of all the children’s intervals. This global interval checks on every tick which requests need to be emitted, in order to group them. Another alternative is to create different intervals on the parent container without duplicate times.
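The greatest-common-divisor idea can be sketched in a few lines (interval values are hypothetical, in milliseconds):

```javascript
// Sketch: the parent ticks at the gcd of the children's intervals,
// and on each tick figures out which widgets are due for a request.
function gcd(a, b) {
  return b === 0 ? a : gcd(b, a % b);
}

function commonInterval(intervals) {
  return intervals.reduce(gcd);
}

// A widget is due when the elapsed time is a multiple of its interval
function widgetsToUpdate(elapsed, intervals) {
  return intervals.filter(interval => elapsed % interval === 0);
}

const intervals = [2000, 4000, 6000];
console.log(commonInterval(intervals)); // 2000 -> the parent's tick
console.log(widgetsToUpdate(4000, intervals)); // [2000, 4000]
```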

By the way, let me tell you something else: some interactive widgets can be connected, and the “interval” property can change depending on the output of another widget.


More trouble… It’s still not impossible to group requests per tick using a parent container, but maybe we need to re-think a painless and more flexible way to implement this.


Grouping AJAX requests in a pool

A different way, instead of implementing the logic of all these cases in the parent container, is to use an AJAX pool to directly group all the requests emitted in the same tick into only one request.


The pool adds to a queue everything to evaluate that is emitted in the same tick. On the next tick, it does the request, sending the whole queue as the parameter.

To use this pool, the interactive widgets must use the corresponding service instead of sending the request directly.

Instead of:

axios.post('/evaluate', { data: [a, b] })
  .then(res => {
    // ...
  })

We do:

EvaluationService.evaluate([a, b])
  .then(res => {
    // ...
  })


These promises always return the filtered result to each widget.

Each service will use an AJAX pool or not, depending on the type of request. In this case, in the EvaluationService, we are going to use the pool.

This EvaluationService is responsible for initialising the pool, adding the items to the queue, removing duplicates and saving the indexes. Then, when the request is resolved, it filters the required items from the total response.

import _ from 'lodash';
import AjaxPool from './services/ajax-pool';

const pool = new AjaxPool();

export default class EvaluateService {
  static evaluate(data) {
    const id = pool.initPool();

    const indexes = data
      .map((item) => {
        let index = pool.findInQueue(id,
          existingItem => _.isEqual(existingItem, item),
        );

        if (index === -1) {
          index = pool.addToQueue(id, item);
        }

        return index;
      });

    return pool
      .request(id, '/evaluate', queue => ({ data: queue }), 'post')
      .then((allEvaluations) => indexes.map(index => allEvaluations[index]));
  }
}

Every time we call the evaluate method of this service, it first calls initPool to get the corresponding “id” of the pool. This “id” is unique for each AJAX request. If there is more than one call in the same tick of the clock, the same “id” is used for the whole group.

The purpose of the AJAX pool is to resolve all the promises of the group with the same response, but using just one AJAX request.

import _ from 'lodash';
import uuid from 'uuid';
import axios from 'axios';

const DEFAULT_DELAY = 0; // Wait for the next tick

export default class AjaxPool {
  constructor(milliseconds = DEFAULT_DELAY) {
    this.DELAY_MILLISECONDS = milliseconds;
    this.queues = {};
    this.needsInitialization = true;
    this.requests = {};
    this.numRequest = {};
  }

  /**
   * Initialise the queue
   */
  initPool() {
    if (this.needsInitialization) {
      this.requestID = uuid();
      this.queues[this.requestID] = [];
      this.needsInitialization = false;
      this.numRequest[this.requestID] = 0;
    }

    return this.requestID;
  }

  findInQueue(id, method) {
    if (typeof method !== 'function') {
      return -1;
    }

    return _.findIndex(this.queues[id], method);
  }

  cleanRequest(id) {
    this.numRequest[id] -= 1;

    if (this.numRequest[id] === 0) {
      delete this.requests[id];
      delete this.queues[id];
      delete this.numRequest[id];
    }
  }

  /**
   * Add to queue
   * @param {any} queueElement
   * @return {number} index of the element on the queue
   */
  addToQueue(id, queueElement) {
    return this.queues[id].push(queueElement) - 1;
  }

  request(id, url, getData, method = 'get') {
    this.numRequest[id] += 1;
    return new Promise((res, rej) => {
      _.delay(() => {
        this.needsInitialization = true;

        // For each call in the same "tick" only one AJAX request is done,
        // but all of them resolve their promise with the same result
        if (!this.requests[id]) {
          const data = typeof getData === 'function' ? getData(this.queues[id]) || {} : {};
          this.requests[id] = axios[method](url, data);
        }

        this.requests[id]
          .then((result) => {
            if (result.error) {
              rej(result.error);
            } else {
              res(result.data);
            }
            this.cleanRequest(id);
          })
          .catch((err) => {
            rej(err);
            this.cleanRequest(id);
          });
      }, this.DELAY_MILLISECONDS);
    });
  }
}
In this case we won’t use a big delay; it’s just going to be 0 milliseconds, to wait for the next tick. However, it’s possible to pass some milliseconds as a parameter when constructing the pool. For example, if we use 100ms, it will group more requests.

const pool = new AjaxPool(100);



📕 Code: https://stackblitz.com/edit/ajax-pool


Grouping requests in a pool:

  • Improves the total loading time on the client, avoiding the queuing of extra requests.
  • The server receives fewer requests, reducing costs.
  • It’s reusable and every component of the project can use it without extra logic.

On the other hand:

  • It’s not always the best solution; it only works for a specific type of request.


First steps with TensorFlow.js

I would like to write more articles explaining a little bit of the machine learning and deep learning basics. I’m a beginner in this area, but I’d like to explain these concepts soon to create some interesting AI models.

Nevertheless, we don’t need deep knowledge of machine learning to use some existing models. We can use libraries like Keras, TensorFlow or TensorFlow.js. Here, we are going to see how to create basic AI models and use more sophisticated models with TensorFlow.js.

Although deep knowledge is not required, we are going to explain a few concepts.

What is a Model?

Or maybe a better question would be: “What is reality?”. Yes, that’s quite complex to answer… We need to simplify it in order to understand it!

A way to represent a part of this simplified “reality” is using a model. So, there are infinite kinds of models: world maps, diagrams, etc.


It’s easier to understand the models that we can use without the help of machines. For example, if we want to build a model to represent the price of Barcelona houses according to the number of rooms:

First, we can collect some data:

Number of rooms | Price
3               | 131.000€
3               | 125.000€
4               | 235.000€
4               | 265.000€
5               | 535.000€

Then, we display this data on a 2D graph, one dimension for each param (price, rooms):


And… voilà! We can now draw a line and start predicting some prices of houses with 6, 7 or more rooms.

This model is called linear regression and it’s one of the simplest models to start with in the machine learning world.
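For the curious, the line itself can be computed without any library using least squares on the table above. This is just a sketch of the underlying math; you won’t need it for the TensorFlow.js part:

```javascript
// Simple linear regression (least squares) on the rooms/price table above.
function linearFit(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i += 1) {
    num += (xs[i] - meanX) * (ys[i] - meanY); // covariance term
    den += (xs[i] - meanX) ** 2; // variance term
  }
  const slope = num / den;
  const intercept = meanY - slope * meanX;
  return { slope, intercept, predict: x => slope * x + intercept };
}

const rooms = [3, 3, 4, 4, 5];
const prices = [131000, 125000, 235000, 265000, 535000];
const line = linearFit(rooms, prices);
console.log(Math.round(line.predict(6))); // estimated price for 6 rooms, ≈ 680286
```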

Of course this model is not good enough:

  1. There are only 5 examples so it’s not reliable enough.
  2. There are only 2 params (price, rooms), yet there are more factors that could have an effect on the price: district, the age of the house, etc.

We can deal with the first problem by adding more examples, e.g. 1.000.000 examples instead of 5.

For the second problem, we can add more dimensions… right? With a 2D chart we can understand the data and draw a line, and in 3D we could also use a plane:

But how do we deal with more than 3D? 4D, or 1000000D?

Our mind can’t visualize this on a chart, but… good news! We can use maths and calculate hyperplanes in more than 3 dimensions, and neural networks are a great tool for this!

By the way, I have good news for you: using TensorFlow.js you don’t need to be a math expert.

What is a neural network?

Before understanding what a neural network is, we need to know what a neuron is.

A neuron, in the real world, looks similar to this:

The most important parts of a neuron are:

  • Dendrites: the inputs of the data.
  • Axon: the output.
  • Synapse (not in the image): the structure that permits a neuron to communicate with another neuron. It is responsible for passing electrical signals between the nerve ending of the axon and a dendrite of a nearby neuron. These synapses are the key to learning because they increase or decrease the electrical activity depending on usage.

A neuron in machine learning (simplified):


  • Inputs: the input parameters.
  • Weights: like synapses, their activity increases or decreases to adjust the neuron in order to establish a better linear regression.
  • Linear function: each neuron is like a linear regression function, so for a linear regression model we only need one neuron!
  • Activation function: we can apply an activation function to change the output from a scalar into a non-linear function. The most common are sigmoid, ReLU and tanh.
  • Output: the computed output after applying the activation function.
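Putting those pieces together, a single simplified neuron is just a weighted sum of the inputs plus a bias, passed through an activation function. A minimal sketch with made-up numbers:

```javascript
// One simplified neuron: weighted sum of inputs + bias, then activation.
const sigmoid = x => 1 / (1 + Math.exp(-x));

function neuron(inputs, weights, bias, activation = sigmoid) {
  const sum = inputs.reduce((acc, input, i) => acc + input * weights[i], bias);
  return activation(sum);
}

// 1 * 0.5 + 2 * (-0.25) + 0 = 0, and sigmoid(0) = 0.5
console.log(neuron([1, 2], [0.5, -0.25], 0)); // 0.5
```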

The activation function is very useful; it’s the power of a neural network. Without any activation function it’s not possible to have a smart neural network. The reason is that, no matter how many neurons your network has, its output will always be a linear regression. We need some mechanism to deform these individual linear regressions into non-linear ones, so we can solve non-linear problems.

Thanks to activation functions we can transform these linear functions to non-linear functions:
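The common activation functions mentioned above fit in one line of JavaScript each:

```javascript
// The three most common activation functions as plain functions
const sigmoid = x => 1 / (1 + Math.exp(-x)); // squashes to (0, 1)
const relu = x => Math.max(0, x);            // clips negatives to 0
const tanh = x => Math.tanh(x);              // squashes to (-1, 1)

console.log(sigmoid(0)); // 0.5
console.log(relu(-2));   // 0
console.log(tanh(0));    // 0
```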

Training a model

Drawing a line on our chart, as in the 2D linear regression example, is enough to start predicting new data. Nevertheless, the idea of “deep learning” is that our neural network learns to draw this line.

For a simple line, we can use a very simple neural network with only one neuron, but for other models maybe we want to do more complex things, like classifying two groups of data. In this case, the “training” is going to learn how to draw something like this:


Remember that this is not complex because it’s in 2D.

Every model is a world of its own, but the concept of “training” is very similar in all of them. The first step is drawing a random line and improving it with an iterative algorithm, fixing the error in each iteration. This optimization algorithm is called Gradient Descent (there are more sophisticated algorithms, like SGD or ADAM, built on the same concept).

To understand Gradient Descent, we need to know that every algorithm (linear regressor, logistic regressor, etc.) has a different cost function to measure this error.

Cost functions always converge at some point, and they can be convex or non-convex functions. The lowest convergence point is found at 0% error. Our aim is to reach this point.


When we work with the Gradient Descent algorithm, we start at some random point of this cost function, but we don’t know where it is! Imagine that you are in the mountains, completely blind, and you need to walk down, step by step, to the lowest point. If the terrain is irregular (like non-convex functions), the descent is going to be more complex.


I’m not going to explain the Gradient Descent algorithm deeply. Just remember that it’s the optimization algorithm used to train AI models by minimizing the error of the predictions. This algorithm requires time and GPU power for the matrix multiplications. The convergence point is usually hard to reach on the first execution, so we need to tune some hyperparameters, like the learning rate (the size of each step down the hill), or add some regularization.
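To make the idea concrete, here is a toy 1D Gradient Descent on the convex cost f(x) = (x − 3)², whose minimum is at x = 3; the learning rate is the “size of the step down the hill”:

```javascript
// Toy gradient descent in 1D: repeatedly step against the gradient.
function gradientDescent(gradient, start, learningRate, steps) {
  let x = start;
  for (let i = 0; i < steps; i += 1) {
    x -= learningRate * gradient(x);
  }
  return x;
}

const gradient = x => 2 * (x - 3); // derivative of (x - 3)^2
const x = gradientDescent(gradient, 0, 0.1, 100);
console.log(x); // very close to 3, the minimum of the cost
```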

After the iterations of Gradient Descent, we get close to the convergence point, where the error is close to 0%. At this moment, the model is created and we are ready to start predicting!


Training a model with TensorFlow.js

TensorFlow.js provides us with an easy way to create neural networks.

First, we are going to create a LinearModel class with a trainModel method.

For this kind of model we are going to use a sequential model. A sequential model is any model where the outputs of one layer are the inputs to the next layer, i.e. when the model topology is a simple ‘stack’ of layers, with no branching or skipping.

Inside the trainModel method we are going to define the layers (we are going to use only one, because that’s enough for a linear regression problem):

import * as tf from '@tensorflow/tfjs';

/**
 * Linear model class
 */
export default class LinearModel {
  /**
   * Train model
   */
  async trainModel(xs, ys) {
    const layers = tf.layers.dense({
      units: 1, // Dimensionality of the output space
      inputShape: [1], // Only one param
    });
    const lossAndOptimizer = {
      loss: 'meanSquaredError',
      optimizer: 'sgd', // Stochastic gradient descent
    };

    this.linearModel = tf.sequential();
    this.linearModel.add(layers); // Add the layer
    this.linearModel.compile(lossAndOptimizer);

    // Start the model training!
    await this.linearModel.fit(
      tf.tensor1d(xs),
      tf.tensor1d(ys),
    );
  }
}


To use this class:

const model = new LinearModel();

// xs and ys -> array of numbers (x-axis and y-axis)
await model.trainModel(xs, ys);

After this training, we are ready to start predicting!

Predicting with TensorFlow.js

Predicting is normally the easiest part! Training a model requires defining some hyperparameters… but even so, predicting is very simple. We are going to add the next method to the LinearModel class:

import * as tf from '@tensorflow/tfjs';

export default class LinearModel {
  // ...

  predict(value) {
    return Array.from(
      this.linearModel
        .predict(tf.tensor2d([value], [1, 1]))
        .dataSync()
    );
  }
}

Now, we can use the predict method in our code:

const prediction = model.predict(500); // Predict for the number 500
console.log(prediction) // => 420.423


You can play with the code here:

Use pre-trained models with TensorFlow.js

Learning to create models is the most difficult part: normalizing the data for training, deciding all the hyperparams correctly, etc. If you are a beginner in this area (like me) and you want to play with some models, you can use pre-trained models.

There are a lot of pre-trained models that you can use with TensorFlow.js. Moreover, you can import external models, created with TensorFlow or Keras.

For example, you can use the posenet model (real-time human pose estimation) for fun projects:


📕 Code: https://github.com/aralroca/posenet-d3

It’s very easy to use:

import * as posenet from '@tensorflow-models/posenet';

// Constants
const imageScaleFactor = 0.5;
const outputStride = 16;
const flipHorizontal = true;
const weight = 0.5;

// Load the model
const net = await posenet.load(weight);

// Do predictions
// imageElement: an <img>, <canvas> or <video> element with the person
const poses = await net
  .estimateSinglePose(imageElement, imageScaleFactor, flipHorizontal, outputStride);

The poses variable is a JSON like this:

{
  "score": 0.32371445304906,
  "keypoints": [
    {
      "position": {
        "y": 76.291801452637,
        "x": 253.36747741699
      },
      "part": "nose",
      "score": 0.99539834260941
    },
    {
      "position": {
        "y": 71.10383605957,
        "x": 253.54365539551
      },
      "part": "leftEye",
      "score": 0.98781454563141
    }
    // ...And the same for: rightEye, leftEar, rightEar, leftShoulder, rightShoulder,
    // leftElbow, rightElbow, leftWrist, rightWrist, leftHip, rightHip,
    // leftKnee, rightKnee, leftAnkle, rightAnkle
  ]
}

Imagine how many fun projects you can develop with this model alone!


📕 Code: https://github.com/aralroca/fishFollow-posenet-tfjs

Importing models from Keras

We can import external models into TensorFlow.js. In this example, we are going to use a Keras model for number recognition (h5 file format). For this, we need the tfjs_converter.

pip install tensorflowjs

Then, use the converter:

tensorflowjs_converter --input_format keras keras/cnn.h5 src/assets

Finally, you are ready to import the model into your JS code!

// Load model
const model = await tf.loadModel('./assets/model.json');

// Prepare image
let img = tf.fromPixels(imageData, 1);
img = img.reshape([1, 28, 28, 1]);
img = tf.cast(img, 'float32');

// Predict
const output = model.predict(img);

A few lines of code are enough to use the number recognition model from Keras in our JS code. Of course, we can now add more logic to this code to do something more useful, like a canvas where you draw a number and then capture that image to predict the number.

📕 Code: https://github.com/aralroca/MNIST_React_TensorFlowJS

Why in the browser?

Training models in the browser can be very inefficient depending on the device. Even though TensorFlow.js takes advantage of WebGL to train the model behind the scenes, it is 1.5-2x slower than TensorFlow in Python.

However, before TensorFlow.js, it was impossible to use machine learning models directly in the browser without an API interaction. Now we can train and use models offline in our applications. Also, predictions are much faster because they don’t require a request to the server.

Another benefit is the lower server cost, because all these calculations now run client-side.


  • A model is a way to represent a simplified part of the reality and we can use it to predict things.
  • A good way to create models is using neural networks.
  • A good and easy tool to create neural networks is TensorFlow.js.