Applications of AR - Tasks

Last version: March 8, 2022

For the lab measurement, the AR.js framework will be used for displaying augmented reality content on your smartphone. AR.js is a lightweight library for Augmented Reality on the Web, with support for marker-based, location-based and image-based AR. AR.js is free and open-source software, created by @jeromeetienne and maintained by @nicolocarpignoli. The first version was published in 2017, and advanced features arrived with v3 in 2020. If you are interested, you can also contribute to the project at https://github.com/AR-js-org/AR.js/

Lab report

The laboratory exercise will be done in pairs. While you are working on the tasks of the lab, please write a full measurement report that includes your names, along with descriptions and screenshots of the solution of every subtask. You can use any software (e.g. MS Word / LaTeX / Google Docs / Markdown / etc.) for the documentation - but the final file should be a single PDF. Please note that although you work in pairs, sharing source code across pairs is not allowed; at the end of the semester, we will run a plagiarism analysis to check this. Of course, you are allowed to use any external sources (e.g. GitHub, Wikipedia, etc.), as long as you reference them properly.

At the end of the lab, each of you should upload the report as a <NEPTUN1>-<NEPTUN2>-OV05.ZIP file through MS Teams. Please compress your PDF file into this ZIP archive before uploading it.

Setting up

HTTPS website

For hosting the website, we will use an HTTPS web server, because accessing the smartphone camera from a browser requires an HTTPS connection. In room IB211, each Windows computer has access to a shared drive (V:\), which can be attached with a script on the desktop. You can add your HTML files to the V:\www\ folder; these are automatically served by a web server, whose link you can find in V:\www-link.txt. To check this, start a browser and go to https://smartlab.tmit.bme.hu:4443/<ID>/. Any files you put into the V:\www\ folder will appear on the above server.

Suggested project layout

For the lab, you can create the following project layout:

index.html    # from where you can navigate to the solutions
task1.html    # solution of the first task (you can have any name for the html file)
task2.html    # solution of the second task
task3.html    # solution of the third task
...           # other HTML pages
assets/       # images, markers, object files
js/           # external JavaScript codes
styles/       # stylesheets
...           # anything else

In order to easily navigate from the main page to the solutions of the tasks, we suggest the following structure for the index.html file:

<html>
<head>
    <title>SmartCity-AR solution of 'username'</title>
    <!-- defines the default zoom for mobile devices -->
    <meta name="viewport" content="width=device-width, initial-scale=1" />
</head>
<body>
    <ul>
        <!-- navigation to the solutions of tasks -->
        <li><a href="task1.html">Task 1 / Basic</a></li>
        <li><a href="task2.html">Task 2 / Text</a></li>
        <li><a href="task3.html">Task 3 / Complex object</a></li>

        ...
        <li><a href="taskX.html">Task X</a></li>
    </ul>
</body>
</html>

If you prefer, you can use any other project layout or a more advanced index.html. Remember to always comment your code!

HTML / JavaScript text editor

For writing the HTML and JavaScript code, you can use NotePad++ / Sublime Text / etc. For more advanced development and testing, we suggest codepen.io.

Marker

For the marker-based AR experience, we will use several predefined and custom-made markers. These work quite well when displayed on the computer screen (depending on the lighting conditions). Later, after the lab, you can print them on white paper at home to test printed markers.

[Images: Hiro marker and Kanji marker]

Smartphone

A smartphone (Android / iPhone) or a laptop with a camera is required for testing the marker-based AR experience. For the location-based AR experience, a smartphone with GPS is required. If you don't have a smartphone with a camera, several webcams are available in IB211.

Basic tasks

Now you are ready to start developing the marker-based AR experience and test it on your smartphone.

Displaying an AR object

Create task1.html and add the following code:

<!DOCTYPE html>
<html>
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
    <!-- we import arjs version without NFT but with marker 
    + location based support -->
    <script src="https://smartlab.tmit.bme.hu:4443/AR.js/aframe/build/aframe-ar.js"></script>
    <body style="margin : 0px; overflow: hidden;">
        <a-scene embedded arjs>
            <!-- handle hiro marker -->
            <a-marker preset='hiro'>
                <a-box position='0 0.5 0' material='opacity: 0.5; side: double;color:red;'>
                    <!-- add a torus in a red box --> 
                    <a-torus-knot radius='0.26' radius-tubular='0.05'
                    animation="property: rotation; to:360 0 0; dur: 5000; easing: linear; loop: true">
                    </a-torus-knot>
                </a-box>
            </a-marker>

            <!-- add a camera to the scene that renders the objects for us -->
            <a-entity camera></a-entity>
        </a-scene>
    </body>
</html>

Try to understand the above HTML code! First, two JavaScript files are imported, which are necessary for the AR.js framework. Next, in the body, you can find a-scene, a-marker, a-box, a-torus-knot, and a-entity camera elements nested within each other. Can you find out which one is responsible for which feature?

Save the above files to V:\www\ and open the https://smartlab.tmit.bme.hu:4443/<ID>/ website (link for you in V:\www-link.txt) on your smartphone. To test on your smartphone, you will have to allow camera access - this is why a website with HTTPS was necessary. Navigate to task1, find the Hiro marker and point the camera at it. Move the object around and check the animation (if there is any). Create a screenshot for the documentation. For the later tasks, don't forget to always document what you tried and what the result was.

Text and simple objects

Now you are ready to create simple objects. Replace the red torus with something else (box / sphere / etc.) using the aframe.io Introduction, and test it.

Add some text using a-text and test the scaling property (scale="120 120 120").
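
If you get stuck, a minimal sketch of the marker content is shown below. It assumes the rest of task1.html stays unchanged; the sphere radius, text value, positions and colors are arbitrary example values, and it is worth experimenting with the scale property as suggested above.

<!-- a sketch of possible marker content: a sphere instead of the torus knot, plus a text label -->
<a-marker preset='hiro'>
    <!-- semi-transparent sphere floating above the marker -->
    <a-sphere position='0 0.75 0' radius='0.5' material='opacity: 0.6; color: yellow;'></a-sphere>
    <!-- text label; adjust scale until it is readable through the camera -->
    <a-text value='Hello AR!' position='0 1.5 0' align='center' color='black' scale='2 2 2'></a-text>
</a-marker>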

Complex objects

AR.js can show complex objects defined in the glTF format. Let's try this; add the following code to an HTML file:

<!DOCTYPE html>
<html>
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
    <!-- we import arjs version without NFT but with marker
    + location based support -->
    <script src="https://smartlab.tmit.bme.hu:4443/AR.js/aframe/build/aframe-ar.js"></script>
    <body style="margin : 0px; overflow: hidden;">
        <a-scene embedded arjs>
            <!-- handle hiro marker -->
            <a-marker preset="hiro">
                <!-- raw.githack.com serves raw files directly from GitHub,
                with proper Content-Type headers -->
                <!-- arjs-cors-proxy.herokuapp.com is necessary for CORS (same origin policy of web browsers) -->
                <a-entity
                position="0 -1 0"
                scale="0.5 0.5 0.5"
                gltf-model="https://raw.githack.com/KhronosGroup/glTF-Sample-Models/master/2.0/SciFiHelmet/glTF/SciFiHelmet.gltf"
                >
                </a-entity>
            </a-marker>

            <!-- add a camera to the scene that renders the objects for us -->
            <a-entity camera></a-entity>
        </a-scene>
    </body>
</html>

Try some of the glTF samples from https://github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0. Please note that raw.githack.com serves raw files directly from GitHub, with proper Content-Type headers. If you choose another glTF model from the above GitHub link, you might need to use raw.githack.com and update the gltf-model link accordingly.
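
The raw.githack.com URL simply mirrors the GitHub path (user / repository / branch / path to the file). For example, if you picked the Duck sample model, the GitHub page URL and the corresponding raw.githack.com URL would look like this:

GitHub page of the model (cannot be used directly as a gltf-model source):
https://github.com/KhronosGroup/glTF-Sample-Models/blob/master/2.0/Duck/glTF/Duck.gltf

The same file served by raw.githack.com with proper Content-Type headers:
https://raw.githack.com/KhronosGroup/glTF-Sample-Models/master/2.0/Duck/glTF/Duck.gltf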

Embedded audio

iOS users: please note that audio autoplay is disabled by default in Safari.

Audio can be played with the combination of the audio, a-assets and a-entity elements. Let's try this. Download an mp3 file that you like, put it into the 'assets' folder under your folder (V:\www\...), and extend your AR object with this audio:

<!DOCTYPE html>
<html>
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
    <!-- we import arjs version without NFT but with marker
    + location based support -->
    <script src="https://smartlab.tmit.bme.hu:4443/AR.js/aframe/build/aframe-ar.js"></script>
    <body style="margin : 0px; overflow: hidden;">
        <a-scene embedded arjs>
            <a-assets>
                <audio id="helmet_sound" src="assets/helmet.mp3" response-type="arraybuffer" autoplay="true" loop></audio>
                <a-asset-item id="helmet" src="https://raw.githack.com/KhronosGroup/glTF-Sample-Models/master/2.0/SciFiHelmet/glTF/SciFiHelmet.gltf"></a-asset-item>
            </a-assets>

            <!-- handle hiro marker -->
            <a-marker preset="hiro">
                <!-- raw.githack.com serves raw files directly from GitHub,
                with proper Content-Type headers -->
                <!-- first entity: reference to the above glTF model -->
                <a-entity
                position="0 -1 0"
                scale="0.5 0.5 0.5"
                gltf-model="#helmet"
                >
                </a-entity>

                <!-- second entity: reference to the above audio -->
                <a-entity sound="src: #helmet_sound; volume: 1; loop: true">
                </a-entity>
            </a-marker>

            <!-- add a camera to the scene that renders the objects for us -->
            <a-entity camera></a-entity>
        </a-scene>
    </body>
</html>
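
Because of the autoplay restriction mentioned above (Safari on iOS, and recent Chrome versions behave similarly), you may find that the sound only starts after a user gesture. One possible workaround is sketched below; it assumes you add an id (here: helmet_audio) to the second a-entity carrying the sound component, and it starts the sound from JavaScript on the first tap or click:

<script>
  // start the audio on the first user interaction (tap / click), because mobile
  // browsers usually block audio that starts without a user gesture
  window.addEventListener('click', function startAudioOnce() {
    var audioEntity = document.querySelector('#helmet_audio'); // assumed id on the sound entity
    if (audioEntity && audioEntity.components.sound) {
      audioEntity.components.sound.playSound();
    }
    window.removeEventListener('click', startAudioOnce);
  });
</script>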

Embedded video

Video can also be played with the combination of the video, a-assets and a-video elements. Try to solve this on your own: download an mp4 file and extend your AR object with this video. Depending on the aspect ratio of the mp4, you might need to change the width/height or scale properties of a-video. The aframe.io Introduction documentation will help with this.
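
If you need a hint, one possible structure (a sketch only, following the audio example above) is shown below; assets/sample.mp4 is an assumed file name, and the width and height values should be adjusted to the aspect ratio of your video:

<a-assets>
    <!-- assumed file name - use your own mp4 here; muted is needed for autoplay on most mobile browsers -->
    <video id="sample_video" src="assets/sample.mp4" autoplay loop muted playsinline></video>
</a-assets>

<a-marker preset="hiro">
    <!-- a-video renders the video onto a plane; the rotation lays it flat on the marker,
         and width/height (or scale) should match the video aspect ratio -->
    <a-video src="#sample_video" width="1.6" height="0.9" rotation="-90 0 0" position="0 0.1 0"></a-video>
</a-marker>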

You can observe that this way we are not able to control when the audio and/or video starts. In a later, more complex example, we will have a solution for that.

Multiple markers

Until now, we have been using the predefined 'hiro' marker. We can have multiple markers in the same AR environment, and each object will be shown at its corresponding marker. Find the three predefined markers of AR.js and try the following HTML code:

<!DOCTYPE html>
<html>
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
    <!-- we import arjs version without NFT but with marker
    + location based support -->
    <script src="https://smartlab.tmit.bme.hu:4443/AR.js/aframe/build/aframe-ar.js"></script>
    <body style="margin : 0px; overflow: hidden;">
        <a-scene embedded arjs='sourceType: webcam; detectionMode: mono_and_matrix; matrixCodeType: 3x3;'>
            <!-- handle unknown marker -->
            <a-marker type='unknown'>
                <a-box depth="1" height="1" width="1" position='0 0.5 0' material='opacity: 0.5; side:double; color:blue;'></a-box>
            </a-marker>

            <!-- handle hiro marker -->
            <a-marker preset='hiro'>
                <a-box position='0 0.5 0' material='opacity: 0.5; side: double;color:red;'>
                    <a-torus-knot radius='0.26' radius-tubular='0.05'
                    animation="property: rotation; to:360 0 0; dur: 5000; easing: linear; loop: true">
                    </a-torus-knot>
                </a-box>
            </a-marker>

            <!-- handle kanji marker -->
            <a-marker preset='kanji'>
                <a-box position='0 0.5 0' material='opacity: 0.5; side: double;color:green;'>
                    <a-torus-knot radius='0.26' radius-tubular='0.05'
                    animation="property: rotation; to:360 0 0; dur: 5000; easing: linear; loop: true">
                    </a-torus-knot>
                </a-box>
            </a-marker>

            <!-- add a simple camera -->
            <a-entity camera></a-entity>
        </a-scene>
    </body>
</html>

Custom marker

We can create our own marker within a black frame using the AR.js tools. Choose a high-contrast image that you like and upload it to https://ar-js-org.github.io/AR.js/three.js/examples/marker-training/examples/generator.html. Download both the image (this has to be printed out) and the marker pattern file (this will be used on the website). If you want, you can experiment with the pattern ratio and image size parameters (the higher the pattern ratio, the less ink is necessary for printing - but it will also be more difficult for the smartphone camera to recognize the marker pattern). The pattern file has to be put on the HTTPS website, and you can use the new pattern the following way:

<!DOCTYPE html>
<html>
    <script src="https://aframe.io/releases/1.0.0/aframe.min.js"></script>
    <!-- we import arjs version without NFT but with marker + location based support -->
    <script src="https://smartlab.tmit.bme.hu:4443/AR.js/aframe/build/aframe-ar.js"></script>
    <body style="margin : 0px; overflow: hidden;">
        <a-scene embedded arjs>
        <a-marker preset='custom' type='pattern' url='pattern-apple.patt'>
            <a-entity
            position="0 -1 0"
            scale="0.5 0.5 0.5"
            gltf-model="https://raw.githack.com/KhronosGroup/glTF-Sample-Models/master/2.0/RiggedFigure/glTF/RiggedFigure.gltf"
            ></a-entity>
        </a-marker>
        <a-entity camera></a-entity>
        </a-scene>
    </body>
</html>

Location-based AR

Now we will test location-based (i.e., markerless) Augmented Reality by showing a text relative to your smartphone. To test on your smartphone, you will have to allow the website to access your GPS position. Use the following code:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
    <script src="https://unpkg.com/aframe-look-at-component@0.8.0/dist/aframe-look-at-component.min.js"></script>
    <script src="https://smartlab.tmit.bme.hu:4443/AR.js/aframe/build/aframe-ar.js"></script>
    
  </head>

  <body style="margin: 0; overflow: hidden;">
    <a-scene
      vr-mode-ui="enabled: false"
      embedded
      arjs="sourceType: webcam; debugUIEnabled: false;"
    >
     <a-entity gps-entity-place="latitude: XXXX; longitude: XXXX;">
      <a-text
        value="Smart City lab - Augmented Reality."
        look-at="[gps-camera]"
        scale="10 10 10"
      ></a-text>
      </a-entity>
      <a-camera gps-camera rotation-reader> </a-camera>
    </a-scene>
  </body>
</html>

The gps-camera component enables location-based AR; it has to be added to the camera entity. It handles both the position and the rotation of the camera, and it is used to determine where the user is pointing their device.

For another type of location-based AR, testing an absolute GPS position, you can use the gps-entity-place component. Find your GPS location using https://www.gps-coordinates.net; you will need the latitude and longitude parameters. Try to develop this feature on your own using the AR.js official example about location-based AR.

You can observe that this way we can add objects only to a single location. In a later, more complex example, we will have an advanced solution for displaying multiple objects at multiple locations.

For reference, BME building I is at approximately latitude: 47.472665; longitude: 19.059845.

What do you think, which of the two above AR types (i.e., marker-based, location-based) is suitable for what kind of environment? In the next section, we will have more advanced tasks to see this.

Advanced tasks

Marker-based AR: interactive

In the video example above, we were not able to start/stop the playback - it started automatically when the website was opened. In order to have playback control, we will use JavaScript.

Create a new HTML file with the following content:

<!-- from https://gist.github.com/RobTranquillo/8132191d48596dae68cef8e9cf48f812 -->

<!DOCTYPE html>
<html>
<head>
  <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
  <!-- we import arjs version without NFT but with marker + location based support -->
  <script src="https://smartlab.tmit.bme.hu:4443/AR.js/aframe/build/aframe-ar.js"></script>
  <!-- this part defines the playback / control function and events -->
  <script>
    var markerFound = 0;
    var fullscreen = 0;   // 0 = normal, 1 = full screen; used by the full screen toggle below
    AFRAME.registerComponent('button', {
        init: function ()
        {
          var elem = document.documentElement;
          var marker = document.querySelector("#marker");
          var fullbutton = document.querySelector("#fullscreen");
          var Video_0 = document.querySelector("#Video_Asset_0");
          var button = document.querySelector("#mutebutton");
          button.hidden = true;
          Video_0.pause();

          marker.addEventListener("markerFound", function (evt) {
            markerFound = 1;
            button.hidden = false;
            // Video_0.play();  // if you want the video
            // to start immediately on marker detection, uncomment this
          });

          marker.addEventListener("markerLost", function (evt) {
            markerFound = 0;
            Video_0.pause();
            button.hidden = true;
          });

          // button for video play/pause
          button.addEventListener("click", function(evt){
            console.log("button clicked")
            if(Video_0.muted==true){
              button.innerHTML="Pause";
              Video_0.muted=false;
              Video_0.play();
            }else{
              button.innerHTML="Play";
              Video_0.muted=true;
              Video_0.pause();
            }
          });

          // button for full screen
          fullbutton.addEventListener("click",
            function (evt){
              if (fullscreen == 0) {
                if (elem.requestFullscreen) {
                    elem.requestFullscreen();
                } else if (elem.mozRequestFullScreen) {
                    /* Firefox */
                    elem.mozRequestFullScreen();
                } else if (elem.webkitRequestFullscreen) {
                    /* Chrome, Safari and Opera */
                    elem.webkitRequestFullscreen();
                } else if (elem.msRequestFullscreen) {
                    /* IE/Edge */
                    elem.msRequestFullscreen();
                }
                fullbutton.setAttribute("src", "assets/exit_fullscreen.png");
                fullscreen = 1;
              } else {
                if (document.exitFullscreen) {
                    document.exitFullscreen();
                } else if (document.webkitExitFullscreen) {
                    document.webkitExitFullscreen();
                } else if (document.mozCancelFullScreen) {
                    document.mozCancelFullScreen();
                } else if (document.msExitFullscreen) {
                    document.msExitFullscreen();
                }
                fullbutton.setAttribute("src", "assets/fullscreen.png");
                fullscreen = 0;
              }
            });
        },
        tick: function (totalTime, deltaTime)
        {
            var dTime = deltaTime / 1000;
            if (markerFound == 1) {
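                // per-frame logic while the marker is visible could go here (left empty in this example)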
            }
        }
      });
  </script>
</head>
<body style='margin : 0px; overflow: hidden; font-family: Monospace;'>
  <div style='position: absolute; bottom: 10px; right: 30px; width:100%; text-align: center; z-index: 1;'>
    <button id="mutebutton" style='position: absolute; bottom: 10px' hidden>Play</button>
  </div>
  <div style='position: absolute; bottom: 5px; left: 30px; width:100%; text-align: right; z-index: 1;'>
    <input type="image" id="fullscreen" src="assets/fullscreen.png" style='position: absolute; bottom: 0px; right: 35px;'></input>
  </div>
  <a-scene embedded arjs="debugUIEnabled: false; sourceType: webcam" vr-mode-ui="enabled: false">
    <a-entity id="mouseCursor" cursor="rayOrigin: mouse" raycaster="objects: .intersectable; useWorldCoordinates: true;"></a-entity>
    <a-assets>
      <video id="Video_Asset_0" autoplay="false" loop crossorigin="anonymous" src="assets/your_sample.mp4" webkit-playsinline playsinline controls muted></video>
    </a-assets>
    <a-marker id="marker" preset="hiro" emitevents="true" button>
      <a-video src="#Video_Asset_0" id="Video_0" class="intersectable" width="1" height="1" position="0 0 0" rotation="0 0 0" color="#FFFFFF" transparent=False></a-video>
    </a-marker>
    <a-entity camera></a-entity>
  </a-scene>
</body>
</html>

You can find the necessary images here: exit_fullscreen.png, fullscreen.png. Also, don't forget to download an mp4 video file and change the video src in the above code.

Your task

Test the AR experience on your smartphone with the Hiro marker, and try to understand the above code! What are the addEventListener functions used for?

Extend the above example with multiple markers (e.g. hiro, kanji, matrix, or custom-made ones) and multiple video and/or audio objects! Test it thoroughly with your lab partner!

This task was based on https://gist.github.com/RobTranquillo/8132191d48596dae68cef8e9cf48f812

Location-based AR: outdoor

In the simple location example above, we only had a single location. Now we will create a solution with multiple locations and multiple objects. As a first step, create an HTML file with the following content:

<!-- from https://medium.com/swlh/build-your-location-based-augmented-reality-web-app-a841956eed2c -->
<!DOCTYPE html>
<html>
<head>
    <meta charset='utf-8'>
    <meta http-equiv='X-UA-Compatible' content='IE=edge'>
    <title>Multi-location AR</title>
    <script src='https://aframe.io/releases/1.0.4/aframe.min.js'></script>
    <!-- we import arjs version without NFT but with marker 
        + location based support -->
    <script src="https://smartlab.tmit.bme.hu:4443/AR.js/aframe/build/aframe-ar.js"></script>
    <!-- for animating 3D models -->
    <script src="https://raw.githack.com/donmccurdy/aframe-extras/master/dist/aframe-extras.loaders.min.js"></script>
    <!-- three.js context -->
    <script>
        THREEx.ArToolkitContext.baseURL = 'https://raw.githack.com/jeromeetienne/ar.js/master/three.js/'
    </script>
</head>

<body style='margin: 0; overflow: hidden;'>
    <a-scene embedded arjs>
        <!-- The scale attribute is used because that model is pretty big -->
        <!-- and the custom rotation will make the model ‘look’ towards the user. -->
        <!-- The animation-mixer attribute tells the model to use its built-in animation. -->
        <a-entity gltf-model="assets/magnemite/scene.gltf" rotation="0 180 0" scale="0.15 0.15 0.15" gps-entity-place="longitude: 12.489820; latitude: 41.892590;" animation-mixer/>

        <a-camera gps-camera rotation-reader></a-camera>
    </a-scene>
</body>
</html>

You'll have to download the Magnemite Pokémon glTF model and put it into the assets folder. After extracting the files, change the latitude and longitude values in the gps-entity-place attribute to coordinates near your location, then test it on your smartphone: you should see an animated Magnemite above your head.

Next, we will clean up our HTML file and add the places through JavaScript. We will end up with the same behavior as above, using the following files:

taskX.html:

<!DOCTYPE html>
<html>
<head>
    <meta charset='utf-8'>
    <meta http-equiv='X-UA-Compatible' content='IE=edge'>
    <title>Multi-location AR</title>
    <script src='https://aframe.io/releases/1.0.4/aframe.min.js'></script>
    <!-- we import arjs version without NFT but with marker 
        + location based support -->
    <script src="https://smartlab.tmit.bme.hu:4443/AR.js/aframe/build/aframe-ar.js"></script>
    <!-- for animating 3D models -->
    <script src="https://raw.githack.com/donmccurdy/aframe-extras/master/dist/aframe-extras.loaders.min.js"></script>
    <!-- three.js context -->
    <script>
        THREEx.ArToolkitContext.baseURL = 'https://raw.githack.com/jeromeetienne/ar.js/master/three.js/'
    </script>

    <!-- this will be our new JavaScript file -->
    <script src="js/multi-location.js"></script>
    <link rel="stylesheet" type="text/css" href="css/style.css"/>
</head>

<body style='margin: 0; overflow: hidden;'>
    <div class="centered instructions"></div>
    <a-scene
       vr-mode-ui="enabled: false" embedded
        arjs='sourceType: webcam; sourceWidth:1280; sourceHeight:960; displayWidth: 1280; displayHeight: 960; debugUIEnabled: false;'>

        <!-- now this part is empty and will be filled from JavaScript -->

        <a-camera gps-camera rotation-reader></a-camera>
    </a-scene>
    <div class="centered">
        <button data-action="change"></button>
    </div>
</body>
</html>

js/multi-location.js:

// based on  https://medium.com/swlh/build-your-location-based-augmented-reality-web-app-a841956eed2c

window.onload = () => {
    const button = document.querySelector('button[data-action="change"]');
    button.innerText = '﹖';

    let places = staticLoadPlaces();
    renderPlaces(places);
};

function staticLoadPlaces() {
    return [
        {
            name: 'Pokémon',
            location: {
                // lat: <your-latitude>,
                // lng: <your-longitude>,
            },
        },
    ];
}

var models = [
    {
        url: './assets/magnemite/scene.gltf',
        scale: '0.5 0.5 0.5',
        info: 'Magnemite, Lv. 5, HP 10/10',
        rotation: '0 180 0',
    },
    {
        url: './assets/articuno/scene.gltf',
        scale: '0.2 0.2 0.2',
        rotation: '0 180 0',
        info: 'Articuno, Lv. 80, HP 100/100',
    },
    {
        url: './assets/dragonite/scene.gltf',
        scale: '0.08 0.08 0.08',
        rotation: '0 180 0',
        info: 'Dragonite, Lv. 99, HP 150/150',
    },
];

var modelIndex = 0;
var setModel = function (model, entity) {
    if (model.scale) {
        entity.setAttribute('scale', model.scale);
    }

    if (model.rotation) {
        entity.setAttribute('rotation', model.rotation);
    }

    if (model.position) {
        entity.setAttribute('position', model.position);
    }

    entity.setAttribute('gltf-model', model.url);

    const div = document.querySelector('.instructions');
    div.innerText = model.info;
};

function renderPlaces(places) {
    let scene = document.querySelector('a-scene');

    places.forEach((place) => {
        let latitude = place.location.lat;
        let longitude = place.location.lng;

        let model = document.createElement('a-entity');
        model.setAttribute('gps-entity-place', `latitude: ${latitude}; longitude: ${longitude};`);

        setModel(models[modelIndex], model);

        model.setAttribute('animation-mixer', '');

        document.querySelector('button[data-action="change"]').addEventListener('click', function () {
            var entity = document.querySelector('[gps-entity-place]');
            modelIndex++;
            var newIndex = modelIndex % models.length;
            setModel(models[newIndex], entity);
        });

        scene.appendChild(model);
    });
}

Fill the css/style.css stylesheet with the following CSS rules:

.centered {
    height: 20%;
    justify-content: center;
    position: fixed;
    bottom: 0%;
    display: flex;
    flex-direction: row;
    width: 100%;
    margin: 0px auto;
    left: 0;
    right: 0;
}

button {
    display: flex;
    align-items: center;
    justify-content: center;
    border: 2px solid white;
    background-color: transparent;
    width: 2em;
    height: 2em;
    border-radius: 100%;
    font-size: 2em;
    background-color: rgba(0, 0, 0, 0.4);
    color: white;
    outline: none;
}

After this, download the Articuno and Dragonite Pokémon models from here and put them into the assets folder. Change the latitude and longitude parameters of the Pokémon to any nearby location. After that, test the multi-location, multi-object AR experience and enjoy!
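
Note that staticLoadPlaces() above returns only a single place, so only one entity is created. One possible way (a sketch with placeholder coordinates, not the only solution) to make the scene truly multi-location and multi-object is to return one entry per Pokémon and pick the model by the place index inside renderPlaces():

// sketch: several places with placeholder coordinates - replace them with points near you
function staticLoadPlaces() {
    return [
        { name: 'Magnemite', location: { lat: 47.47265, lng: 19.05985 } },
        { name: 'Articuno',  location: { lat: 47.47300, lng: 19.06050 } },
        { name: 'Dragonite', location: { lat: 47.47220, lng: 19.05900 } },
    ];
}

// inside renderPlaces(), the second forEach argument (the index) can be used to
// assign a different model to each place instead of the shared modelIndex, e.g.:
//     places.forEach((place, index) => { ... setModel(models[index % models.length], model); ... });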

This task was based on https://medium.com/swlh/build-your-location-based-augmented-reality-web-app-a841956eed2c

Bonus tasks

If you have reached this point, then you have fulfilled all the required tasks of the 'Smart City - Augmented Reality' lab. However, if you have time and interest, you can solve any of the bonus tasks below.

Image-based AR: custom image recognition

This is a bonus task, which is not obligatory. If you are interested, you can create an image-based AR experience with your own custom image. If you solve it, don't forget to include the source code, descriptions and screenshots in the lab report. The following websites will help with this:

AR in a complex environment

This is a bonus task, which is not obligatory. In the above tasks, you learned how to use marker-based, location-based and image-based AR. Of course, you can combine these in a real-life scenario. If you are interested, you can create an AR solution for a complex environment that you like (e.g. museum, sport, games), combining the various AR.js features. If you solve it, don't forget to include the source code, descriptions and screenshots in the lab report.

Some interesting ideas:

Location-based AR dynamically

This is a bonus task, which is not obligatory. For a multi-location AR experience that works at any place (without hard-coded GPS coordinates), the idea is to first retrieve the user's position, and then dynamically load places of interest near them. In order to do that, we need an external API (e.g. the Foursquare Places API). If you solve it, don't forget to include the source code, descriptions and screenshots in the lab report.

For this task, follow the steps at https://medium.com/swlh/build-your-location-based-augmented-reality-web-app-a841956eed2c#b70c
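
The core idea can be sketched as follows; dynamicLoadPlaces() is a hypothetical helper that would call the places API of your choice (the actual endpoint, parameters and API key depend on the provider, e.g. Foursquare), returning data in the same {name, location: {lat, lng}} format that renderPlaces() already expects:

// sketch: retrieve the user's position first, then load nearby places dynamically
window.onload = () => {
    navigator.geolocation.getCurrentPosition(
        (position) => {
            // hypothetical helper: query a places API with the current coordinates
            dynamicLoadPlaces(position.coords.latitude, position.coords.longitude)
                .then((places) => renderPlaces(places));
        },
        (err) => console.error('Error retrieving the GPS position', err),
        { enableHighAccuracy: true }
    );
};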

Summary

In the Augmented Reality Applications lab of the Smart City laboratory, you have learned about the latest web-based AR features and applied them on a smartphone, using the AR.js framework.

Feedback

As this is an experimental lab that started in the Spring semester of 2020, any feedback is welcome and will be useful for future students. Please feel free to share your positive or negative opinion with Tamás Gábor Csapó.

Lab report

Please don't forget that at the end of the lab, you should upload the project source code and the report PDF through MS Teams.

Sources

Further examples