Development Update – February ’20

I have failed to make an update for a few months. My bad!

I have been hard at work, however, and hope that a concrete multiplayer alpha test is within reach.

Progress has been made on a range of things:

  • Entirely revamped the AI system to support varying items, abilities, and behaviours
    • Written several basic AI routines
  • Expanded AI interaction system to encompass a wider range of possible interactions (dialogue, shouting, combat, etc)
  • Added a sound management system to maintain and play/switch tracks of music when areas change or combat is engaged
  • Added the first draft of a main menu:
  • Added some basic weather effects and laid the foundation for a weather managing system (snow seen above, rain below)
  • Modified the data structure of items to make statistics, values, and events more identifiable to outside objects
  • Significant progress on the inventory and item tooltip interfaces (WIP and much work left to do)
  • Photographed, manipulated, and created 7 new photogrammetry assets
  • Incorporated a few third party asset libraries to fill gaps
  • Added one new cursor
  • Fixed loads of miscellaneous bugs
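To illustrate the item data-structure change mentioned above, here's a minimal sketch (the names and layout are hypothetical, not the game's actual classes) of how stats, values, and events can be exposed uniformly so outside objects – UI, AI, networking – can query any item without knowing its concrete type:

```python
# Hypothetical sketch only: one uniform surface for stats, values, and
# events, so other systems can inspect an item generically.
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    stats: dict = field(default_factory=dict)           # e.g. {"damage": 12}
    value: int = 0                                      # trade value
    event_handlers: dict = field(default_factory=dict)  # event name -> callback

    def get_stat(self, key, default=0):
        # Outside objects never touch self.stats directly.
        return self.stats.get(key, default)

    def fire(self, event, *args):
        # Dispatch a named event if this item handles it.
        handler = self.event_handlers.get(event)
        return handler(*args) if handler else None
```

The point of the indirection is that a tooltip, an AI routine, and the network serialiser can all call `get_stat` and `fire` without special-casing item types.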

Lots of little stuff that should hopefully amount to some huge progress soon enough.

My next priorities are finally making the inventory and item UI fully usable and accurate, and the same with the AI system. Once these are in place I will begin working on some more basic gameflow, and then a multiplayer test should be approaching ready 🙂

Development Update – November ’19

Work continues. I have spent a lot of time squashing bugs and laying foundations for important future work.

I’ve gotten a substantial amount done on the calendar system. Date and time are now recorded and proceed cyclically, and the day/night cycle is fully in place and reflected across game systems. Here’s a basic look at the cycle, sped up significantly:

The tweet was from October, but now the system is much more developed.
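The core of such a system is simple. As a minimal sketch (not the actual implementation – names and numbers here are made up), a cyclical game clock advances by a scaled delta, wraps at day boundaries, and exposes a 0–1 time-of-day fraction that lighting and other systems can read:

```python
# Minimal sketch of a cyclical in-game clock. Assumptions: 24h game days,
# a configurable real-to-game time scale, and a crude 6am-6pm daylight window.
SECONDS_PER_GAME_DAY = 24 * 60 * 60

class GameClock:
    def __init__(self, time_scale=60.0):
        # time_scale: how many game-seconds pass per real-world second
        self.time_scale = time_scale
        self.day = 1          # day counter, increments when the clock wraps
        self.seconds = 0.0    # seconds elapsed within the current day

    def tick(self, real_dt):
        """Advance the clock by real_dt real-world seconds."""
        self.seconds += real_dt * self.time_scale
        while self.seconds >= SECONDS_PER_GAME_DAY:
            self.seconds -= SECONDS_PER_GAME_DAY
            self.day += 1

    def time_of_day(self):
        """0.0 at midnight, 0.5 at noon -- handy for driving a sun angle."""
        return self.seconds / SECONDS_PER_GAME_DAY

    def is_night(self):
        t = self.time_of_day()
        return t < 0.25 or t >= 0.75
```

Anything that cares about the cycle (sun rotation, ambient audio, NPC schedules) just reads `time_of_day()` each frame.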

I’ve also been working on the inventory: droppable, equippable items are now, finally, fully implemented and networkable. The UI is still incomplete and in need of work, but the functionality is there:

Picking up some junk in a test area

I’ve also started working on the AI systems. This is important – before now, the AI was the very basic “run at and hit” sort. Now, they record relationships with characters, can patrol, have different states, and will respond more “humanly” in combat. There’s some work yet to do, which I hope to outline in a specific upcoming post.
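For a flavour of what "relationships plus states" means in practice, here's an illustrative sketch – the game's actual AI is built quite differently, and all names and thresholds below are invented – of per-character relationship scores driving a small idle/patrol/combat state machine:

```python
# Illustrative only: relationship records + a tiny state machine.
# HOSTILE_THRESHOLD and all class names are hypothetical.
from enum import Enum, auto

class AIState(Enum):
    IDLE = auto()
    PATROL = auto()
    COMBAT = auto()

class AIAgent:
    HOSTILE_THRESHOLD = -50   # dispositions at or below this trigger combat

    def __init__(self, patrol_points=None):
        self.patrol_points = patrol_points or []
        self.state = AIState.PATROL if self.patrol_points else AIState.IDLE
        self.relationships = {}   # character id -> disposition score

    def adjust_relationship(self, character_id, delta):
        self.relationships[character_id] = (
            self.relationships.get(character_id, 0) + delta
        )

    def on_sighted(self, character_id):
        # Only enter combat with characters this agent actually dislikes --
        # this is what makes the response more "human" than "run at and hit".
        if self.relationships.get(character_id, 0) <= self.HOSTILE_THRESHOLD:
            self.state = AIState.COMBAT
```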

Finally, I’ve improved some of my development tools, making it much easier to place actors and dialogue trees.

Occlusion Masking

This is a dev-focused post, following a few requests about how I implemented this effect.

A few videos of the feature are below:

The idea is to occlude foliage such as trees or leaves when the player passes behind them. This means the player never loses sight of their character and gameplay is not disrupted.

I will outline roughly how I achieved this effect. I am making the game with UE4 and as such will use engine-specific terms, but the concept should translate to Unity or anywhere else. The idea is fairly simple: a material function that creates a cylindrical mask starting at the camera position and extending forward to a set distance.

Here is an image of the material function:

Bits of this may not be applicable to your specific scenario, so I will roughly explain which parts do what.

In the top left we have two inputs: one is the world position of the material we are applying this to, and the other is the current camera position. Each of these is transformed into two dimensions and fed into a SphereMask node, which is then inverted to ensure we have a “cutout” rather than the opposite. This alone would achieve a cutout effect, and you could plug it straight into the output and be done with it.

In my game, I only wanted this cutout to extend so far: it should only take effect on things between my player and the camera, not extend forever forwards. As a result, I have added a few extra nodes. You can see I get the distance between the two inputs by subtracting them, taking the absolute value and the length of the resulting vector, and feeding this into an If node. I then have a material parameter collection that holds the distance between the player and the camera.

Again, this may not be necessary depending on your use case. However, the player in my game can zoom the camera in and out, and I want to make sure that doesn’t break the effect. So the If node on the right simply checks whether the distance between our two points exceeds our preset value, and either applies the mask or returns a fully white mask so that no opacity is cut out when the object in question is “behind” the character.
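The whole node graph boils down to a small piece of per-pixel math. Here is a rough CPU-side sketch of it in Python (the real version runs in the material shader; `sphere_mask` below is a stand-in approximation of UE4’s SphereMask node, and the 2D projection simply drops one axis for clarity rather than using the transform the material performs):

```python
# CPU-side sketch of the per-pixel occlusion-mask logic described above.
# All function names and constants are stand-ins, not engine API.
import math

def sphere_mask(a, b, radius, hardness=0.8):
    """Rough equivalent of a sphere mask: 1 at the centre, fading to 0 at
    the radius edge, with hardness controlling the width of the falloff."""
    dist = math.dist(a, b)
    falloff = max(radius * (1.0 - hardness), 1e-6)
    return max(0.0, min(1.0, (radius - dist) / falloff))

def occlusion_opacity(pixel_world_pos, camera_pos, player_camera_dist,
                      radius=150.0):
    """Opacity-mask value for one pixel: 0 = cut out, 1 = drawn normally."""
    # Gate on distance: only cut out geometry that sits between the
    # camera and the player; anything further away stays fully opaque.
    if math.dist(pixel_world_pos, camera_pos) > player_camera_dist:
        return 1.0

    # Project to 2D so the mask becomes a cylinder rather than a sphere.
    pixel_2d = (pixel_world_pos[0], pixel_world_pos[1])
    cam_2d = (camera_pos[0], camera_pos[1])

    # Inverted sphere mask: 1 - mask gives a cutout instead of a blob.
    return 1.0 - sphere_mask(pixel_2d, cam_2d, radius)
```

A pixel near the camera axis and closer than the player gets masked out; pixels outside the cylinder radius, or beyond the player, are left untouched.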

The bottom nodes add a noise effect to make the circle a little more interesting and “melted” looking, as you can see in the second gif. This isn’t required; you can apply it at your leisure, or pile on other effects and multiply your eventual mask by them to see differing results.

The last thing to do is simply open your desired material, plug this material function into its “Opacity Mask” input, and set the material’s blend mode to “Masked” if you haven’t already, for the mask to take effect.

Let me know if this has been helpful or if you require any help, and please show off anything interesting you make using this effect!

Disclaimer: I am not a shader expert, so if there are more efficient ways of achieving this effect, please alert me as soon as possible 🙂


A lot of the models in Olusia are made via the art of photogrammetry – in short, deriving models from a range of photographs.

Photogrammetry is a useful tool for getting really authentic-looking assets and is surprisingly easy to tackle. It’s good for developers who cannot produce quality assets on their own.

Photogrammetry-produced assets are available on various stores for use by devs. The best-known example is probably Quixel with their Megascans library. They’re expensive, but well produced and look really great in-game.

I decided to give the process my own approach, however, and make my own assets.

1. Photography

The first and most obvious step is capturing the object. One of my hobbies is photography, so I fortunately already owned the required tools – a good camera and a set of lenses. While you could technically produce some assets with a basic bridge camera or a phone, I’d not recommend it. Try to stick to a solid SLR or mirrorless camera with as large a sensor as possible, and use a lens in the 35–50mm range to minimise lens distortion (later on, the software needs to link photos together with minimal disruption, and barrel distortion or chromatic aberration will make this all the more difficult).

Stick the camera on full manual, stop down the aperture to make the depth of field as deep as possible, and use as low an ISO as you can to reduce noise. Then just go to town taking pictures of the object. Go around it in a full 360-degree circle, snapping it from every angle. Adjust your height every now and then to make sure every surface is captured. There’s no real prescriptive process here – just try to capture as much as you can.

An example of a tree stump I’ve captured
In these examples you can see the background is actually quite blurry. That’s not ideal, but fortunately it didn’t damage the capture.

Once you’ve taken as many photos as possible, there are a few post processing steps to take.

2. Post-processing

Once they’re all taken, open them in your suite of choice (Lightroom, Photoshop, whatever).

You’re aiming to make the images as clean as possible. Sharpen them, and where possible remove shadowing, dirt, and flares. Discard any that are blurry or out of focus.

If your scene has a lot of strong directional light, this will be difficult. Shadows can be removed through some creative image manipulation, but this will increase your workload a lot. Ideally, shoot your object in lots of diffuse light so there are no obvious shadows; otherwise they can work their way into the final texture and make it difficult to use in-game.

3. Photogrammetry

The actual interesting bit. I use Agisoft PhotoScan – they appear to have since renamed it “Metashape”. This is what I use to make the actual model and textures; there are various other solutions that do similar things, but I found this one the most accessible.

There’s a lengthy process that I won’t detail fully, but eventually you will wind up with your model:

Pretty neat!

4. Cleanup

We now have our model. It looks pretty great! But what about in this view?:

That is not a solid render. Take a look…

It’s a wireframe

Obviously this is far, far too detailed to be used in a game. So there are a few steps to take:

  1. Decimate the mesh, reducing the vert count drastically.
  2. Retopologise in your program of choice.
  3. Remove any parts of the model you don’t want to keep. In this example, that’d be the surrounding square of land near the stump.
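To give a feel for what decimation does, here is a toy illustration using simple grid-based vertex clustering. Real tools (Blender’s Decimate modifier, Metashape’s own mesh tools) use far better algorithms, but the principle is the same: merge nearby vertices so the face count drops drastically.

```python
# Toy vertex-clustering decimator -- illustrative only, not what any
# production tool actually uses.

def decimate(vertices, faces, cell_size):
    """vertices: list of (x, y, z); faces: list of vertex-index triples."""
    cluster_of = {}     # original vertex index -> new vertex index
    new_vertices = []
    cells = {}          # grid cell -> new vertex index
    for i, (x, y, z) in enumerate(vertices):
        # Snap each vertex to a coarse grid cell; one vertex per cell survives.
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if cell not in cells:
            cells[cell] = len(new_vertices)
            new_vertices.append((x, y, z))
        cluster_of[i] = cells[cell]
    # Keep only faces whose three corners still land in distinct clusters;
    # faces that collapse to a line or point are dropped.
    new_faces = []
    for a, b, c in faces:
        fa, fb, fc = cluster_of[a], cluster_of[b], cluster_of[c]
        if len({fa, fb, fc}) == 3:
            new_faces.append((fa, fb, fc))
    return new_vertices, new_faces
```

A larger `cell_size` merges more aggressively – the same trade-off you make when dialling a decimate ratio from 943,000 faces down to 10,000.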

We should end up with something significantly more efficient. And if we’ve done it correctly, it shouldn’t be noticeably different from the original:

One of these has 10,000 faces. And the other… 943,000. Can you tell them apart?

5. Texture and Finalising

Finally the object must be textured. Agisoft has extracted albedo data for us, so we have a texture we can use. But we need to make it PBR-friendly. For that, I use Substance B2M.

At this stage you can take extra efforts to tweak the appearance to your liking: removing marks or dirtying up the texture, creating tessellation maps, etc.

Once we’re done, we can go ahead and import it, and it’s in game with little difficulty:

With enough effort a fairly convincing collection of assets gets built up.