How Advanced Computer Vision can help make cities smarter and more sustainable

According to TomTom’s 2021 Traffic Index, global congestion levels are still 10% lower than pre-pandemic levels.

Interestingly, the figures also show that when traffic did start to return to cities, it returned in a different form: a preference for private transport to support social distancing, and traffic density spread more evenly throughout the day as many of us transitioned to working from home. In addition, for the first time, the index also included data on emissions to better measure the impact of congestion.

Although traffic and congestion are expected to increase, the report highlights the need not for more roads, but for the infrastructure to be built more intelligently: “low emission zones, low traffic neighbourhoods and improved cycling infrastructure are all proving to have a positive effect on cities.”

Advanced AI and computer vision have already proven to play an important role in making cities smarter and more sustainable. Analytics Insights describes computer vision as the “eyes” of the city, crucial in supporting smart city management tasks such as smart traffic and bicycle monitoring.

Most vision-based sensors make use of traditional computer vision techniques. However, they tend to have two key limitations:

  • Background subtraction cannot cope with bad weather or changing lighting conditions. Because it is sensitive to camera position, these sensors cannot be mounted on street lighting without robust pylons or gantries, which dramatically increases installation costs.
  • HOG detection cannot identify objects that occupy the same line of sight, so it is only effective in uncrowded fields of view.

Neither technique offers effective transport mode classification.
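The lighting-sensitivity problem in the first bullet is easy to demonstrate. The toy sketch below (a simplified pure-Python illustration, not any vendor's implementation; the frames and threshold are made up) thresholds per-pixel differences against a reference background: a single moving object is flagged correctly, but a uniform brightness shift, such as a passing cloud, flags the entire frame as foreground.

```python
def background_subtract(frame, background, threshold=25):
    """Classic background subtraction: flag pixels whose intensity
    differs from a reference background by more than `threshold`."""
    return [
        [abs(f - b) > threshold for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

# Static background and one frame containing a bright "object" pixel.
background = [[100] * 4 for _ in range(4)]
frame = [row[:] for row in background]
frame[2][2] = 200
mask = background_subtract(frame, background)
print(sum(map(sum, mask)))  # 1: only the object pixel is flagged

# A global lighting change (a cloud, dusk) shifts every pixel at once,
# so the whole frame is wrongly flagged as foreground.
darker = [[p - 40 for p in row] for row in background]
print(sum(map(sum, background_subtract(darker, background))))  # 16: all pixels
```

Production systems mitigate this with adaptive background models, but the underlying fragility to scene-wide changes remains, which is one reason neural detectors have displaced these techniques.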

Using neural networks for its underlying computer vision detection, VivaCity addresses all of these issues, delivering reliable detection in varied lighting conditions and heavily crowded scenes. Its computer vision technology provides insights that contribute to the evolution of smart cities, helping to promote clean air and active travel initiatives and to make cities better connected.

The AI-powered sensors anonymously detect and classify all types of road users in real time, from cars and buses to pedestrians and cyclists (and everything in between). From this data, the sensors calculate classified counts, granular paths, journey times, speeds, origin/destination movements, and near misses, allowing VivaCity to accurately monitor, assess and predict traffic flow. Data derived from the technology can also drive dynamic, positive adaptations in the live environment: for example, variable message signs can give road users live traffic updates, and street lighting can be adjusted to road users’ needs.
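To make the derived metrics concrete, here is a minimal sketch of how classified counts and a speed estimate might be computed from tracked detections. The detection records, track IDs and positions are entirely hypothetical; real sensors work from richer tracking output, but the aggregation idea is the same.

```python
from collections import Counter

# Hypothetical detection records: (timestamp_s, mode, track_id).
# The same track_id across rows means the same tracked road user.
detections = [
    (0.0, "car", 1), (0.5, "car", 1), (1.0, "cyclist", 2),
    (1.5, "car", 3), (2.0, "pedestrian", 4), (2.5, "cyclist", 2),
]

# Classified counts: count each tracked object once, grouped by mode.
unique_tracks = {(mode, tid) for _, mode, tid in detections}
counts = Counter(mode for mode, _ in unique_tracks)
print(counts["car"], counts["cyclist"], counts["pedestrian"])  # 2 1 1

# Speed from two timestamped positions of one track (metres, seconds).
(t0, x0), (t1, x1) = (0.0, 0.0), (2.0, 27.8)
speed_kmh = (x1 - x0) / (t1 - t0) * 3.6
print(round(speed_kmh, 1))  # 50.0
```

Journey times and origin/destination matrices fall out of the same track data by recording where and when each track enters and leaves the scene.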

Taking this one step further, next-generation intelligent traffic control systems, such as VivaCity’s Smart Junctions, use data from these cutting-edge sensors to optimise traffic signals. This paves the way towards adapting signal timings to improve air quality or to prioritise active modes of transport, both areas where traditional signal optimisation systems such as SCOOT (for multiple junctions) or MOVA (for individual junctions) have historically proved inadequate. In addition to supplying rich, real-time data for real-time control, historic sensor data is used to build a precisely calibrated traffic microsimulation model that trains the control algorithm, whereas current systems need frequent, expensive manual recalibration as their performance degrades over time.
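To illustrate the idea of mode-prioritising signal timing (this is a deliberately simplified sketch, not VivaCity's Smart Junctions algorithm; the mode weights, cycle length and counts are invented), one can split a fixed signal cycle between approaches in proportion to demand, with active and shared modes weighted more heavily:

```python
# Hypothetical weights: prioritise buses and active travel over cars.
MODE_WEIGHT = {"car": 1.0, "bus": 2.0, "cyclist": 3.0}

def green_split(approach_counts, cycle_s=90, min_green_s=10):
    """Split a fixed cycle between approaches in proportion to
    weighted demand, guaranteeing a minimum green per approach."""
    demand = {
        a: sum(MODE_WEIGHT.get(m, 1.0) * n for m, n in counts.items())
        for a, counts in approach_counts.items()
    }
    spare = cycle_s - min_green_s * len(demand)
    total = sum(demand.values()) or 1.0
    return {a: min_green_s + spare * d / total for a, d in demand.items()}

splits = green_split({
    "north": {"car": 10},              # weighted demand 10
    "east": {"car": 5, "cyclist": 5},  # weighted demand 20
})
# east gets roughly twice the spare green despite equal raw volume,
# because its cyclists are weighted 3x.
```

Real adaptive control is far more sophisticated (the article notes the control algorithm is trained against a calibrated microsimulation), but the sketch shows how sensor-derived classified counts can feed a policy objective beyond pure vehicle throughput.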

The applications for supporting road safety are also significant, with sensors able to capture interactions and near misses between vulnerable road users. At the recent Road Safety GB conference in Birmingham, VivaCity’s COO Peter Mildon highlighted that the question we should ask is no longer “can we gather good data on how roads are being used?” but rather “can we quantify the benefits of road safety investment?”. While road death data is hard to gather and often too sparse to be statistically significant, near miss data is available in far larger volumes.

This unlocks the possibility of statistical modelling to quantify how safe individual roads are, so that investment can be targeted for maximum impact. In the future, VivaCity will also be able to use sensor data and insights to alert automated vehicles to nearby accidents or warn them of incidents that may affect them. With TechCrunch reporting more than 1,400 self-driving cars already in the US, over 80 companies testing them, and the global autonomous car market expected to reach USD 60 billion by 2030, it is particularly important to consider this type of vehicle now and learn how data can be collected and used to support road safety and innovation.

There is no doubt that, as more innovative technologies find their way into urban planning, we will have access to a growing amount of data that can help make our cities smarter and more sustainable. VivaCity continues to showcase the value that more, and more accurate, data can have – not just in theory but in practice – as it works closely with more than 80 cities and local authorities to help make their road networks safer, cleaner and future-proofed.

Article by Raquel Velasco, Head of Product at VivaCity.


Post source: VivaCity
