🧠 The city that understands

View of a bustling urban intersection with colorful frames around pedestrians and vehicles, illustrating the contextual analysis of public space.

What if cities no longer just saw, but finally understood what they are looking at?

This is the whole promise of computer vision applied to the smart city: moving from a monitored space to a readable territory.
Not to "control", but to live better together.

Because public space is under pressure. Cities are becoming crossroads of multiple interests: residents, tourists, workers, logistical flows, soft mobility, events, construction sites, shared mobility… and each has its own logic of use, often conflicting.

The result: constant tensions that are difficult to anticipate with traditional observation methods. It is no longer a question of human resources, but of the ability to understand what is happening, in real time and with precision.

This is where CORE changes everything.

CORE does not capture an image. It captures a usage.

One of the major contributions of CORE is its ability to restore context to urban space. It is not a video surveillance system. It is an interpreter. It does not see a car, it understands if it is blocking a fire access. It does not see a pedestrian, it understands if they are waiting, crossing, or if a queue is forming.

This semantic difference changes everything.
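To make that difference concrete, here is a minimal, purely hypothetical sketch of contextual interpretation. None of these class or function names come from CORE's actual API; they only illustrate the shift from "seeing a car" to "understanding that a car is blocking a fire access":

```python
# Hypothetical illustration only: not CORE's real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str          # e.g. "car", "pedestrian"
    box: tuple          # (x1, y1, x2, y2) in image coordinates
    stationary_s: int   # seconds the object has not moved

def overlaps(box, zone):
    """Axis-aligned overlap test between a detection box and a zone."""
    ax1, ay1, ax2, ay2 = box
    bx1, by1, bx2, by2 = zone
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

FIRE_ACCESS_ZONE = (100, 200, 300, 400)  # illustrative coordinates

def interpret(det: Detection) -> Optional[str]:
    """A car is just a car; a car parked on a fire access is an incident."""
    if (det.label == "car"
            and det.stationary_s > 120
            and overlaps(det.box, FIRE_ACCESS_ZONE)):
        return "vehicle blocking fire access"
    return None

print(interpret(Detection("car", (150, 250, 260, 380), stationary_s=300)))
```

The raw detection ("a car at these coordinates") only becomes an event when combined with context: a mapped zone and a duration. That combination, not the image itself, is what triggers an alert.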

The cameras are already there. But without visual intelligence, they are only useful afterwards, in the context of investigations or replays of incidents. CORE, on the other hand, allows for proactive reading: it identifies weak signals, triggers smart alerts, measures the effects of a municipal action.

Example: after the pedestrianization of a street, have we really observed a decrease in motorized traffic? Have movements just shifted elsewhere? What new tensions has this generated?
With CORE, the city can rely on dynamic behavioral data, and not on one-time surveys.
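As a sketch of what such dynamic behavioral data could look like, one might compare average hourly vehicle counts before and after the change, on the pedestrianized street and a nearby parallel one. The street names and counts below are invented for illustration, not real measurements:

```python
# Invented example data: average vehicles per hour from continuous
# camera-based counting, before and after pedestrianization.
from statistics import mean

before = {"main_street": [420, 510, 480], "parallel_street": [200, 230, 210]}
after  = {"main_street": [60, 75, 70],    "parallel_street": [340, 390, 360]}

def pct_change(before_counts, after_counts):
    """Percentage change in mean hourly traffic."""
    b, a = mean(before_counts), mean(after_counts)
    return 100 * (a - b) / b

for street in before:
    print(f"{street}: {pct_change(before[street], after[street]):+.0f}%")
```

In this invented scenario, traffic on the pedestrianized street drops sharply while partially shifting to the parallel street, which is exactly the displacement question the text raises, and which one-time surveys tend to miss.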

At the service of residents: limit nuisances and invisible frictions

Many incivilities in the city are not spectacular. They are small daily infractions that residents experience as a silent degradation of their living environment:

  • Repeated parking on sidewalks.
  • Delivery vehicles obstructing bike lanes.
  • Queues from fast-food outlets or administrations spilling into public space.
  • Play areas occupied by scooters.
  • Illegitimate occupation of space at night.

These micro-phenomena are difficult to observe continuously, as they often last only a few minutes, several times a day. Yet they wear down residents' daily lives.

CORE makes it possible to objectify these realities: to alert at the right moment, and to produce precise statistics, area by area, without overwhelming the supervision teams.

And tomorrow? Use cases still underutilized

💡 Analysis of the usage of urban furniture: benches, bike racks, bus shelters… Are they being used? By whom? At what times? This can redirect investments and adapt furniture to real uses rather than presumed ones.

💡 Detection of the city's “dead zones”: poorly frequented, anxiety-inducing, poorly lit places. By cross-referencing pedestrian flows with times, CORE can help reactivate these spaces, maintain them better, or rearrange lighting.

💡 Responsiveness to ephemeral nuisances: urban rodeos, nighttime noise, party overflows. CORE can act as a behavioral radar, coupled with sound or light alerts operated remotely.

An intelligence at the service of the right balance

The smart city will not be the one that monitors everything. It will be the one that understands well enough to act with nuance.

CORE does not replace human decision-making: it improves its quality. It gives each community the means to base its actions on facts, not feelings. And to do so without ethical compromise: no facial recognition, no commercial exploitation of images, no unnecessary storage.

Simply put: an augmented reading of public space, for more targeted interventions, fairer developments, and a more livable city for everyone.

Smart city, yes. But above all, an understanding city.