There are times when FLOW's default settings just won't cut it and you'd like even more customization for your analytics. Luckily, there are many more aspects you can tweak apart from the Analytic view, canvas, and dashboard. This article will walk you through the available settings that can be defined for each analytics separately. To learn how to change settings for the whole FLOW Block, go here.
To get to an analytics' settings, select Analytics in the left menu, then select the specific analytics you want to set up. New items will appear in the left menu, with Analytics settings among them. Your account must have the Analyst role or higher for the corresponding analytics to see its settings. Now, let's go through each section in detail.
FLOW can periodically save the cumulative numbers stored in operators and widgets. After a sudden shutdown of the FLOW Block, it will load the last saved data for each analytics on its next launch. Because the trajectory cache isn't included in the autosaves, each analytics' cache will be empty on launch. However, because the operators' and widgets' cumulative values have been stored, the output of sinks and widgets will be preserved if they are of the following types: count, statistical value, or distribution. The effect is therefore the same as when widgets cumulate values from trajectories that have been deleted from the cache due to space constraints, except that, in this case, the cache is completely empty and starts to fill up only after the FLOW Block's launch. Newly detected trajectories will then cumulate normally with the widgets' previous values. More info on caching can be found here.
Autosave settings allow you to set the time interval, or turn this feature off.
When choosing a time interval, bear in mind that each autosave writes to, and thus gradually wears down, the storage of the device that runs the FLOW Block. The size of each save file heavily depends on the setup of the analytics, but it's not uncommon to see 200 MB files. This takes a toll especially if you're running FLOW on TrafficEmbedded.
This setting features a single button that deletes the whole trajectory cache and the status of all operators and widgets. The result is as if the analytics was newly created, except that the various filters and the layout of the canvas will be preserved. See the previous section for details about the cache.
This section lets you set the WGS 84 coordinates of your camera. Right now, it's for informational purposes only and isn't reflected in further calculations or communication. However, it can help you identify the camera when using the public API.
Georeferencing allows you to transform pixel data into real-world data. By assigning real-world coordinates to the scene, you georegister the footage, which lets you see the speed of traffic objects in km/h. Georegistration is quite complex and therefore has its own article, which you can find here.
Region of interest—image cropping
Region of interest allows you to define a rectangular area where the video analysis will be applied. All parts outside this region will be ignored. This feature helps you detect and track smaller objects in the specific part of a high-resolution video stream, effectively serving as a digital zoom for the detector. Defining a region of interest does not increase the number of frames that the detector can process per second.
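To make the "digital zoom" idea concrete, here is a minimal sketch of how a region of interest works in principle: the frame is cropped before detection, and any detection box is then mapped back to full-frame coordinates. The function names and the (x, y, width, height) box format are illustrative assumptions, not FLOW's actual API:

```python
def crop_to_roi(frame, roi):
    """Crop a frame (rows of pixels) to a rectangular region of interest.

    roi is (x, y, width, height) in pixels; everything outside the region
    is ignored by the analysis, mirroring FLOW's behavior.
    """
    x, y, w, h = roi
    return [row[x:x + w] for row in frame[y:y + h]]

def map_box_to_frame(box, roi):
    """Translate a detection box from ROI-local back to full-frame coordinates."""
    bx, by, bw, bh = box
    rx, ry, _, _ = roi
    return (bx + rx, by + ry, bw, bh)
```

Note that the detector still processes the same number of frames per second; the ROI only changes what each frame contains.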
To draw a zone, select the icon in the upper left corner of the camera view.
By defining detection zones, you parametrize the object detector for specific categories, i.e. you can set the minimum and maximum size of each side of detectable objects' bounding box as well as the minimum level of confidence for each zone. This feature allows you to avoid false detections in problematic parts of the image, which can, for example, be figurines in shop windows or cars on billboards.
Each zone can be set up individually with its own rules. If two zones overlap, only the rules of the last zone, i.e. closest to the bottom in the zone table, will be applied.
To illustrate the usefulness of setting a minimum and maximum object size, consider the following example: There's a billboard in the camera view that features a car whose size is much larger than that of real cars in the scene. Normally, the detector would consider the car in the billboard as a valid traffic object. You can solve this problem by setting a global upper limit to the object size that is larger than real traffic objects but smaller than the car in the billboard.
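The billboard example boils down to a simple per-zone size rule. A minimal sketch of that logic (the pixel values and function name are illustrative assumptions):

```python
def passes_size_rule(box_w, box_h, min_side=0, max_side=float("inf")):
    """Keep a detection only if both bounding-box sides fall within the zone's limits."""
    return (min_side <= box_w <= max_side) and (min_side <= box_h <= max_side)

# Suppose real cars in the scene are roughly 120 px wide, while the car on the
# billboard is about 400 px wide. A global upper limit of 250 px keeps the real
# cars and rejects the billboard car.
```

With that limit, a 120 x 60 px car passes the rule while the 400 x 180 px billboard car does not.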
Selecting categories tells the detector which types of traffic objects it should let pass and which to ignore. Therefore, another solution to the previous billboard example would be to draw a zone over the billboard and turn off car detection in it. Another example would be reflective surfaces like the side of a glass building—simply draw a zone over them and turn off all detections there.
Finally, let's focus on the confidence settings. Detector confidence is not as straightforward as it might seem—don't presume that passing only detections with high confidence will get you good results. Even the lower-confidence detections are important. To use this setting safely, first, observe the scene and try to determine the average confidence of the traffic objects in it. To see the confidence, go to the analytics camera view and enable its display in the bottom left corner. You'll then see the score on each object's flag.
If you see a false detection with a much lower score than the average, you can filter it out by setting the minimum confidence to a value somewhere between this object's score and the average score. After you set it, carefully observe the scene for cases where a valid traffic object hasn't been detected (has no flag or trajectory trail). If you see such an object, you may have set the minimum confidence too high.
Keep in mind that the detector has a certain intrinsic minimum confidence whose exact value depends on the analytic engine used. Usually, it's somewhere between 5% and 25%. Therefore you'll never see objects with a lower score, even if you set the confidence to 0.
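The interplay between your confidence setting and the detector's intrinsic minimum can be summarized in a couple of lines. This is a sketch of the rule as described above, not FLOW's internal code; the 5% default for the intrinsic minimum is just one value from the stated 5-25% range:

```python
def effective_min_confidence(user_min, engine_min=0.05):
    """The detector never reports scores below its intrinsic minimum
    (engine-dependent, typically 5-25 %), so a user setting of 0
    still behaves like engine_min."""
    return max(user_min, engine_min)

def keep_detection(score, user_min, engine_min=0.05):
    """Pass a detection only if its score meets the effective threshold."""
    return score >= effective_min_confidence(user_min, engine_min)
```

For instance, with the user minimum set to 0, a detection scored at 3% is still dropped because it falls below the intrinsic minimum.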
Static anonymization lets you draw freehand zones in the Analytics view. The whole area of these zones will be either covered in solid color, pixelized, or blurred. You can customize these methods in the table under the camera view. When two anonymized areas overlap each other, the harsher anonymization method takes priority over the other one. Solid color is considered the harshest, followed by pixelization and blur.
Dynamic anonymization changes only parts of the image based on its contents. Select categories of traffic objects that you want to anonymize. You can also choose to anonymize only their specific parts.
When license plate anonymization is turned on and e.g. a car is detected but not its license plate position, anonymization will take place at the presumed area of the license plate, which is at the lower third of the vehicle. With faces, it's the upper third of the pedestrian if their face wasn't detected.
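The fallback regions described above (the lower third of a vehicle for a missed license plate, the upper third of a pedestrian for a missed face) can be sketched as follows. The (x, y, width, height) box format and the function name are illustrative assumptions:

```python
def presumed_anonymization_region(box, kind):
    """Return the area to anonymize when the plate or face itself wasn't detected.

    box is (x, y, w, h) of the whole vehicle or pedestrian, with y growing downward.
    """
    x, y, w, h = box
    third = h // 3
    if kind == "license_plate":   # presumed at the lower third of the vehicle
        return (x, y + h - third, w, third)
    if kind == "face":            # presumed at the upper third of the pedestrian
        return (x, y, w, third)
    raise ValueError(f"unknown kind: {kind}")
```

For a 90 px tall vehicle box at the origin, the presumed plate region is the bottom 30 px strip.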
You can use static anonymization together with the dynamic one.
FLOW anonymizes images at a very low level in the FLOW Node, which means there is no way for anyone to obtain the non-anonymized images, not even for the FLOW Block.
This lets you define the time window the predictive filtering algorithm uses to refine trajectories. A shorter window reduces latency and is ideal for time-critical applications, while a longer one produces smooth, continuous trajectories even when traffic objects pass behind obstacles. A value as low as 100 ms can be set for super-fast response times, needed for example when switching on light signs to prevent wrong-way driving. The default value is 2000 ms; if you don't need such fast responses, leave it there for higher-quality results.
If the connection between FLOW Insights and the FLOW Block is slow, this is where you can lower the frequency with which they exchange data and the picture quality.
These settings mostly concern the communication between FLOW Insights and the FLOW Block. Light and heavy data (payloads) differ in how demanding they are to transmit: heavy data are trajectories and the outputs of the trajectory and heatmap widgets; light data is everything else.
The FPS divider is directly related to the computation tick interval (CTI) mentioned in the next section. If the CTI is set to 100 ms, the detector analyzes 10 frames per second and sends them to FLOW Insights. If you reduce this number with the FPS divider, the video images will be sent to FLOW Insights with reduced frequency, but it doesn't influence how many FPS the detector actually processes. Here are some example values:
Computation tick interval [ms] | FPS processed by the detector | FPS divider | FPS displayed in FLOW Insights
100 | 10 | 1 | 10
100 | 10 | 2 | 5
333 | 3 | 3 | 1
1000 | 1 | 1 | 1
As you can see, the FPS divider serves to reduce bandwidth usage between the FLOW Node (detector) and FLOW Insights, not to reduce the performance load of the machine running the FLOW Node.
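The arithmetic behind the example values is straightforward: one frame is processed per tick, and the divider only thins out what gets sent to FLOW Insights. A minimal sketch of this relationship (function names are illustrative):

```python
def detector_fps(cti_ms):
    """Frames per second the detector processes: one frame per computation tick."""
    return 1000 / cti_ms

def displayed_fps(cti_ms, fps_divider):
    """Frames per second actually sent to FLOW Insights.

    The divider reduces bandwidth only; the detector still processes
    detector_fps(cti_ms) frames per second regardless of the divider.
    """
    return detector_fps(cti_ms) / fps_divider
```

For example, a 100 ms tick gives 10 FPS at the detector, and a divider of 2 sends 5 FPS to FLOW Insights.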
Computation tick interval
This value concerns the FLOW Node rather than the Block. It defines how often the Node outputs images and trajectories, and therefore how often its output is evaluated by the Block; lower values are more computationally demanding. It also directly determines the frame rate of the analytics' camera view, although that can be reduced with the FPS divider as illustrated in the previous section. The recommended value is 333 ms.
That's it for the various advanced FLOW settings. If you need further clarification or help, click the button on the bottom right to chat with us or contact us here. We're happy to help!