This project lets you DBC-decode CAN data from your CANedge CAN/LIN data logger - and push the data into an InfluxDB database. From here, the data can be visualized in your own customized, open source Grafana dashboards.
For the full step-by-step guide to setting up your dashboard, see the CANedge intro.

We provide two options for integrating your CANedge data with Grafana dashboards:
The CANedge Grafana Backend app only processes data ‘when needed’ by an end user - and requires no database. It is ideal when you have large amounts of data - as you only process the data you need to visualize.
The CANedge InfluxDB Writer processes data in advance (e.g. periodically or on-file-upload) and writes the decoded data to a database. It is ideal if dashboard loading speed is critical - but with the downside that data is processed/stored even if it is not used.
For details incl. ‘pros & cons’, see our intro to telematics dashboards.
- easily load MF4 log files from local disk or S3 server
- fetch data from hardcoded time period - or automate with dynamic periods
- DBC-decode data and optionally extract specific signals
- optionally resample data to specific frequency
- optionally process multi-frame CAN data (ISO TP), incl. UDS, J1939, NMEA 2000
- write the data to your own InfluxDB time series database
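As a rough illustration of the resampling step, the decoded data can be thought of as a time-indexed pandas DataFrame. The sketch below uses a made-up signal name and timestamps - it is not the project's actual decoding code:

```python
import pandas as pd

# Hypothetical DBC-decoded signal data; in the real workflow this
# comes from decoding the MF4 log files
df = pd.DataFrame(
    {"Speed": [10.0, 12.0, 11.0, 13.0]},
    index=pd.to_datetime(
        [
            "2023-01-01 00:00:00",
            "2023-01-01 00:00:02",
            "2023-01-01 00:00:04",
            "2023-01-01 00:00:06",
        ]
    ),
)

# Resample to a fixed 4-second frequency, averaging samples within each bin
resampled = df.resample("4s").mean()
print(resampled["Speed"].tolist())  # [11.0, 12.0]
```

Resampling like this reduces the number of points written to InfluxDB, which can lower storage costs and speed up dashboard queries.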
In this section we detail how to deploy the app on a PC.
Note: We recommend testing the deployment with our sample data as a first step.
Windows:
- Download this project incl. the `requirements.txt` file
- Open `inputs.py` with a text editor and add your InfluxDB Cloud details
- Open the command prompt in the project folder and run `python -m venv env & env\Scripts\activate & pip install -r requirements.txt`
- Run `python main.py`

Linux:
- Open `inputs.py` with a text editor and add your InfluxDB Cloud details
- Open the terminal in the project folder and run `python -m venv env && source env/bin/activate && pip install -r requirements.txt`
- Run `python main.py`
- In Grafana, go to Configuration/Plugins and install the TrackMap plugin
- Go to Dashboards/Browse, click Import and load the `dashboard-template-sample-data.json` from this repo

You should now see the sample data visualized in Grafana.
Note: To activate your virtual environment, use `env\Scripts\activate` (Linux: `source env/bin/activate`)
Load from local disk:
- Replace the sample `LOG/` folder with your own `LOG/` folder (structured as `[device_id]/[session]/[split].MF4`)
- Add your DBC file(s) to the `dbc_files` folder
- Update `devices` and `dbc_paths` in `inputs.py` to reflect your added log and DBC files
- Set `days_offset = None` to ensure your data is written at the correct date
- Run `python main.py`

Load from your S3 server:
- Add your DBC file(s) to the `dbc_files` folder
- Update `dbc_paths` in `inputs.py` to reflect your added DBC files
- Update `devices` in `inputs.py` to reflect your S3 structure, i.e. `["bucket/device_id"]`
- Set `days_offset = None` to ensure your data is written at the correct date
- Update `inputs.py` with your S3 server details and set `s3 = True`

Note: You may want to modify other variables, e.g. adding signal filters, changing the resampling or modifying the default start date.
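For reference, the relevant `inputs.py` variables might look like the sketch below. The device ID and DBC filename are placeholders, and the exact defaults may differ in your copy of the file:

```python
# hypothetical inputs.py excerpt - adjust to your own setup

# local disk: folder paths; S3: "bucket/device_id" entries
devices = ["LOG/958D2219"]

# DBC file(s) placed in the dbc_files folder
dbc_paths = ["dbc_files/my_database.dbc"]

# write data at its original date (a non-None value offsets the dates,
# which is only useful for the sample data)
days_offset = None

# set to True (and fill in your server details) when loading from S3
s3 = False
```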
Import the `dashboard-template-simple.json` from this repo to visualize your own data. Once you've verified that your data is uploaded correctly, you can move on to automating the process. See the CANedge intro for details.
If you need to delete data in InfluxDB that you e.g. uploaded as part of a test, you can use the delete_influx(name) function from the SetupInflux class. Call it by passing the name of the 'measurement' to delete (i.e. the device ID): influx.delete_influx("958D2219")
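For context, a deletion helper like delete_influx can be built on the influxdb_client library's delete API. The sketch below is a hypothetical stand-alone version under that assumption, not the project's actual code:

```python
def measurement_predicate(measurement: str) -> str:
    # InfluxDB delete predicates select points by measurement/tag values
    return f'_measurement="{measurement}"'


def delete_measurement(client, bucket: str, org: str, measurement: str) -> None:
    # client is assumed to be an influxdb_client.InfluxDBClient instance;
    # the delete API requires an explicit time range, so we use a wide one
    client.delete_api().delete(
        start="1970-01-01T00:00:00Z",
        stop="2100-01-01T00:00:00Z",
        predicate=measurement_predicate(measurement),
        bucket=bucket,
        org=org,
    )
```

Usage would then mirror the project's helper, e.g. `delete_measurement(client, "my-bucket", "my-org", "958D2219")`.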
You can easily process multi-frame data by setting the tp_type variable to "j1939", "uds" or "nmea" and adding the relevant DBC file. For example, you can test this for the sample data by adding the DBC "dbc_files/nissan_uds.dbc" and setting tp_type = "uds".
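In `inputs.py`, the sample-data UDS test described above corresponds to settings along these lines (a sketch - check the variable names against your copy of the file):

```python
# enable multi-frame (ISO TP) decoding; use "" to disable,
# or "j1939" / "uds" / "nmea" depending on the protocol
tp_type = "uds"

# include the transport-protocol DBC alongside your other DBC files
dbc_paths = ["dbc_files/nissan_uds.dbc"]
```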
Note: If you use the paid InfluxDB Cloud and a paid S3 server, we recommend monitoring your usage closely during early tests to avoid unexpected costs.