Compare commits
5 Commits: 3182877094 ... 6a5ec86505

| SHA1 |
|---|
| 6a5ec86505 |
| e20f4b9f74 |
| 56e1f619e1 |
| 8930a36eaf |
| 194474fdff |
@@ -1,46 +1,140 @@
# Odroid PowerMate Logger and Plotter

This directory contains two Python scripts to log power data from an Odroid PowerMate device and visualize it.

1. `logger.py`: Connects to the device's web server, authenticates, and logs real-time power data from its WebSocket to a CSV file.
2. `csv_2_plot.py`: Reads the generated CSV file and creates a plot image of the power, voltage, and current data over time.

## Prerequisites

### 1. Clone this example

```bash
git clone https://github.com/hardkernel/odroid-powermate.git
cd odroid-powermate/example/logger
```

### 2. Python and Virtual Environment

It is highly recommended to use a Python virtual environment to manage project dependencies and avoid conflicts with other projects.

Ensure you have Python 3 installed.

1. **Create a virtual environment:**

    Open your terminal in this directory and run:

    ```bash
    python3 -m venv venv
    ```

    This will create a `venv` directory containing the Python interpreter and libraries.

2. **Activate the virtual environment:**

    * **On Windows:**

        ```powershell
        .\venv\Scripts\activate
        ```

    * **On macOS and Linux:**

        ```bash
        source venv/bin/activate
        ```

    Your terminal prompt should now show `(venv)` at the beginning, indicating that the virtual environment is active. A quick way to double-check this from Python is shown below.
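If you want to confirm the environment from Python itself, the following minimal snippet (not part of this example's scripts) uses only the standard library and asks the running interpreter whether a virtual environment is active:

```python
import sys

# Inside a virtual environment, sys.prefix points at the venv directory,
# while sys.base_prefix still points at the base Python installation.
if sys.prefix != sys.base_prefix:
    print(f"Virtual environment active: {sys.prefix}")
else:
    print("No virtual environment is active.")
```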

### 3. Install Required Libraries

With the virtual environment activated, install the necessary Python packages:

```bash
pip3 install requests websockets protobuf pandas matplotlib python-dateutil
```
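Optionally, you can verify that everything installed correctly. Note that two packages import under different names than their pip packages (`protobuf` as `google.protobuf`, `python-dateutil` as `dateutil`). A minimal check, not part of this repository, could look like this:

```python
import importlib

# pip package name -> Python import name
packages = {
    'requests': 'requests',
    'websockets': 'websockets',
    'protobuf': 'google.protobuf',
    'pandas': 'pandas',
    'matplotlib': 'matplotlib',
    'python-dateutil': 'dateutil',
}

for pip_name, module_name in packages.items():
    try:
        importlib.import_module(module_name)
        print(f"OK: {pip_name}")
    except ImportError as exc:
        print(f"MISSING: {pip_name} ({exc})")
```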

### 4. Protobuf Generated File

The `logger.py` script uses Google Protocol Buffers (Protobuf) to decode real-time data from the WebSocket. This requires a Python file, `status_pb2.py`, which is generated from a Protobuf definition file (`status.proto`).

**How to Generate `status_pb2.py`:**

1. **Install Protobuf Compiler Tools:**

    You need the `grpcio-tools` package, which includes the `protoc` compiler and its Python plugins. You can install it via pip:

    ```bash
    pip3 install grpcio-tools
    ```

2. **Locate the `.proto` file:**

    Ensure the `status.proto` file is available in the repository's `proto` directory (`../../proto` relative to this example). This file defines the structure of the data messages.

3. **Run the Compiler:**

    Execute the following command in your terminal. It tells `protoc` to look for `status.proto` in the repository's proto directory (`-I../../proto`) and to write the generated Python file (`--python_out=.`) to the current directory.

    ```bash
    python3 -m grpc_tools.protoc -I../../proto --python_out=. status.proto
    ```

    After running this command, the `status_pb2.py` file will be created in this directory, and `logger.py` will be able to import it.
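For reference, the sketch below shows roughly how the generated module can decode a binary WebSocket frame. The field access mirrors what `logger.py` does (`WhichOneof('payload')`, `sensor_data`, and the `vin`/`main`/`usb` channels); the top-level message class name used here (`Status`) is only an assumption, so check the generated `status_pb2.py` for the actual name:

```python
import status_pb2  # generated by the protoc command above


def decode_frame(raw_bytes: bytes) -> None:
    # Hypothetical top-level message name; look up the real one in status_pb2.py.
    msg = status_pb2.Status()
    msg.ParseFromString(raw_bytes)

    # logger.py only processes frames whose payload is 'sensor_data'.
    if msg.WhichOneof('payload') == 'sensor_data':
        data = msg.sensor_data
        print(data.timestamp_ms, data.vin.power, data.main.power, data.usb.power)
```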

## Usage

The process is a two-step workflow: first log the data, then plot it.

### Step 1: Log Power Data with `logger.py`

Run `logger.py` to connect to your Odroid PowerMate device and save the data to a CSV file.

**Syntax:**

```bash
python3 logger.py <host> -u <username> -p <password> -o <output_file.csv>
```

**Arguments:**

* `host`: The IP address or hostname of the Odroid PowerMate device (e.g., `192.168.1.50`).
* `-u`, `--username`: The username for logging in.
* `-p`, `--password`: The password for logging in.
* `-o`, `--output`: The path to save the output CSV file. This is required if you want to generate a plot.

**Example:**

This command logs in and saves the power data to `power_log.csv`.

```bash
python3 logger.py 192.168.1.50 -u admin -p mypassword -o power_log.csv
```

The script will continue to log data until you stop it with `Ctrl+C`.
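The resulting CSV contains a `timestamp` column (ISO 8601, UTC with a trailing `Z`), an `uptime_ms` column, and `voltage`/`current`/`power` columns for each of the `vin`, `main`, and `usb` channels. As a quick optional check (not part of the scripts), you can inspect a log with pandas:

```python
import pandas as pd

# Load the log; 'timestamp' is written as ISO 8601 UTC (trailing 'Z').
df = pd.read_csv('power_log.csv', parse_dates=['timestamp'])

print(df.columns.tolist())                    # timestamp, uptime_ms, vin_voltage, ...
print(df[['timestamp', 'main_power']].tail()) # last few MAIN power samples
print(f"Average MAIN power: {df['main_power'].mean():.3f} W")
```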

### Step 2: Generate a Plot with `csv_2_plot.py`

Once you have a CSV log file, you can use `csv_2_plot.py` to create a visual graph. You can also use a CSV file recorded from the PowerMate web interface.

**Syntax:**

```bash
python3 csv_2_plot.py <input.csv> <output.png> [options]
```

**Arguments:**

* `input_csv`: The path to the CSV file generated by `logger.py`.
* `output_image`: The path to save the output plot image (e.g., `plot.png`).

**Optional Arguments:**

* `-t`, `--type`: Specify which plots to generate. Choices are `power`, `voltage`, `current`. Default is all three.
* `-s`, `--source`: Specify which power sources to include. Choices are `vin`, `main`, `usb`. Default is all three.

**Example 1: Default Plot**

This command reads `power_log.csv` and generates a plot containing power, voltage, and current for all sources, saving it as `power_graph.png`.

```bash
python3 csv_2_plot.py power_log.csv power_graph.png
```

**Example 2: Custom Plot**

This command generates a plot showing only the **power** for the **MAIN** and **USB** sources.

```bash
# main, usb power consumption
python3 csv_2_plot.py power_log.csv custom_plot.png --type power --source main usb
```
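The same plot can also be produced from Python by importing the plotting function directly. This is a minimal sketch that assumes you run it from this directory and that `csv_2_plot.py` keeps its command-line handling behind a `__main__` guard; the `plot_power_data(csv_path, output_path, plot_types, sources)` signature comes from the script itself:

```python
from csv_2_plot import plot_power_data

# Equivalent to the CLI call above: power-only plot for the MAIN and USB sources.
plot_power_data(
    csv_path='power_log.csv',
    output_path='custom_plot.png',
    plot_types=['power'],
    sources=['main', 'usb'],
)
```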

## Example Output

Running the plot script will generate an image file similar to this:



Multiples of 5 on the y-axis are highlighted with a blue dashed line, and multiples of 10 with a red dashed line.
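These reference lines are drawn with `axhline` rather than the default major grid. The snippet below is a stripped-down illustration of the scheme used in `csv_2_plot.py` (midnightblue dashed lines at multiples of 5, maroon dashed lines at multiples of 10), applied here to an empty axis with a hypothetical output file name:

```python
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator

fig, ax = plt.subplots()
ax.set_ylim(0, 20)

# Fine minor grid every 1 unit, as in csv_2_plot.py.
ax.yaxis.set_minor_locator(MultipleLocator(1))
ax.yaxis.grid(True, which='minor', linestyle='--', linewidth=0.6, zorder=0)

# Emphasize multiples of 5 (blue) and 10 (red) with horizontal lines.
y_min, y_max = ax.get_ylim()
for y_val in range(int(y_min), int(y_max) + 1):
    if y_val == 0:
        continue
    if y_val % 10 == 0:
        ax.axhline(y=y_val, color='maroon', linestyle='--', linewidth=1.2, zorder=1)
    elif y_val % 5 == 0:
        ax.axhline(y=y_val, color='midnightblue', linestyle='--', linewidth=1.2, zorder=1)

fig.savefig('grid_demo.png')  # example output name
```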
@@ -3,6 +3,7 @@ import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd
from dateutil.tz import gettz
from matplotlib.ticker import MultipleLocator


def plot_power_data(csv_path, output_path, plot_types, sources):
@@ -19,16 +20,11 @@ def plot_power_data(csv_path, output_path, plot_types, sources):
    """
    try:
        # Read the CSV file into a pandas DataFrame
        df = pd.read_csv(csv_path, parse_dates=['timestamp'])
        print(f"Successfully loaded {len(df)} records from '{csv_path}'")

        # --- Timezone Conversion ---
        local_tz = gettz()
        df['timestamp'] = df['timestamp'].dt.tz_convert(local_tz)
        print(f"Timestamp converted to local timezone: {local_tz}")
@@ -39,24 +35,32 @@ def plot_power_data(csv_path, output_path, plot_types, sources):
        print(f"An error occurred while reading the CSV file: {e}")
        return

    # --- Calculate Average Interval ---
    avg_interval_ms = 0
    if len(df) > 1:
        avg_interval = df['timestamp'].diff().mean()
        avg_interval_ms = avg_interval.total_seconds() * 1000

    # --- Calculate Average Voltages ---
    avg_voltages = {}
    for source in sources:
        voltage_col = f'{source}_voltage'
        if voltage_col in df.columns:
            avg_voltages[source] = df[voltage_col].mean()

    # --- Plotting Configuration ---
    scale_config = {
        'power': {'steps': [5, 20, 50, 160]},
        'voltage': {'steps': [5, 10, 15, 25]},
        'current': {'steps': [1, 2.5, 5, 10]}
    }

    plot_configs = {
        'power': {'title': 'Power Consumption', 'ylabel': 'Power (W)', 'cols': [f'{s}_power' for s in sources]},
        'voltage': {'title': 'Voltage', 'ylabel': 'Voltage (V)', 'cols': [f'{s}_voltage' for s in sources]},
        'current': {'title': 'Current', 'ylabel': 'Current (A)', 'cols': [f'{s}_current' for s in sources]}
    }

    channel_labels = [s.upper() for s in sources]
    color_map = {'vin': 'red', 'main': 'green', 'usb': 'blue'}
    channel_colors = [color_map[s] for s in sources]
@@ -65,20 +69,17 @@ def plot_power_data(csv_path, output_path, plot_types, sources):
        print("No plot types selected. Exiting.")
        return

    fig, axes = plt.subplots(num_plots, 1, figsize=(15, 9 * num_plots), sharex=True, squeeze=False)
    axes = axes.flatten()

    # --- Loop through selected plot types and generate plots ---
    for i, plot_type in enumerate(plot_types):
        ax = axes[i]
        config = plot_configs[plot_type]

        max_data_value = 0
        for j, col_name in enumerate(config['cols']):
            if col_name in df.columns:
                ax.plot(df['timestamp'], df[col_name], label=channel_labels[j], color=channel_colors[j], zorder=2)
                max_col_value = df[col_name].max()
                if max_col_value > max_data_value:
                    max_data_value = max_col_value
@@ -86,34 +87,82 @@ def plot_power_data(csv_path, output_path, plot_types, sources):
                print(f"Warning: Column '{col_name}' not found in CSV. Skipping.")

        # --- Dynamic Y-axis Scaling ---
        ax.set_ylim(bottom=0)
        if plot_type in scale_config:
            steps = scale_config[plot_type]['steps']
            new_max = next((step for step in steps if step >= max_data_value), steps[-1])
            ax.set_ylim(top=new_max)

        ax.set_title(config['title'])
        ax.set_ylabel(config['ylabel'])
        ax.legend()

        # --- Grid and Tick Configuration ---
        y_min, y_max = ax.get_ylim()

        # Keep the dynamic major_interval logic for tick LABELS
        if plot_type == 'current' and y_max <= 2.5:
            major_interval = 0.5
        elif y_max <= 10:
            major_interval = 2
        elif y_max <= 25:
            major_interval = 5
        else:
            major_interval = y_max / 5.0

        ax.yaxis.set_major_locator(MultipleLocator(major_interval))
        ax.yaxis.set_minor_locator(MultipleLocator(1))

        # Disable the default major grid, but keep the minor one
        ax.yaxis.grid(False, which='major')
        ax.yaxis.grid(True, which='minor', linestyle='--', linewidth=0.6, zorder=0)

        # Draw custom lines for 5 and 10 multiples, which are now the only major grid lines
        for y_val in range(int(y_min), int(y_max) + 1):
            if y_val == 0: continue
            if y_val % 10 == 0:
                ax.axhline(y=y_val, color='maroon', linestyle='--', linewidth=1.2, zorder=1)
            elif y_val % 5 == 0:
                ax.axhline(y=y_val, color='midnightblue', linestyle='--', linewidth=1.2, zorder=1)

        # Keep the x-axis grid
        ax.xaxis.grid(True, which='major', linestyle='--', linewidth=0.8)

    # --- Formatting the x-axis (Time) ---
    local_tz = gettz()
    last_ax = axes[-1]
    if not df.empty:
        last_ax.set_xlim(df['timestamp'].iloc[0], df['timestamp'].iloc[-1])

    last_ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M:%S', tz=local_tz))
    last_ax.xaxis.set_major_locator(plt.MaxNLocator(15))
    plt.xlabel(f'Time ({local_tz.tzname(df["timestamp"].iloc[-1])})')
    plt.xticks(rotation=45)

    # --- Add a main title and subtitle ---
    start_time = df['timestamp'].iloc[0].strftime('%Y-%m-%d %H:%M:%S')
    end_time = df['timestamp'].iloc[-1].strftime('%H:%M:%S')
    main_title = f'PowerMate Log ({start_time} to {end_time})'

    subtitle_parts = []
    if avg_interval_ms > 0:
        subtitle_parts.append(f'Avg. Interval: {avg_interval_ms:.2f} ms')

    voltage_strings = [f'{source.upper()} Avg: {avg_v:.2f} V' for source, avg_v in avg_voltages.items()]
    if voltage_strings:
        subtitle_parts.extend(voltage_strings)

    subtitle = ' | '.join(subtitle_parts)

    full_title = main_title
    if subtitle:
        full_title += f'\n{subtitle}'

    fig.suptitle(full_title, fontsize=14)

    # Adjust layout to make space for the subtitle
    plt.tight_layout(rect=[0, 0, 1, 0.98])

    # --- Save the plot to a file ---
    try:
BIN  example/logger/img/plot.png (new file, binary not shown). After size: 239 KiB.
@@ -3,7 +3,7 @@ import asyncio
import csv
import requests
import websockets
from datetime import datetime, timezone

# Import the status_pb2.py file generated by `protoc`.
# This file must be in the same directory as logger.py.
@@ -68,7 +68,7 @@ class OdroidPowerLogger:
        csv_file = open(self.output_file, 'w', newline='', encoding='utf-8')
        csv_writer = csv.writer(csv_file)

        # Write header
        header = [
            'timestamp', 'uptime_ms',
            'vin_voltage', 'vin_current', 'vin_power',
@@ -97,24 +97,22 @@ class OdroidPowerLogger:
        # Process only if the payload type is 'sensor_data'
        if status_message.WhichOneof('payload') == 'sensor_data':
            sensor_data = status_message.sensor_data
            ts_dt = datetime.fromtimestamp(sensor_data.timestamp_ms / 1000, tz=timezone.utc)
            ts_str_print = ts_dt.strftime('%Y-%m-%d %H:%M:%S UTC')

            print(f"--- {ts_str_print} (Uptime: {sensor_data.uptime_ms / 1000}s) ---")

            # Print data for each channel
            for name, channel in [('VIN', sensor_data.vin), ('MAIN', sensor_data.main),
                                  ('USB', sensor_data.usb)]:
                print(
                    f" {name:<4}: {channel.voltage:5.2f} V | {channel.current:5.3f} A | {channel.power:5.2f} W")

            # Write to CSV if enabled
            if csv_writer:
                ts_iso_csv = ts_dt.isoformat(timespec='milliseconds').replace('+00:00', 'Z')
                row = [
                    ts_iso_csv, sensor_data.uptime_ms,
                    f"{sensor_data.vin.voltage:.3f}", f"{sensor_data.vin.current:.3f}", f"{sensor_data.vin.power:.3f}",
                    f"{sensor_data.main.voltage:.3f}", f"{sensor_data.main.current:.3f}", f"{sensor_data.main.power:.3f}",
                    f"{sensor_data.usb.voltage:.3f}", f"{sensor_data.usb.current:.3f}", f"{sensor_data.usb.power:.3f}"
Binary file not shown. Before size: 272 KiB.