The Palos Publishing Company


How to design feedback collection in edge-deployed ML models

Designing feedback collection for edge-deployed ML models requires balancing several constraints: limited device resources, real-time data processing, user privacy, and intermittent connectivity. Here’s how to approach the design:

1. Identify Feedback Sources

First, identify where and how feedback will be collected:

  • User Input: Users might provide direct feedback via an app or device interface (e.g., thumbs up/down, rating systems, text feedback).

  • Model Performance: Indirect feedback can come from observing the model’s outputs or actions, such as whether a recommendation or prediction was accepted or acted upon.

  • Environmental Data: The feedback can also come from sensors or logs, where you track model performance in real-world conditions.
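The three feedback sources above can be captured in a single, uniform record so that downstream buffering and upload logic does not need to care where a signal came from. This is a minimal sketch; the `FeedbackEvent` class and its field names are illustrative, not a standard schema.

```python
import time
from dataclasses import dataclass, field

# Hypothetical unified feedback event covering all three sources:
# explicit user input, implicit model-performance signals, and sensor logs.
@dataclass
class FeedbackEvent:
    source: str     # "user", "model", or "sensor"
    signal: str     # e.g. "thumbs_up", "prediction_accepted", "temp_drift"
    value: float    # numeric payload: rating, acceptance flag, sensor reading
    timestamp: float = field(default_factory=time.time)

# One example event from each feedback source:
events = [
    FeedbackEvent("user", "thumbs_up", 1.0),
    FeedbackEvent("model", "prediction_accepted", 0.0),
    FeedbackEvent("sensor", "temp_drift", 2.7),
]
sources = {e.source for e in events}
```

A shared envelope like this makes it easy to route every signal through the same local buffer and upload path.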

2. Decouple Feedback from Core ML Pipeline

To avoid interfering with inference, feedback collection should be decoupled from the primary ML pipeline.

  • Buffering Mechanism: Store the feedback locally on the edge device until connectivity is restored (if necessary).

  • Asynchronous Feedback Collection: Collect feedback asynchronously, so it does not impact the real-time decision-making of the ML model.
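Both bullets can be combined with a bounded queue drained by a background worker: the inference path only enqueues (and never blocks), while uploading happens off the hot path. A minimal sketch, with a list standing in for the real upload call:

```python
import queue
import threading

# Bounded local buffer: feedback waits here until the worker drains it.
feedback_queue = queue.Queue(maxsize=1000)
uploaded = []  # stand-in for a cloud upload target

def record_feedback(event):
    """Non-blocking: drop feedback rather than stall inference if full."""
    try:
        feedback_queue.put_nowait(event)
        return True
    except queue.Full:
        return False

def drain_worker():
    """Background consumer that would upload or persist events."""
    while True:
        event = feedback_queue.get()
        if event is None:          # sentinel to stop the worker
            break
        uploaded.append(event)     # a real system would upload here

worker = threading.Thread(target=drain_worker, daemon=True)
worker.start()
record_feedback({"signal": "thumbs_down"})
feedback_queue.put(None)           # stop the worker for this demo
worker.join()
```

Dropping feedback on a full queue is a deliberate trade-off: on an edge device, losing a feedback event is usually preferable to delaying a prediction.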

3. Data Collection and Privacy Considerations

Ensure that any feedback collection complies with privacy regulations (e.g., GDPR, CCPA) and preserves user anonymity.

  • Minimal Data Collection: Collect only essential feedback data (e.g., interaction, preferences, satisfaction) and avoid storing sensitive personal information unless explicitly necessary.

  • Data Anonymization: If the feedback involves user data, anonymize it to reduce privacy concerns.
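One common anonymization approach is to pseudonymize user identifiers with a salted hash before anything leaves the device. The sketch below assumes a per-device random salt, so hashes cannot be reversed or correlated across devices; note that salted hashing is pseudonymization, not full anonymization, and may still count as personal data under GDPR.

```python
import hashlib
import secrets

# Per-device random salt, generated once and kept only on the device.
DEVICE_SALT = secrets.token_bytes(16)

def pseudonymize(user_id: str) -> str:
    """Replace a raw identifier with a one-way salted hash."""
    return hashlib.sha256(DEVICE_SALT + user_id.encode()).hexdigest()

# Only the hash and the minimal feedback payload are stored or sent.
feedback_record = {
    "user": pseudonymize("alice@example.com"),  # no raw PII retained
    "rating": 4,
}
```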

4. Efficient Feedback Storage

Edge devices have limited storage and computational resources, so storing feedback efficiently is crucial.

  • Lightweight Storage Format: Use compact formats like JSON or Protocol Buffers for storing feedback. If feedback is text-heavy, consider summarizing or compressing it.

  • Buffering/Queuing System: Implement a local buffer or queue that temporarily stores the feedback until it can be uploaded to the cloud or central system for further processing.
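For a concrete sense of the savings, compact JSON (no whitespace) plus standard-library compression already shrinks repetitive feedback logs substantially before they touch flash or the network. A minimal sketch using `zlib`:

```python
import json
import zlib

# 100 similar feedback records, as they might accumulate in a local buffer.
feedback = [{"signal": "thumbs_up", "ts": 1700000000 + i} for i in range(100)]

# Compact JSON: separators=(",", ":") removes all optional whitespace.
raw = json.dumps(feedback, separators=(",", ":")).encode()
packed = zlib.compress(raw, level=9)

# Round-trip to confirm nothing was lost.
restored = json.loads(zlib.decompress(packed))
```

Protocol Buffers would reduce the payload further by replacing repeated string keys with numeric field tags, at the cost of maintaining a schema.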

5. Real-time or Batch Uploads

Decide whether to upload feedback in real-time or batch:

  • Real-time Feedback: Suitable when low latency is required, but continuous uploads can strain network and battery resources.

  • Batch Feedback: Suitable for devices with intermittent or poor connectivity, where feedback is uploaded periodically (e.g., when the device connects to a network).
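The batch strategy can be sketched as an uploader that accumulates events locally and flushes only when the device is online and the batch is large enough. `send_batch` here is a hypothetical stand-in for a real HTTPS upload:

```python
sent_batches = []

def send_batch(batch):
    """Stand-in for uploading a batch to the central system."""
    sent_batches.append(list(batch))

class BatchUploader:
    def __init__(self, flush_size=3):
        self.buffer = []
        self.flush_size = flush_size

    def add(self, event, online: bool):
        self.buffer.append(event)
        # Flush only when connected and the batch is big enough.
        if online and len(self.buffer) >= self.flush_size:
            send_batch(self.buffer)
            self.buffer.clear()

uploader = BatchUploader()
for i in range(5):
    # Simulate being offline for the first two events.
    uploader.add({"id": i}, online=(i >= 2))
```

Events queued while offline simply wait in the buffer; a production version would also flush on a timer or on a connectivity-restored callback.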

6. Edge Processing of Feedback

While feedback might not be directly tied to the ML model’s inference, it can be used to enhance the model or tune hyperparameters over time.

  • Local Feedback Processing: If applicable, pre-process or analyze feedback locally on the device. For instance, you might extract summary statistics (e.g., how often a recommendation was accepted) or detect performance anomalies.

  • Continuous Learning: In some scenarios, feedback might be used to update models in a lightweight manner on the edge, either by retraining or fine-tuning the model in a federated learning fashion.
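Local pre-processing can be as simple as reducing raw events to summary statistics on-device, so only a few numbers, rather than every interaction, need to be transmitted. A minimal sketch of the acceptance-rate example mentioned above:

```python
# 1 = recommendation accepted, 0 = ignored/rejected, as logged on-device.
accepted = [1, 1, 0, 1, 0, 1, 1, 1]

# Only this small summary leaves the device, not the raw event stream.
summary = {
    "n": len(accepted),
    "acceptance_rate": sum(accepted) / len(accepted),
}
```

Beyond saving bandwidth, aggregating locally is also a privacy win: individual interactions never leave the device.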

7. Error Handling and Model Retraining Triggers

Feedback should not only report how well the model is performing but also trigger corrective actions:

  • Negative Feedback Triggers: If a threshold for negative feedback is exceeded (e.g., 80% of users find a prediction incorrect), this should trigger a review or even a model retraining.

  • Error Reporting: Implement mechanisms for reporting failures, such as a failed prediction or erroneous outputs, so that the model can be adjusted.
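A retraining trigger can be sketched as a threshold check over a sliding window of recent feedback; the 0.8 threshold mirrors the 80% example above, and both constants would be tuned per deployment:

```python
NEGATIVE_THRESHOLD = 0.8   # fraction of negative feedback that triggers review
WINDOW = 10                # number of most recent feedback events to consider

def needs_retraining(recent_feedback):
    """recent_feedback: list of booleans, True = user marked prediction wrong."""
    window = recent_feedback[-WINDOW:]
    if not window:
        return False
    negative_rate = sum(window) / len(window)
    return negative_rate >= NEGATIVE_THRESHOLD

healthy = [False] * 8 + [True] * 2   # 20% negative: no action
failing = [False] * 1 + [True] * 9   # 90% negative: trigger review/retraining
```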

8. Feedback Visualization and Monitoring

For real-time monitoring, integrate feedback collection with an observability system:

  • Dashboard for Edge Devices: Visualize feedback metrics such as user interactions, satisfaction levels, and performance trends.

  • Alerting System: Implement alerting based on specific feedback thresholds (e.g., an increase in negative feedback triggers an alert to the ML team).
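A lightweight version of both bullets: each device periodically emits a small metric record for the dashboard, and an alert fires when the negative-feedback ratio jumps between reporting windows. The function names and the 0.2 jump threshold are illustrative:

```python
def make_metric(window_events):
    """Reduce one reporting window to a dashboard-friendly metric record."""
    neg = sum(1 for e in window_events if e == "negative")
    return {"total": len(window_events), "negative_ratio": neg / len(window_events)}

def should_alert(prev, curr, jump=0.2):
    """Alert when the negative ratio rises by more than `jump` between windows."""
    return curr["negative_ratio"] - prev["negative_ratio"] > jump

prev = make_metric(["positive"] * 9 + ["negative"])       # 10% negative
curr = make_metric(["positive"] * 6 + ["negative"] * 4)   # 40% negative: alert
```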

9. Model Calibration Based on Feedback

Once feedback is collected and uploaded to a central server:

  • Batch Model Updates: Use the feedback for model retraining or updating. This can be performed periodically based on the volume of feedback collected or triggered by significant shifts in user behavior.

  • Active Learning: Use feedback as part of an active learning loop, where the model is re-trained based on the most uncertain or informative feedback examples.
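The active-learning selection step can be sketched as picking the feedback-labeled examples where the model was least confident, so retraining effort goes to the most informative cases. Field names here are illustrative:

```python
# Feedback-labeled examples with the model's confidence at prediction time.
examples = [
    {"id": 1, "confidence": 0.97},
    {"id": 2, "confidence": 0.51},   # near the decision boundary
    {"id": 3, "confidence": 0.88},
    {"id": 4, "confidence": 0.60},
]

def select_for_retraining(examples, k=2):
    """Return the k examples where the model was least confident."""
    return sorted(examples, key=lambda e: e["confidence"])[:k]

picked = [e["id"] for e in select_for_retraining(examples)]
```

Least-confidence sampling is only one acquisition strategy; margin- or entropy-based selection drops in by changing the sort key.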

10. User Engagement for Continuous Improvement

To improve the model’s learning process and make users feel involved, consider ways to engage them:

  • Provide Feedback Acknowledgment: Notify users that their feedback was received and is being used to improve the system.

  • Request More Detailed Feedback: For users who give negative feedback, you can ask for more details, but be sure to keep this non-intrusive.

11. Edge-specific Design Considerations

  • Energy Efficiency: Edge devices may have power constraints, so ensure the feedback collection process is efficient in terms of battery usage.

  • Offline Functionality: Design the feedback collection system to work offline, as edge devices may not always have a reliable internet connection.

  • Compression Techniques: To save on storage and transmission costs, consider compressing feedback data before it is sent to the server.

12. Feedback-driven Model Evaluation

Use the feedback data not only for retraining but also for evaluating how well the model generalizes in the real world. If feedback shows consistent misclassifications or suboptimal results in certain situations, those can be marked as edge cases for model improvements.
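Marking edge cases can be automated by grouping misclassification feedback by input context and flagging any context that fails repeatedly. A minimal sketch, with hypothetical context labels:

```python
from collections import Counter

# Context tags attached to misclassification feedback (illustrative labels).
misclassified_contexts = ["low_light", "low_light", "rain", "low_light", "fog"]

def find_edge_cases(contexts, min_count=3):
    """Flag contexts with at least `min_count` reported failures."""
    return [ctx for ctx, n in Counter(contexts).items() if n >= min_count]

edge_cases = find_edge_cases(misclassified_contexts)
```

Flagged contexts then become targeted additions to the evaluation set and candidates for focused data collection.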


In conclusion, designing an efficient and robust feedback collection system for edge-deployed ML models involves addressing privacy, resource limitations, and real-time requirements while ensuring that collected feedback can be used to continuously improve model performance.
