Frequently Asked Questions

FAQs for Stream Data Processing

Q: When configuring the running mode of a stream processing job, should I choose the Standalone Mode or the Cluster Mode?

A: In the Standalone Mode, the underlying resources cannot be scaled horizontally, so the available running resources are limited. However, the Standalone Mode uses resources efficiently and is suitable for processing low-traffic data. In the Cluster Mode, the underlying resources can be scaled horizontally to meet the resource requirements of stream processing jobs, so it is suitable for processing high-traffic data.

Q: How many resources should I request for running a stream processing job?

A: You can refer to the Calculator Performance Description for the performance metrics of each calculator. The required resources can be estimated based on the calculators used, the data traffic volume, and the running mode of the stream processing job. The recommended method is to simulate the production data flow in the test environment, tune the running resources of the stream processing job according to the operation monitoring data, and then apply the resulting configuration to the production environment.
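
For a rough first estimate before testing, the arithmetic looks like the following sketch. All throughput figures here are hypothetical placeholders, not EnOS specifications; take the real per-calculator numbers from the Calculator Performance Description.

    # Hypothetical sizing sketch for a stream processing job.
    # Throughput values are made-up placeholders, not EnOS figures.
    CALCULATOR_THROUGHPUT = {            # records/second one compute unit handles
        "time_window_aggregate": 5000,
        "point_lookup": 20000,
    }

    def estimate_compute_units(input_rate, calculators, headroom=1.5):
        """Sum per-calculator needs and add headroom for traffic spikes."""
        units = sum(input_rate / CALCULATOR_THROUGHPUT[c] for c in calculators)
        return units * headroom

    # Example: 10,000 records/s flowing through two calculators.
    print(estimate_compute_units(10_000, ["time_window_aggregate", "point_lookup"]))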

Q: When I start a newly published stream processing job, the startup fails. Why?

A: Several reasons might cause a stream processing job to fail at startup. You can troubleshoot with the following steps.

  1. Ensure that your network connection is ready when you perform maintenance operations on the stream processing job.

  2. Ensure that the requested resource quota is enough for the running resource configuration of the stream processing job. You can request more resources or adjust the running resource configuration as needed.

  3. If stream processing system errors are reported, you can try restarting the stream processing job or contact the EnOS operation team.

Q: My stream processing job is started and running, but the calculated output is not generated as expected. Why?

A: If the stream processing job is running but no output data is found on the monitoring page, the situation might be caused by the following reasons.

  1. The configuration of the stream processing job is not correct, such as incorrect measurement point IDs.

  2. The input point data is not uploaded as expected, so no output data is generated.

  3. Required system pipelines are not started and running correctly, causing data consumption or data output to fail.

  4. The output point is not registered in the asset model, so the calculated data cannot be stored normally.

Q: How many stream processing jobs can be created for an organization?

A: Currently, an organization can have at most 50 stream processing jobs.

FAQs for Time Series Data Management

Q: What preparation work is required before configuring TSDB storage policies?

A: Before configuring TSDB storage policies, you need to request the Time Series Database resource for your organization. Otherwise, the configured TSDB storage policies will not take effect. To request the Time Series Database resource, see Resource Management on EnOS.

Q: When should I configure TSDB storage policies?

A: It is recommended that you configure TSDB storage policies after your devices are connected to the IoT hub and before device data is ingested. Otherwise, the ingested data will not be stored in TSDB by default. If you want to store the data that is processed by the streaming engine, you must configure the TSDB storage policies for the processed data before the stream processing jobs start running.

Q: Will the TSDB storage policies take effect immediately after the configuration is saved?

A: The storage policies will take effect about 5 minutes after the configuration is saved.

Q: How many storage policy groups can be created for an organization?

A: Currently, an organization can have at most 2 storage policy groups.

Q: Can attributes of models and measurement points that are associated with TSDB storage policies be modified?

A: Once TSDB storage policies are configured, the associated measurement point IDs, measurement point types, and data types cannot be modified. If they are modified, the stored data cannot be retrieved with the EnOS TSDB data service APIs.

Q: My devices are connected and have started uploading data to the cloud. Why can't I get the data through the data service APIs?

A: After connecting the devices, you need to configure TSDB storage policies for your device measurement points. Otherwise, the ingested data will not be stored in TSDB by default, and you cannot get the data through the APIs.
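
Once a storage policy is in place, the ingested data can be read back through the TSDB data service. The following sketch only illustrates the call shape: the endpoint, parameter names, and authentication are placeholders, so refer to the EnOS API documentation for the actual interface and request signing.

    import requests

    # Placeholder endpoint and parameters -- not the real EnOS interface.
    # The actual TSDB data service APIs require signed requests; see the
    # EnOS API documentation for paths, parameters, and authentication.
    resp = requests.get(
        "https://apigw.example.com/tsdb-service/latest",  # placeholder URL
        params={
            "orgId": "your_org_id",          # placeholder organization ID
            "assetIds": "your_asset_id",     # placeholder asset ID
            "measurepoints": "temperature",  # placeholder measurement point ID
        },
    )
    print(resp.json())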

Q: Can data stored in TSDB be deleted?

A: Data stored in TSDB can be deleted with the Data Deletion feature. For more information, see Deleting Data in TSDB.

Q: Can data stored in TSDB be archived?

A: Yes. Data stored in TSDB can be archived with the Data Archiving service.

FAQs for Data Subscription

Q: How many data subscription jobs can be created for an organization?

A: Currently, an organization can have at most 15 data subscription jobs.

Q: How many consumer groups are supported for a data subscription job? How many consumers are supported in a consumer group?

A: The number of consumer groups for a data subscription job is not limited, but a consumer group allows 2 consumers to consume the subscribed data at the same time.
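
Subscribed data is usually consumed through the EnOS subscription SDK; purely to illustrate consumer-group semantics, the sketch below uses the kafka-python package with placeholder connection details. Two processes running this code with the same group_id split the subscribed partitions between them, which matches the 2-consumer limit per group.

    from kafka import KafkaConsumer  # pip install kafka-python

    # Placeholder topic, broker, and group; real connection details come
    # from your data subscription job in the EnOS Console.
    consumer = KafkaConsumer(
        "subscription-topic",             # placeholder topic name
        bootstrap_servers="broker:9092",  # placeholder broker address
        group_id="my-consumer-group",     # consumers sharing this ID share the load
        auto_offset_reset="earliest",     # start from the oldest retained record
    )

    for record in consumer:
        print(record.offset, record.value)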

Q: How long will subscribed real-time asset data be stored in Kafka topics?

A: By default, subscribed data is stored in Kafka topics for 3 days. If data consumption stops temporarily, you can resume consuming the subscribed data within 3 days after the real-time data is subscribed.

FAQs for Data Archiving

Q: Do data archiving jobs support both automatic and manual modes?

A: The running of data archiving jobs is rule-driven. You need to configure data archiving jobs based on your business needs, such as where to store the archived data, which data to archive, and the archiving cycle. Once a data archiving job is started and running, data is archived according to the configuration without human intervention.

Currently, data archiving supports the Real-Time and Offline job types. A real-time archiving job keeps running: once data is generated at the data source, the job archives it according to the configuration automatically. An offline archiving job runs only once: after all the data specified in the configuration is archived, the job stops running.
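
For illustration only, the information such a job configuration carries can be pictured as the following sketch; the field names and values are hypothetical placeholders, not the actual EnOS configuration schema.

    # Hypothetical sketch of a data archiving job configuration.
    # Field names are illustrative placeholders, not the EnOS schema.
    archiving_job = {
        "jobType": "REAL_TIME",                # or "OFFLINE" for a one-shot run
        "dataSource": "asset-real-time-data",  # which data to archive
        "targetStorage": "ENOS_HDFS",          # where to store the archived data
        "storagePath": "/tds/ods/alarm1/",
        "archivingCycle": "1h",                # how often archive files are rolled
    }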

Q: What will be impacted if the configuration of a running data archiving job is modified?

A: After the modified data archiving job configuration is submitted, the updated configuration takes effect immediately. Data that has already been archived is not impacted. For example, if the storage path of archived data is changed from /tds/ods/alarm1/ to /tds/ods/alarm2/, the new storage path takes effect immediately after the change is submitted. After about 1-2 minutes, newly archived data is stored in the alarm2 directory. The archived data already stored in the alarm1 directory is not impacted.

Q: How do I query the data that has been archived in the target storage?

A: The Data Archiving service archives data from the data sources to the target storage. It is a set of archiving job configuration and management tools; it neither manages the target storage systems nor provides query capabilities for the archived data. You need to use the management tools of the corresponding target storage systems to query the data. For example:

  1. If the target storage is EnOS HDFS, you can use the EnOS Data Sandbox product to query data stored in HDFS. For information about using the Data Sandbox, see Data Sandbox.

  2. If the target storage is Azure Blob, you can use the client tools or SDKs provided by the Azure platform to query the data stored in Blob Storage, as shown in the sketch after this list.
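
As a minimal sketch of the second case, the azure-storage-blob Python package can list archived files under a path prefix; the connection string, container name, and prefix below are placeholders.

    from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

    # Placeholder connection details for the target Blob Storage account.
    service = BlobServiceClient.from_connection_string("your_connection_string")
    container = service.get_container_client("archive-container")  # placeholder

    # List archived files under a placeholder path prefix.
    for blob in container.list_blobs(name_starts_with="tds/ods/alarm1/"):
        print(blob.name, blob.size)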

Q: When a data archiving job is restarted after a running failure, will the job re-archive the data from the moment when the job failed?

A: It depends on the job type:

  1. For real-time data archiving, when the failed job is restarted, it automatically re-archives all the failed data of the last 3 days. If the job has been failing for more than 3 days, it can only process the data of the latest 3 days. Therefore, when a data archiving job failure triggers an alert notification through SMS or email, the alert receiver must take action in time to avoid data loss.

  2. For offline data archiving, when the failed job is restarted, it re-archives all the data specified in the configuration.

FAQs for Data Synchronization and Batch Processing

Q: Does the Data Synchronization service support synchronizing both structured data and unstructured data?

A: Yes. The Data Synchronization service supports synchronizing structured data and file streams (unstructured data).

Q: Do Data Synchronization and Batch Processing services support system variables?

A: Yes. The Data Synchronization and Batch Processing services support triggering-time and business-date variables, time-related variables, and non-time-related variables to achieve dynamic parameter passing. For detailed information, see Supported System Variables.

Q: Do Data Synchronization and Batch Processing services support resource isolation?

A: Yes. Currently, the resources used by the Data Synchronization and Batch Processing services are dynamically requested on demand, and they are released after the data synchronization and batch processing jobs are completed. The requested resources are completely isolated and do not affect each other.

Q: Does the Batch Processing service support distributed operation of multiple tasks?

A: Yes. When configuring the running mode of a batch processing task, you can specify the source of the distribution key to enable distributed operation of multiple tasks, which improves running efficiency.

Q: Do Data Synchronization and Batch Processing services support alert configuration?

A: Yes. After you configure the alert service for the Data Synchronization and Batch Processing services, alert messages are sent to the specified receivers through SMS or email when a running exception occurs.

Q: Does the Batch Processing service support calling by external applications?

A: Yes. The Batch Processing service provides REST APIs for integration with external applications.
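
As a minimal illustration of such an integration, an external application could trigger a batch processing workflow over REST roughly as follows. The endpoint, payload fields, and authentication are placeholders; the actual paths, parameters, and request signing are defined in the EnOS API documentation.

    import requests

    # Placeholder endpoint and payload -- not the real EnOS interface.
    resp = requests.post(
        "https://apigw.example.com/batch-processing-service/workflows/trigger",
        json={
            "orgId": "your_org_id",         # placeholder organization ID
            "workflowId": "your_workflow",  # placeholder workflow identifier
        },
    )
    print(resp.status_code, resp.json())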

Q: Why can’t the tables in the MySQL database be displayed normally in Data Synchronization?

A: The time zone of the MySQL database needs to be set to UTC; otherwise, the tables in the MySQL database cannot be displayed normally in Data Synchronization.
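
To verify the setting, you can query the server time zone, shown here with the PyMySQL package (connection details are placeholders). UTC corresponds to '+00:00'; changing the global time zone requires the appropriate privilege, or it can be set in the server configuration file with default-time-zone = '+00:00'.

    import pymysql  # pip install pymysql

    # Placeholder connection details for the MySQL source database.
    conn = pymysql.connect(host="mysql-host", user="user",
                           password="password", database="mydb")
    with conn.cursor() as cur:
        # Check the current global and session time zones.
        cur.execute("SELECT @@global.time_zone, @@session.time_zone")
        print(cur.fetchone())
        # To change it for the running server (needs SUPER or
        # SYSTEM_VARIABLES_ADMIN privilege):
        # cur.execute("SET GLOBAL time_zone = '+00:00'")
    conn.close()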