Question no: 16
Verified Answer: C. Create a task streaming to a streaming target endpoint (e.g., Kafka) and consume that endpoint in the following tasks as a source endpoint
Step-by-Step Comprehensive and Detailed Explanation with References: To deliver data from a source endpoint with minimal impact and distribute it to several target endpoints in Qlik Replicate, the best approach is:
C. Create a task streaming to a streaming target endpoint (e.g., Kafka) and consume that endpoint in the following tasks as a source endpoint: This approach reads the source only once, so the impact on the source system stays minimal. By streaming the changes to a platform like Kafka, which is designed for high-throughput, scalable, and fault-tolerant storage, the resulting data stream can then serve as the source for multiple downstream tasks.
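To make the fan-out pattern concrete, the sketch below shows a downstream consumer reading the change records that a Replicate task publishes to a Kafka topic. This is a minimal, non-Qlik-specific illustration: the broker address, topic name, consumer group, and JSON payload shape are all illustrative assumptions.

    # Minimal sketch of one downstream consumer of the replicated change stream.
    # Broker, topic, group id, and message format are assumptions for illustration.
    import json
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "kafka-broker:9092",  # assumed broker address
        "group.id": "downstream-task-1",           # each downstream task uses its own group
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["replicate.orders"])       # assumed topic written by the Replicate task

    try:
        while True:
            msg = consumer.poll(1.0)               # wait up to 1s for the next change record
            if msg is None:
                continue
            if msg.error():
                print(f"Consumer error: {msg.error()}")
                continue
            record = json.loads(msg.value())       # assumes JSON-encoded change records
            # Apply the change to this task's own target here.
            print(record)
    finally:
        consumer.close()

Because each downstream consumer uses its own consumer group, every task receives the full stream independently, while the source database itself is read only once by the original Replicate task.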
The other options are less optimal because:
A. Create a LogStream task followed by multiple tasks using an endpoint that reads changes from the log stream staging folder: Although this option uses a LogStream, it does not stream to a target endpoint that multiple tasks can consume as a source, which is essential for minimal-impact distribution.
B. Create a task streaming to a dedicated buffer database (e.g., Oracle or MySQL) and consume that database in the following tasks as a source endpoint: This option introduces additional complexity and potential performance overhead by using a buffer database.
D. Create multiple tasks using the same source endpoint: Each task would read the source independently, increasing the load and impact on the source endpoint, which is contrary to the requirement of minimal impact.
For more detailed information on how to set up streaming tasks to target endpoints like Kafka and how to configure subsequent tasks to consume from these streaming endpoints, you can refer to the official Qlik documentation on Adding and managing target endpoints.