Using Kafka Module in Mendix

{{container}}
{{container layoutStyle="columns"}}
(((
{{warning}}
Note that the following only applies to our latest version of the eMagiz Kafka Module (1.3.0). If you want to migrate to this version, please check out this [[migration path>>doc:Main.eMagiz Support.Migration Paths.migration-path-mendix-kafka-connector-to-130.WebHome||target="blank"]].
{{/warning}}

In this microlearning, we will focus on how you can use the Kafka Module from eMagiz, as available via the portal, to consume and produce data from and to topics managed within the eMagiz Kafka Cluster.

Should you have any questions, please get in touch with [[academy@emagiz.com>>mailto:academy@emagiz.com]].

== 1. Prerequisites ==

* Basic knowledge of the eMagiz platform
* Basic knowledge of the Mendix platform
* A Mendix project in which you can test this functionality
* A Kafka cluster to which you can connect
* Access to the eMagiz Event Catalog to obtain the required keystore and truststore (or contact with someone who can provide you with the relevant keystore and truststore)
** To learn more about the eMagiz Event Catalog, please refer to the [[eMagiz Event Catalog>>doc:Main.eMagiz Academy.Microlearnings.Crash Course.Crash Course Event Streaming.crashcourse-eventstreaming-catalog||target="blank"]] microlearning.

== 2. Key concepts ==

This microlearning centers around using the Kafka Module of eMagiz in Mendix.
By "using," we mean, in this context, being able to produce and consume data to and from topics that are managed within an external Kafka Cluster, such as the eMagiz Kafka Cluster.

By knowing how to easily set up Mendix to consume and produce data from and to topics, you gain the option to transport large volumes of data between several systems (i.e., two Mendix applications) more effectively.

* Producing data on a topic means that the external system, in this case Mendix, writes data to a pre-defined topic, where the data is stored temporarily so that one or more other systems can consume it.
* Consuming data from a topic means that the external system, in this case Mendix, reads data from a pre-defined topic where the data is stored temporarily.

== 3. Using Kafka Module in Mendix ==

To use the Kafka Module in Mendix, you need to be able to do at least the following:

* Set up a connection to the external Kafka Cluster (i.e., the eMagiz Kafka Cluster) from Mendix
* Configure a Producer that can write (publish) data to a topic **or** configure a Consumer that can read (listen to) data from a topic

When you have configured these steps, you have to think about how you want to transfer data from and to your data model.
That part is excluded from this microlearning, as it focuses solely on how you build microflows in Mendix.
It is good to know that the Kafka Module comes with some good example microflows that you can use as a starting point.
These examples can be found in the \_USE_ME folder of the Mendix module.

=== 3.1 Initial Configuration Kafka Module ===

The first step is to set up the connection to an external Kafka Cluster. Before we can configure anything, we first need to retrieve the correct Kafka Module from the Mendix Marketplace.

==== 3.1.1 eMagiz Kafka Module ====

The eMagiz Kafka Module is available in the Mendix Marketplace and is also made available via the portal (similar to the eMagiz Mendix Connector). If you have trouble finding the correct module, please contact productmanagement@emagiz.com.

[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-using-kafka-module-mendix--portal-overview-kafka-module.png]]

==== 3.1.2 Making the configuration page accessible ====

After you have imported the Kafka Module within your project, you can take the next step.
Within the Kafka Module, there is a microflow called OpenAdministration. Ensure an admin can reach this microflow, as it is the starting point for the rest of the configuration.

=== 3.2 Set up a connection to the external Kafka Cluster ===

When you run your project, call this microflow. It will lead you to the following page.

[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-using-kafka-module-mendix--kafka-admin-overview.png]]

{{info}}
Please read through the steps described on the "General" page, as they should give you all the relevant info when you start from scratch. To migrate from an existing eMagiz Kafka Module to the latest one, please check out this [[migration path>>doc:Main.eMagiz Support.Migration Paths.migration-path-mendix-kafka-connector-to-130.WebHome||target="blank"]].
{{/info}}

This page consists of three tabs. The first tab documents the steps you need to take. The second tab holds the server configuration and the producer configuration. The third tab holds the consumer configuration. In this segment, we will focus our attention on setting up the connection between Mendix and the Kafka cluster, as this is a prerequisite for the configuration of producers and consumers and for a working solution.

To set up the general settings, first navigate to the tab called Configuration Details. This will lead you to the following overview.

[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-using-kafka-module-mendix--kafka-admin-overview-configuration.png]]

This overview holds all the generic configuration elements needed to set up a connection to a Kafka cluster. Under the advanced tab, you will see the detailed configuration. You do not have to change any of these settings.

The settings you do need to change/fill in, however, are:

* Bootstrap servers
* Reference to the keystore and truststore, including passwords

First, fill in the bootstrap server URL. Once you have done so, you can continue on this page by filling in the SSL details. In this second part of the overview, we define the truststore and keystore needed to authenticate ourselves with the Kafka Cluster.

To retrieve the relevant details that you need to configure here, please ask the contact performing the eMagiz implementation to grant you access to the catalog. For more information on the catalog, please check out this [[microlearning>>doc:Main.eMagiz Academy.Microlearnings.Crash Course.Crash Course Event Streaming.crashcourse-eventstreaming-catalog||target="blank"]].

[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-using-kafka-module-mendix--kafka-admin-overview-ssl.png]]

Once you have received the keystore and truststore, you can upload them and fill in the passwords.
Remember, it is a best practice to ensure that the private key password of the keystore always matches the password of the keystore itself.

[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-using-kafka-module-mendix--kafka-admin-overview-ssl-filled-in.png]]

The moment you are satisfied with your configuration, press Save. A popup will confirm that the configuration has indeed been saved.

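For reference, the connection settings on this page correspond to the standard SSL properties of the Apache Kafka Java client. Below is a minimal sketch of those properties, assuming the plain Kafka client; the bootstrap server URL, file paths, and passwords are placeholders. The eMagiz Kafka Module manages these values for you based on what you fill in here.

{{code language="java"}}
// Minimal sketch: the SSL connection settings from this page expressed as
// plain Apache Kafka client properties. All values are placeholders.
import java.util.Properties;

public class KafkaSslConfigSketch {

    public static Properties sslProperties() {
        Properties props = new Properties();
        // Bootstrap server URL as obtained via the eMagiz Event Catalog (placeholder).
        props.put("bootstrap.servers", "broker.example.com:9093");
        // Mutual TLS: authenticate with the keystore, trust the cluster via the truststore.
        props.put("security.protocol", "SSL");
        props.put("ssl.keystore.location", "/path/to/keystore.jks");
        props.put("ssl.keystore.password", "keystore-password");
        // Best practice from this microlearning: the private key password
        // matches the keystore password.
        props.put("ssl.key.password", "keystore-password");
        props.put("ssl.truststore.location", "/path/to/truststore.jks");
        props.put("ssl.truststore.password", "truststore-password");
        return props;
    }
}
{{/code}}
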
=== 3.3 Configure a Producer ===

To configure a Producer, you navigate to the tab called Configuration Details. At the bottom, there is a section called Producer.

[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-using-kafka-module-mendix--standard-producer.png]]

eMagiz automatically adds a producer to your server configuration, filled in with all the correct details, to make your life a little easier.

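To give you a feeling for what the pre-configured producer does, the following is a minimal sketch of producing a message with the plain Apache Kafka Java client. The topic name, key, and payload are placeholders, and the sketch reuses the connection properties from the sketch in section 3.2; the module wires all of this up for you.

{{code language="java"}}
// Minimal sketch of producing (publishing) one message with the plain
// Apache Kafka Java client. Topic, key, and payload are placeholders.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerSketch {

    public static void main(String[] args) {
        // Connection settings from the sketch in section 3.2.
        Properties props = KafkaSslConfigSketch.sslProperties();
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Write a single message to a pre-defined topic.
            producer.send(new ProducerRecord<>("example-topic", "example-key", "<message>payload</message>"));
        }
    }
}
{{/code}}
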
=== 3.4 Configure a Consumer ===

To configure a Consumer, you navigate to the tab called Consumers and press the New button.

[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-using-kafka-module-mendix--new-consumer.png]]

Fill in the relevant information on the General tab:

* Topic(s) from which the consumer will consume data
* Group ID for identification purposes while managing the cluster
* Reference to the on-receive microflow that will handle the incoming data

{{warning}}
If you cannot find the on-receive microflow, check that it is named correctly in your Mendix project.
{{/warning}}

[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-using-kafka-module-mendix--new-consumer-general-filled-in.png]]

Once again, the default configuration provided under the advanced tab works, so there is no need to change it.

Once you have done all this, press Save. As a result, you will see a new consumer on the Consumers tab.

[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-using-kafka-module-mendix--consumer-added.png]]

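For context, the sketch below shows roughly what such a consumer amounts to when written with the plain Apache Kafka Java client: it subscribes with a Group ID and hands each record to a handler, analogous to the on-receive microflow. All names are placeholders; the module handles this wiring for you.

{{code language="java"}}
// Minimal sketch of a consumer with the plain Apache Kafka Java client.
// handleRecord() plays the role of the on-receive microflow; names are placeholders.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerSketch {

    public static void main(String[] args) {
        // Connection settings from the sketch in section 3.2.
        Properties props = KafkaSslConfigSketch.sslProperties();
        props.put("group.id", "example-group"); // the Group ID configured above
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            while (true) {
                // Poll the topic and dispatch every new record to the handler.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    handleRecord(record);
                }
            }
        }
    }

    private static void handleRecord(ConsumerRecord<String, String> record) {
        System.out.println(record.topic() + " @" + record.offset() + ": " + record.value());
    }
}
{{/code}}
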
==== 3.4.1 Registering a Consumer ====

After you have configured the consumer, you will need to ensure that every time your application starts up, the consumer(s) are registered and listening for incoming data.

To do so, make sure that you retrieve the consumer(s) you have configured in the after-startup microflow of your project and start them one by one. This is easily achieved by calling the microflow AfterStartUpConsumersMicroflow, which you can find in the \_USE_ME folder of the Kafka Module, from your own after-startup microflow.

[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-using-kafka-module-mendix--consumer-registered.png]]

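Conceptually, this after-startup registration boils down to launching each configured consumer's poll loop on a background thread so it keeps listening while the application runs. The sketch below, with placeholder names and a plain Java executor, only illustrates that idea; within the module, AfterStartUpConsumersMicroflow arranges the equivalent for you.

{{code language="java"}}
// Conceptual sketch of starting all configured consumers after application startup.
// AfterStartUpConsumersMicroflow plays this role in the Kafka Module; names are placeholders.
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AfterStartupSketch {

    public static void startConsumers(List<Runnable> configuredConsumerLoops) {
        // One background thread per consumer, so each poll loop keeps listening for new data.
        ExecutorService executor = Executors.newFixedThreadPool(configuredConsumerLoops.size());
        for (Runnable consumerLoop : configuredConsumerLoops) {
            executor.submit(consumerLoop);
        }
    }
}
{{/code}}
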
==== 3.4.2 Multi Instance Configuration - Consumer Groups ====

In the consumer configuration seen above, you will find a specific setting that identifies the Group ID of this application. A Group ID identifies a consumer group. A consumer group can have more than one consumer, and the consumers in a group share the offsets of the messages consumed from a topic. When an application is duplicated for fail-over purposes, a shared Group ID ensures that each instance reads messages from the latest offset (so that messages don't get read twice). When consumers exist that need to read each message for a different usage or business process, you need to make sure that their Group IDs are different (each Group ID has its own offset). For an explanation of how consumer groups work, please see this [[fundamental>>doc:Main.eMagiz Academy.Fundamentals.fundamental-event-streaming-introduction||target="blank"]].

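To make the offset behavior concrete, here is a brief sketch with the plain Kafka client, reusing the connection properties sketch from section 3.2; the group and topic names are placeholders.

{{code language="java"}}
// Sketch of how Group IDs determine which messages a consumer reads; names are placeholders.
import java.util.Properties;

public class ConsumerGroupSketch {

    public static void main(String[] args) {
        // Instances sharing a Group ID form one consumer group: they share offsets,
        // so each message on the topic is read by only one instance (fail-over/scaling).
        Properties failover = KafkaSslConfigSketch.sslProperties();
        failover.put("group.id", "orders-app");

        // A different Group ID means a separate group with its own offsets:
        // this consumer receives every message again, for its own business process.
        Properties reporting = KafkaSslConfigSketch.sslProperties();
        reporting.put("group.id", "orders-reporting");
    }
}
{{/code}}
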
Congratulations, you have successfully configured Mendix to produce data to and consume data from topics registered on an external Kafka cluster.

== 4. Key takeaways ==

* The starting point is importing the correct (Marketplace) module within your project
* Make sure that your implementation contact has given you access to the catalog so you can retrieve the relevant information (bootstrap server, keystore, and truststore)
* The pre-configured settings the eMagiz Kafka Module provides you with don't need to be changed

== 5. Suggested Additional Readings ==

If you are interested in this topic and want more information on it, please see the following links:

* [[Kafka Explained>>https://www.cloudkarafka.com/blog/2016-11-30-part1-kafka-for-beginners-what-is-apache-kafka.html#:~:text=Apache%20Kafka%20is%20a%20publish||target="blank"]]
* [[Topic Configuration>>https://kafka.apache.org/documentation/#topicconfigs||target="blank"]]
* [[Kafka sizing>>https://medium.com/@tsureshkumar/sizing-kafka-capacity-needed-for-your-application-fdb6f24f67cd||target="blank"]]
* [[Choose Partition Numbers>>https://www.confluent.io/blog/how-choose-number-topics-partitions-kafka-cluster/||target="blank"]]
* [[Kafka Apache Introduction>>https://kafka.apache.org/intro||target="blank"]]

)))

(((
{{toc/}}
)))
{{/container}}
{{/container}}