Wiki source code of Connecting your application to our event streaming broker
Last modified by Erik Bakker on 2025/06/11 15:28
author | version | line-number | content |
---|---|---|---|
1 | {{container}} | ||
2 | {{container layoutStyle="columns"}} | ||
3 | ((( | ||
4 | In this microlearning, we will explore how you can connect your custom-built application to the event streaming broker that we provide. We will focus on the relevant configuration elements you must consider within your code. On top of that, we will provide some pseudocode illustrating a simple connection that should be used as an **illustration** and not as a **production-ready** solution. Note that this microlearning is **not** relevant for Mendix app developers. These developers should check out this [[microlearning>>doc:intermediate-event-streaming-connectors-mendix-as-producer||target="blank"]]. | ||
5 | |||
6 | Should you have any questions, please contact [[academy@emagiz.com>>mailto:academy@emagiz.com]]. | ||
7 | |||
8 | == 1. Prerequisites == | ||
9 | |||
10 | * Basic knowledge of the eMagiz platform | ||
11 | * Access to our Kafka cluster | ||
12 | * Event streaming license activated | ||
13 | |||
14 | == 2. Key concepts == | ||
15 | |||
16 | This microlearning centers around connecting to our event streaming broker from a high-code application. | ||
17 | |||
18 | * By connecting, in this context, we mean being able to produce or consume data on topics managed within our Kafka cluster. | ||
19 | * Knowing how to connect to our eMagiz Kafka cluster enables you to integrate with it rapidly. | ||
20 | |||
21 | == 3. Connecting your application to our event streaming broker == | ||
22 | |||
23 | Connecting your application requires several configuration steps. We start with the connection details, then cover message (de)serialization, and finally the additional requirements for consuming messages. | ||
24 | |||
25 | === 3.1 Connection details === | ||
26 | |||
27 | |||
28 | With our Kafka broker, we introduce the tenant concept. The tenant name is built up in the following way: {{code}}emagiz-[customer]-[model]-[environment]{{/code}}. Below, you can find the default connection settings that you will need to connect. Ask your eMagiz contact for the specific connection details applicable to your connection. | ||
29 | |||
30 | (% border="2" cellpadding="5" cellspacing="5" %) | ||
31 | |=Attribute|=Format|=Example|=Explanation | ||
32 | |Bootstrap servers|proxy-0.kafka.[tenant].kpn-dsh.com:9091, proxy-1.kafka.[tenant].kpn-dsh.com:9091, proxy-2.kafka.[tenant].kpn-dsh.com:9091|proxy-0.kafka.emagiz-mcrlng-test.poc.kpn-dsh.com:9091,proxy-1.kafka.emagiz-mcrlng-test.poc.kpn-dsh.com:9091,proxy-2.kafka.emagiz-mcrlng-test.poc.kpn-dsh.com:9091|((( | ||
33 | * Unlike a single, fixed server address, the bootstrap servers contain a variable part that differs per customer and environment. | ||
34 | * Three servers must be specified instead of one. Depending on the programming language and library used, they may need to be provided as a single comma-separated string or as an array. | ||
35 | ))) | ||
36 | |Security protocol|SSL|SSL|\\ | ||
37 | |SSL keystore certificate|PEM encoded certificate string|-BEGIN CERTIFICATE-…. |Previously provided as a JKS file; the certificate will be provided to you upon request. | ||
38 | |SSL keystore key|PKCS#8 PEM encoded string, RSA encrypted|-BEGIN ENCRYPTED PRIVATE KEY-…. |If you cannot handle the encrypted key or prefer it unencrypted, you can decrypt it using the CLI command {{code}}openssl pkcs8 -in [filename] -out [target-filename]{{/code}} | ||
39 | |SSL key passphrase|||Will be provided to you upon request | ||
40 | |||
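| To illustrate how these settings translate into client configuration, below is a minimal sketch in Java. It assumes the standard Apache Kafka Java client (version 2.7 or later, which accepts PEM keystore material directly as configuration strings); all values are placeholders, so use the connection details provided by your eMagiz contact and adapt the property names if you use a different client library. As with all examples in this microlearning, this is an illustration and not a production-ready solution. | ||
| |||
| {{code language="java"}} | ||
| import java.util.Properties; | ||
| |||
| import org.apache.kafka.clients.CommonClientConfigs; | ||
| import org.apache.kafka.common.config.SslConfigs; | ||
| |||
| /** Minimal sketch of the connection settings listed above; all values are placeholders. */ | ||
| public class EmagizKafkaConnection { | ||
| |||
|     public static Properties baseProperties() { | ||
|         Properties props = new Properties(); | ||
|         // Three bootstrap servers, here provided as a single comma-separated string. | ||
|         props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, | ||
|                 "proxy-0.kafka.[tenant].kpn-dsh.com:9091," | ||
|               + "proxy-1.kafka.[tenant].kpn-dsh.com:9091," | ||
|               + "proxy-2.kafka.[tenant].kpn-dsh.com:9091"); | ||
|         props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL"); | ||
|         // Kafka clients 2.7+ accept PEM-encoded keystore material directly. | ||
|         props.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM"); | ||
|         props.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, | ||
|                 "-----BEGIN CERTIFICATE-----\n..."); // certificate provided on request | ||
|         props.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, | ||
|                 "-----BEGIN ENCRYPTED PRIVATE KEY-----\n..."); // PKCS#8 key provided on request | ||
|         props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "passphrase-provided-on-request"); | ||
|         return props; | ||
|     } | ||
| } | ||
| {{/code}} | ||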
41 | Topic naming is of the utmost importance when connecting to our Event Streaming broker. Below, we outline the various naming conventions regarding topic naming. | ||
42 | |||
43 | |||
44 | (% border="2" cellpadding="5" cellspacing="5" %) | ||
45 | |=Type|=Format|=Example | ||
46 | |Topic name|{{code}}stream.emagiz---[tenant]-[topicname]{{/code}}|stream.emagiz~-~--emagiz-mcrlng-test.topic | ||
47 | |Produce to topic|{{code}}stream.emagiz---[tenant]-[topicname].[tenant]{{/code}}|stream.emagiz~-~--emagiz-mcrlng-test.topic.emagiz-mcrlng-test | ||
48 | |Consume from topic|{{code}}stream.emagiz---[tenant]-[topicname]..*{{/code}}|stream.emagiz~-~--emagiz-mcrlng-test.topic\..* | ||
49 | |||
50 | {{info}} | ||
51 | Note that the topic name needs a suffix, depending on whether you are producing or consuming to/from that topic. More details can be found below. | ||
52 | {{/info}} | ||
53 | |||
54 | ==== 3.1.1 Recommended consumer/producer settings ==== | ||
55 | |||
56 | {{info}} | ||
57 | As a general best practice, keep a connection open and use that to send or receive multiple messages instead of opening a new connection each time you wish to consume or produce a message. | ||
58 | {{/info}} | ||
59 | |||
60 | When producing many messages to a topic, or consuming from a topic that receives many messages, the settings used for the Kafka components strongly affect throughput. Based on our testing, we recommend using the following settings: | ||
61 | |||
62 | (% border="2" cellpadding="5" cellspacing="5" %) | ||
63 | |=Setting|=Type|=Value|=Explanation | ||
64 | |Batch size|Producer|200000|The maximum size (in bytes) of a batch of messages the producer collects before sending. This prevents every individual message from being sent on its own. | ||
65 | |Linger|Producer|300 ms|The time the producer waits for enough messages to fill a batch before sending it. | ||
66 | |Minimum fetch amount|Consumer|100000|The minimum amount of data (in bytes) the consumer waits for per fetch. When less data is available, it is returned once the fetch wait timeout has passed. | ||
67 | |||
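| As an illustration, the recommendations above map onto the standard Kafka client properties {{code}}batch.size{{/code}}, {{code}}linger.ms{{/code}}, and {{code}}fetch.min.bytes{{/code}}. The sketch below assumes the standard Java client; adapt the property names if your library exposes them differently. | ||
| |||
| {{code language="java"}} | ||
| import java.util.Properties; | ||
| |||
| import org.apache.kafka.clients.consumer.ConsumerConfig; | ||
| import org.apache.kafka.clients.producer.ProducerConfig; | ||
| |||
| /** Sketch applying the recommended throughput settings to producer and consumer properties. */ | ||
| public class ThroughputSettings { | ||
| |||
|     public static void apply(Properties producerProps, Properties consumerProps) { | ||
|         // Producer: collect messages into batches instead of sending each one on its own. | ||
|         producerProps.put(ProducerConfig.BATCH_SIZE_CONFIG, 200000); | ||
|         producerProps.put(ProducerConfig.LINGER_MS_CONFIG, 300); | ||
|         // Consumer: wait until enough data is available before completing a fetch. | ||
|         consumerProps.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 100000); | ||
|     } | ||
| } | ||
| {{/code}} | ||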
68 | === 3.2 Message (de)serialization === | ||
69 | |||
70 | Our event broker requires message serialization. This means that any message produced to the event broker needs to be serialized in the expected format, or other systems may fail to process it. Similarly, all consumers have to apply message deserialization. | ||
71 | |||
72 | eMagiz provides a [[protobuf>>url:https://protobuf.com/docs/introduction||target="blank"]] envelope that defines the required format. This file can be compiled into a language of your choice with the {{code}}protoc{{/code}} CLI tool. You can find the envelope [[here>>attach:Main.Images.Microlearning.WebHome@envelope.proto]]. | ||
73 | |||
74 | ==== 3.2.1 Producing messages ==== | ||
75 | |||
76 | When producing messages, the Kafka message key should be built up according to the envelope specified in the attached {{code}}.proto{{/code}} file. | ||
77 | |||
78 | As an example, the message key can be built up as follows: | ||
79 | |||
80 | (% border="2" cellpadding="5" cellspacing="5" %) | ||
81 | |=Field|=Value|=Explanation | ||
82 | |key|*|Free text that references something unique. | ||
83 | |header.identifier.tenant|//tenant name//|Required field, needs to point to the {{code}}[customer]-[model]-[environment]{{/code}} belonging to the topic you are connecting to. | ||
84 | |header.identifier.client|*|A descriptive client name representing your application. Used for traceability and support. For example, your order-processing app could use ‘order-app’. | ||
85 | |header.retained|False|Not used at this point, can default to false. | ||
86 | |header.qos|1 (or RELIABLE)|Quality of Service identifier, either 0 (best effort) or 1 (reliable). We recommend defaulting to 1. | ||
87 | |||
88 | The data envelope, which carries the actual data, is significantly more straightforward to set up: only the {{code}}payload{{/code}} attribute needs to be set with your payload. | ||
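| |||
| To make the key table concrete, the sketch below shows how the key and data envelopes could be built from the classes that {{code}}protoc{{/code}} generates from the attached {{code}}envelope.proto{{/code}}. The class and setter names used here ({{code}}KeyEnvelope{{/code}}, {{code}}DataEnvelope{{/code}}, {{code}}Header{{/code}}, {{code}}Identifier{{/code}}) are illustrative assumptions derived from the field paths in the table above; verify them against the code actually generated from the envelope. | ||
| |||
| {{code language="java"}} | ||
| // Illustrative sketch only: the class, message, and setter names below are assumptions | ||
| // derived from the field paths in the table above. Verify them against the code that | ||
| // protoc generates from envelope.proto. | ||
| import com.google.protobuf.ByteString; | ||
| |||
| public class EnvelopeExample { | ||
| |||
|     static KeyEnvelope buildKey(String key, String tenant, String client) { | ||
|         return KeyEnvelope.newBuilder() | ||
|                 .setKey(key) // free text that references something unique | ||
|                 .setHeader(Header.newBuilder() | ||
|                         .setIdentifier(Identifier.newBuilder() | ||
|                                 .setTenant(tenant) // [customer]-[model]-[environment] | ||
|                                 .setClient(client)) // descriptive client name, e.g. "order-app" | ||
|                         .setRetained(false) // not used at this point | ||
|                         .setQosValue(1)) // 1 = RELIABLE, the recommended default | ||
|                 .build(); | ||
|     } | ||
| |||
|     static DataEnvelope buildData(byte[] payload) { | ||
|         // Only the payload attribute needs to be set. | ||
|         return DataEnvelope.newBuilder() | ||
|                 .setPayload(ByteString.copyFrom(payload)) | ||
|                 .build(); | ||
|     } | ||
| } | ||
| {{/code}} | ||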
89 | |||
90 | //How to use the proto files// | ||
91 | |||
92 | * For Java applications, a [[library>>attach:Main.Images.Microlearning.WebHome@emagiz-components-kafka.jar]] is attached below. It contains the (de)serializers for both the key and data parts of messages (a configuration sketch follows at the end of this section): | ||
93 | * {{code}}com.emagiz.components.kafka.dsh.serialization.DataEnvelopeSerializer{{/code}} for data | ||
94 | * {{code}}com.emagiz.components.kafka.dsh.serialization.KeyEnvelopeSerializer{{/code}} for keys. Note that the KeyEnvelopeSerializer requires two config properties to be set: | ||
95 | ** {{code}}emagiz.dsh.kafka.tenant{{/code}} used to provision the {{code}}header.identifier.tenant{{/code}} field. | ||
96 | ** {{code}}emagiz.dsh.kafka.publisher{{/code}} used to provision the {{code}}header.identifier.client{{/code}} field to a value describing the producer. | ||
97 | * For example, to generate Python sources from the envelope (assuming the envelope is called {{code}}envelope.proto{{/code}} and is present in the working directory), you can execute the following: | ||
98 | ** {{code}}protoc --python_out=. envelope.proto{{/code}}, which will result in an {{code}}envelope_pb2.py{{/code}} file that you can import into your Python code. An example Python producer is attached to this microlearning. | ||
99 | |||
100 | {{info}} | ||
101 | Note that this [[Python script>>attach:Main.Images.Microlearning.WebHome@emagiz-kafka-broker-python.zip]] illustrates how a single message can be produced in the supported format and should **not be used** for **production applications** that produce a constant stream of messages. | ||
102 | {{/info}} | ||
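| |||
| To tie this together, below is a minimal Java producer sketch that uses the serializers and configuration properties from the attached library. It assumes, purely for illustration, that the serializers accept a plain String key and a byte[] payload and build the envelopes themselves; the key and payload values are made up, and the connection properties reuse the sketch from section 3.1. This is an illustration, not a production-ready solution. | ||
| |||
| {{code language="java"}} | ||
| import java.nio.charset.StandardCharsets; | ||
| import java.util.Properties; | ||
| |||
| import org.apache.kafka.clients.producer.KafkaProducer; | ||
| import org.apache.kafka.clients.producer.ProducerConfig; | ||
| import org.apache.kafka.clients.producer.ProducerRecord; | ||
| |||
| /** Minimal producer sketch; the key/value types are an assumption, see the note above. */ | ||
| public class ExampleProducer { | ||
| |||
|     public static void main(String[] args) { | ||
|         Properties props = EmagizKafkaConnection.baseProperties(); // connection sketch from section 3.1 | ||
|         props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, | ||
|                 "com.emagiz.components.kafka.dsh.serialization.KeyEnvelopeSerializer"); | ||
|         props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, | ||
|                 "com.emagiz.components.kafka.dsh.serialization.DataEnvelopeSerializer"); | ||
|         // Required by the KeyEnvelopeSerializer to provision the key envelope header. | ||
|         props.put("emagiz.dsh.kafka.tenant", "emagiz-mcrlng-test"); | ||
|         props.put("emagiz.dsh.kafka.publisher", "order-app"); | ||
| |||
|         // Keep the producer open and reuse it for multiple messages. | ||
|         try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) { | ||
|             // Use the "produce to" name from the topic naming table in section 3.1. | ||
|             String topic = "stream.emagiz---[tenant]-[topicname].[tenant]"; | ||
|             producer.send(new ProducerRecord<>(topic, "order-123", | ||
|                     "<order>...</order>".getBytes(StandardCharsets.UTF_8))); | ||
|             producer.flush(); | ||
|         } | ||
|     } | ||
| } | ||
| {{/code}} | ||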
103 | |||
104 | ==== 3.2.2 Consuming messages ==== | ||
105 | |||
106 | Similar to producing, the consuming side must also be able to deserialize the messages on the topic. For this, a deserializer should be set. When using Java applications, the following deserializers can be used from the provided [[library>>attach:Main.Images.Microlearning.WebHome@emagiz-components-kafka.jar]]: | ||
107 | |||
108 | * {{code}}com.emagiz.components.kafka.dsh.serialization.DataEnvelopeDeserializer{{/code}} for data | ||
109 | * {{code}}com.emagiz.components.kafka.dsh.serialization.KeyEnvelopeDeserializer{{/code}} for keys | ||
110 | |||
111 | For other languages, refer to the section on producing messages above for general instructions on compiling the provided proto file into deserialization code. | ||
112 | |||
113 | === 3.3 Other requirements for consuming messages === | ||
114 | |||
115 | For your consumer to be authorized to access the topic, a naming convention for the consumer group ID is enforced. To connect, the consumer group ID must start with {{code}}[tenant].[username].{{/code}}. For example, {{code}}emagiz-mcrlng-test.user.[free-text]_0{{/code}} is a valid consumer group ID. | ||
116 | |||
117 | Furthermore, in contrast to how topics are currently created, topics may receive multiple suffixes, depending on the data source. Therefore, we suggest consuming from a topic pattern rather than a fixed list of names. For example, when connecting to the sample topic ‘consume’ shown below, we recommend subscribing to the pattern {{code}}stream.emagiz---emagiz-mcrlng-test-consume.*{{/code}} instead of the fully qualified name {{code}}stream.emagiz---emagiz-mcrlng-test-consume.emagiz-mcrlng-test{{/code}}. If that is not possible for your application, the fully qualified topic name can be used for now. | ||
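| |||
| As a sketch of the consuming side, the example below combines the deserializers from section 3.2.2, a consumer group ID that follows the convention above (with illustrative free text), and a subscription on the recommended topic pattern. It assumes the standard Java client and, again purely for illustration, that the deserialized records carry a String key and a byte[] payload; check the actual types against the attached library. This is an illustration, not a production-ready solution. | ||
| |||
| {{code language="java"}} | ||
| import java.time.Duration; | ||
| import java.util.Properties; | ||
| import java.util.regex.Pattern; | ||
| |||
| import org.apache.kafka.clients.consumer.ConsumerConfig; | ||
| import org.apache.kafka.clients.consumer.ConsumerRecord; | ||
| import org.apache.kafka.clients.consumer.ConsumerRecords; | ||
| import org.apache.kafka.clients.consumer.KafkaConsumer; | ||
| |||
| /** Minimal consumer sketch; the record types are an assumption, see the note above. */ | ||
| public class ExampleConsumer { | ||
| |||
|     public static void main(String[] args) { | ||
|         Properties props = EmagizKafkaConnection.baseProperties(); // connection sketch from section 3.1 | ||
|         props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, | ||
|                 "com.emagiz.components.kafka.dsh.serialization.KeyEnvelopeDeserializer"); | ||
|         props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, | ||
|                 "com.emagiz.components.kafka.dsh.serialization.DataEnvelopeDeserializer"); | ||
|         // The consumer group ID must start with [tenant].[username]. | ||
|         props.put(ConsumerConfig.GROUP_ID_CONFIG, "emagiz-mcrlng-test.user.example-consumer_0"); | ||
| |||
|         try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) { | ||
|             // Subscribe on the recommended pattern rather than a fully qualified topic name. | ||
|             consumer.subscribe(Pattern.compile("stream.emagiz---emagiz-mcrlng-test-consume.*")); | ||
|             while (true) { | ||
|                 ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofSeconds(1)); | ||
|                 for (ConsumerRecord<String, byte[]> rec : records) { | ||
|                     System.out.println(rec.topic() + ": " + rec.value().length + " bytes"); | ||
|                 } | ||
|             } | ||
|         } | ||
|     } | ||
| } | ||
| {{/code}} | ||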
118 | |||
119 | == 4. Key takeaways == | ||
120 | |||
121 | * Ask your eMagiz contact for the relevant specific connection details applicable to your connection. | ||
122 | * Topic naming is paramount when connecting to our Event Streaming broker. | ||
123 | * As a general best practice, keep a connection open and use that to send or receive multiple messages instead of opening a new connection each time you wish to consume or produce a message. | ||
124 | * Our event broker requires message serialization. | ||
125 | ** This requires a protobuf envelope. | ||
126 | * For your consumer to be authorized to access the topic, a naming convention for the consumer group ID is enforced. | ||
127 | |||
128 | == 5. Suggested Additional Readings == | ||
129 | |||
130 | If you are interested in this topic and want more information on it, please see the following links: | ||
131 | |||
132 | * [[Java library>>attach:Main.Images.Microlearning.WebHome@emagiz-components-kafka.jar]] | ||
133 | * [[Envelope example>>attach:Main.Images.Microlearning.WebHome@envelope.proto]] | ||
134 | * [[Python script>>attach:Main.Images.Microlearning.WebHome@emagiz-kafka-broker-python.zip]] | ||
135 | * [[Crash Courses (Menu)>>doc:Main.eMagiz Academy.Microlearnings.Crash Course.WebHome||target="blank"]] | ||
136 | ** [[Crash Course Platform (Navigation)>>doc:Main.eMagiz Academy.Microlearnings.Crash Course.Crash Course Platform.WebHome||target="blank"]] | ||
137 | ** [[Crash Course Messaging (Navigation)>>doc:Main.eMagiz Academy.Microlearnings.Crash Course.Crash Course Messaging.WebHome||target="blank"]] | ||
138 | ** [[Crash Course Event Streaming (Navigation)>>doc:Main.eMagiz Academy.Microlearnings.Crash Course.Crash Course Event Streaming.WebHome||target="blank"]] | ||
139 | * [[Intermediate Level (Menu)>>doc:Main.eMagiz Academy.Microlearnings.Intermediate Level.WebHome||target="blank"]] | ||
140 | ** [[Configuring Event Streaming (Navigation)>>doc:Main.eMagiz Academy.Microlearnings.Intermediate Level.Configuring Event Streaming.WebHome||target="blank"]] | ||
141 | * [[Connecting Your Application (Search Results)>>url:https://docs.emagiz.com/bin/view/Main/Search?text=%22connecting%20your%20application%22&f_type=DOCUMENT&f_space_facet=0%2FMain.&f_locale=en&f_locale=&f_locale=en&r=1||target="blank"]] | ||
142 | ))) | ||
143 | |||
144 | ((( | ||
145 | {{toc/}} | ||
146 | ))) | ||
147 | {{/container}} | ||
148 | {{/container}} |