Search: "support object"
Using eMagiz as a consumer
Secondly, we need a support object that will set up the connection between the eMagiz flow and the eMagiz Event Streaming cluster that hosts the topic.
…If you have done all of this, it should look like this: [[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-emagiz-as-consumer--create-phase-kafka-listener-basic-filled-in.png]] Now that we have filled in the basics for the support object, we can fill in the details for our Kafka message-driven channel adapter.
…[[image:Main.Images.Microlearning.WebHome@intermediate-event-streaming-connectors-emagiz-as-consumer--create-phase-kafka-flow-messaging-ssl-resources.png]] Now that we have added these resources to the flow, we can navigate back to the support object, open the SSL tab, and fill it in accordingly.
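eMagiz builds its event streaming connectivity on the standard Apache Kafka consumer client, so the fields on this support object correspond roughly to plain consumer properties. Below is a minimal sketch of that correspondence; the broker address, group id, topic name, and keystore/truststore paths are all hypothetical placeholders.

{{code language="java"}}
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TopicListenerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Basics tab: bootstrap server, group id, and deserializers
        // (all values here are hypothetical placeholders).
        props.put("bootstrap.servers", "cluster.example.com:9093");
        props.put("group.id", "example-consumer-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // SSL tab: client authentication against the event streaming cluster.
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/path/to/truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("ssl.keystore.location", "/path/to/keystore.jks");
        props.put("ssl.keystore.password", "changeit");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic"));
            while (true) {
                // Poll the topic, as the Kafka message-driven channel adapter does.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
{{/code}}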
Listening for data on a custom queue
Note that when no bus-connection-plain support object is available within the context of your flow, you can copy and paste this component from another flow in which it is available.
…Key takeaways ==
* A JMS message-driven channel adapter within your flow that listens for data on the 'custom' queue
* The fully qualified name of the 'custom' queue on which to listen
* When the support object named bus-connection-plain is missing, please add it via a copy+paste action
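Under the hood, a JMS message-driven channel adapter behaves like a standard JMS message listener attached to a queue. The sketch below illustrates the pattern with plain JMS and an ActiveMQ connection factory standing in for the bus-connection-plain support object; the broker URL, credentials, and queue name are hypothetical.

{{code language="java"}}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class CustomQueueListenerSketch {
    public static void main(String[] args) throws Exception {
        // The bus-connection-plain support object plays the role of this
        // connection factory; broker URL and credentials are hypothetical.
        ConnectionFactory factory = new ActiveMQConnectionFactory("ssl://bus.example.com:61617");
        Connection connection = factory.createConnection("user", "password");
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // The fully qualified name of the 'custom' queue to listen on.
        Queue queue = session.createQueue("example.custom.queue");
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> {
            if (message instanceof TextMessage text) {
                try {
                    System.out.println("Received: " + text.getText());
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            }
        });
        connection.start(); // start message delivery to the listener
    }
}
{{/code}}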
FTP Connectivity
After you have done so, we first add the support object to our flow. In this case, we will use the Default FTP session factory.
…Now that we have configured the support object, it is time to add the FTP inbound channel adapter to the flow.
…Furthermore, we need to link the support object we have just created and define a poller.
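eMagiz flows are generated on top of Spring Integration, and the "Default FTP session factory" matches Spring Integration's DefaultFtpSessionFactory. A minimal sketch of that pairing is shown below; the host, credentials, and directories are hypothetical, and in a deployed flow eMagiz generates the poller and lifecycle wiring for you.

{{code language="java"}}
import java.io.File;
import org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizer;
import org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizingMessageSource;
import org.springframework.integration.ftp.session.DefaultFtpSessionFactory;

public class FtpInboundSketch {
    public static void main(String[] args) {
        // Support object: the Default FTP session factory.
        DefaultFtpSessionFactory sessionFactory = new DefaultFtpSessionFactory();
        sessionFactory.setHost("ftp.example.com"); // hypothetical host
        sessionFactory.setPort(21);
        sessionFactory.setUsername("user");
        sessionFactory.setPassword("password");

        // FTP inbound channel adapter: synchronizes a remote directory locally.
        FtpInboundFileSynchronizer synchronizer = new FtpInboundFileSynchronizer(sessionFactory);
        synchronizer.setRemoteDirectory("/inbound"); // hypothetical remote directory

        FtpInboundFileSynchronizingMessageSource source =
                new FtpInboundFileSynchronizingMessageSource(synchronizer);
        source.setLocalDirectory(new File("local-inbound"));
        source.setAutoCreateLocalDirectory(true);

        // In a deployed flow, eMagiz generates a poller that initializes this
        // message source and invokes source.receive() on a fixed interval.
        System.out.println("FTP inbound message source configured: " + source);
    }
}
{{/code}}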
Aggregation
To manage this storage, you can select a Message Store support object if you have created one before, or leave it empty, in which case an in-memory store is used that may lose data when the runtime shuts down or restarts.
…Therefore, you will need to set up these support objects as well if you have not done so already:
* Infinispan cache manager
* Infinispan message store

Once you have done so, you can set the "Simple cache" option in your message store to "no" and then set the "Persistent" option to "yes". For more information on configuring these support objects and understanding their settings, please refer to this [[State Persistence>>doc:Main.eMagiz Academy.Microlearnings.Intermediate Level.State Generation.intermediate-state-persistence||target="blank"]] microlearning.
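In Spring Integration terms, the empty default maps to the in-memory SimpleMessageStore, which is exactly why data is lost on a restart; a persistent store implements the same interface, so the aggregator itself does not change. A small sketch, with illustrative group ids and payloads:

{{code language="java"}}
import org.springframework.integration.store.MessageGroup;
import org.springframework.integration.store.MessageGroupStore;
import org.springframework.integration.store.SimpleMessageStore;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;

public class MessageStoreSketch {
    public static void main(String[] args) {
        // Default when no support object is selected: in-memory storage.
        // Everything in it disappears when the runtime stops or restarts.
        MessageGroupStore store = new SimpleMessageStore();

        Message<String> part = MessageBuilder.withPayload("order-line-1")
                .setHeader("correlationId", "order-42") // illustrative correlation
                .build();
        store.addMessagesToGroup("order-42", part);

        MessageGroup group = store.getMessageGroup("order-42");
        System.out.println("Messages waiting for aggregation: " + group.size());

        // A persistent alternative (such as the Infinispan-backed message store
        // configured via the support objects above) implements the same
        // MessageGroupStore interface, so the aggregation logic is unchanged.
    }
}
{{/code}}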
Change Detection
For such a purpose, eMagiz provides two support objects to set up this storage mechanism and one flow component to store the states into it:
* Infinispan cache manager
* Infinispan metadata store
* Metadata outbound channel adapter

Please refer to this [[State Persistence>>doc:Main.eMagiz Academy.Microlearnings.Intermediate Level.State Generation.intermediate-state-persistence||target="blank"]] microlearning to learn how to configure these support objects and the flow component.

=== 3.2 Stateful Configuration to Detect Changes ===

Once you have configured the storage mechanism, you can set up the component that retrieves past states and detects changes in incoming messages by comparing them to those states.
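The pattern behind this is a simple key/value comparison: look up the stored state for a key, compare, and store the new state when it differs. The sketch below shows it with Spring Integration's in-memory SimpleMetadataStore; the persistent Infinispan variant implements the same MetadataStore interface, and the key and hash choices here are purely illustrative.

{{code language="java"}}
import java.util.Objects;
import org.springframework.integration.metadata.MetadataStore;
import org.springframework.integration.metadata.SimpleMetadataStore;

public class ChangeDetectionSketch {
    private final MetadataStore store = new SimpleMetadataStore();

    /** Returns true when the message differs from the last stored state for this key. */
    public boolean hasChanged(String businessKey, String payload) {
        String newState = Integer.toString(payload.hashCode()); // illustrative hash
        String pastState = store.get(businessKey);
        if (Objects.equals(pastState, newState)) {
            return false; // unchanged: incoming message matches the past state
        }
        store.put(businessKey, newState); // remember the new state for next time
        return true;
    }

    public static void main(String[] args) {
        ChangeDetectionSketch detector = new ChangeDetectionSketch();
        System.out.println(detector.hasChanged("customer-7", "<status>active</status>"));  // true
        System.out.println(detector.hasChanged("customer-7", "<status>active</status>"));  // false
        System.out.println(detector.hasChanged("customer-7", "<status>blocked</status>")); // true
    }
}
{{/code}}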
Send emails
To send a mail, we need at least the support object called "Java mail sender" and the outbound channel adapter called "Mail outbound channel adapter" from the list below.
…[[image:Main.Images.Microlearning.WebHome@advanced-mail-connectivity-using-mime-transform-xml-to-mime.png]] Note that before you can correctly configure your "XML to MIME transformer," you first need to define your "Java mail sender" support object. Here, you need to fill in, at a minimum, a reference to the host and the port of the mail server to which you want to connect.
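The "Java mail sender" support object corresponds to Spring's JavaMailSenderImpl, for which host and port are indeed the minimal settings. A sketch, with a hypothetical mail server and addresses (recent Spring versions use the jakarta.mail API shown here):

{{code language="java"}}
import jakarta.mail.internet.MimeMessage;
import org.springframework.mail.javamail.JavaMailSenderImpl;
import org.springframework.mail.javamail.MimeMessageHelper;

public class MailSenderSketch {
    public static void main(String[] args) throws Exception {
        // Support object: at minimum, the mail server host and port.
        JavaMailSenderImpl sender = new JavaMailSenderImpl();
        sender.setHost("smtp.example.com"); // hypothetical mail server
        sender.setPort(587);

        // The MIME message the "Mail outbound channel adapter" would send.
        MimeMessage message = sender.createMimeMessage();
        MimeMessageHelper helper = new MimeMessageHelper(message, true); // true = multipart
        helper.setFrom("noreply@example.com");
        helper.setTo("recipient@example.com");
        helper.setSubject("Example message");
        helper.setText("Hello from the flow.");

        sender.send(message);
    }
}
{{/code}}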
Data pipeline - Mendix to SFTP
All of them are pre-filled for you.
* Next to that, you have all the support objects needed to run the flow.
* One down, we have the job launch configuration.
…It is up to the user what this point in time is.
* In the bottom center, we have some specific support objects that are relevant for this particular data pipeline implementation.
* Last but not least, in the bottom right-hand corner, we have the functionality that automatically cleans up the job dashboard.
Archiving
You can define this by dragging a format file name generator (support object) to the canvas. [[image:Main.Images.Microlearning.WebHome@novice-file-based-connectivity-archiving--file-name-generator.png]] After we have done this, please add a file outbound channel adapter to the flow, including an input channel.
…We start with a composite file filter (support object). Within this filter, we define at least how old a file must be (in milliseconds) before it can be deleted.
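In the Spring Integration layer underneath, such a composite filter resembles a CompositeFileListFilter that combines an age check (LastModifiedFileListFilter) with other filters. A sketch with a hypothetical one-day age and file pattern; note that this Spring filter counts age in seconds, while the eMagiz component asks for milliseconds.

{{code language="java"}}
import java.io.File;
import java.util.List;
import org.springframework.integration.file.filters.CompositeFileListFilter;
import org.springframework.integration.file.filters.LastModifiedFileListFilter;
import org.springframework.integration.file.filters.SimplePatternFileListFilter;

public class ArchiveCleanupFilterSketch {
    public static void main(String[] args) {
        // Only accept files older than one day (this filter counts in seconds,
        // whereas the eMagiz component asks for milliseconds).
        LastModifiedFileListFilter ageFilter = new LastModifiedFileListFilter();
        ageFilter.setAge(24 * 60 * 60);

        // Composite filter: age AND name pattern must both match.
        CompositeFileListFilter<File> composite = new CompositeFileListFilter<>();
        composite.addFilter(ageFilter);
        composite.addFilter(new SimplePatternFileListFilter("*.xml")); // hypothetical pattern

        File[] candidates = new File("archive").listFiles(); // hypothetical archive dir
        if (candidates != null) {
            List<File> deletable = composite.filterFiles(candidates);
            deletable.forEach(f -> System.out.println("Would delete: " + f.getName()));
        }
    }
}
{{/code}}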
SFTP Connectivity
After you have done so, we first add the support object to our flow. In this case, we will use the Default SFTP caching session factory.
…Now that we have configured the support object, it is time to add the SFTP outbound channel adapter to the flow.
…Furthermore, we need to link the support object we have just created and decide whether to auto-create the directory.
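The "Default SFTP caching session factory" corresponds to Spring Integration's DefaultSftpSessionFactory wrapped in a CachingSessionFactory, which reuses connections instead of opening one per message. A sketch, with hypothetical host and credentials:

{{code language="java"}}
import org.springframework.integration.file.remote.session.CachingSessionFactory;
import org.springframework.integration.sftp.session.DefaultSftpSessionFactory;

public class SftpSessionFactorySketch {
    public static void main(String[] args) {
        // Support object: the underlying SFTP connection settings.
        DefaultSftpSessionFactory sessionFactory = new DefaultSftpSessionFactory();
        sessionFactory.setHost("sftp.example.com"); // hypothetical host
        sessionFactory.setPort(22);
        sessionFactory.setUser("user");
        sessionFactory.setPassword("password");
        sessionFactory.setAllowUnknownKeys(true); // or configure known_hosts instead

        // The "caching" part: sessions are pooled and reused across messages.
        var cachingFactory = new CachingSessionFactory<>(sessionFactory);

        // The SFTP outbound channel adapter would use this factory to write
        // files to the remote directory, auto-creating it when configured.
        System.out.println("Caching session factory ready: " + cachingFactory);
    }
}
{{/code}}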
migration-path-job-dashboard-cleanup
To make sure that your existing data pipeline will function in the same way, you should execute the following steps:
* Add a support object called top level poller and configure it as follows [[image:Main.Images.Migrationpath.WebHome@migration-path-job-dashboard-cleanup--migration-path-job-dashboard-cleanup-top-level-poller-config.png]]
* Add a channel called clean
* Add a standard inbound channel adapter called clean.cron and configure it as follows (as you can see, it cleans the job dashboard every day at five in the morning) [[image:Main.Images.Migrationpath.WebHome@migration-path-job-dashboard-cleanup--migration-path-job-dashboard-cleanup-clean-cron-config.png]]
* Add a standard inbound channel adapter called startup.cron and configure it as follows (it cleans the job dashboard on startup) [[image:Main.Images.Migrationpath.WebHome@migration-path-job-dashboard-cleanup--migration-path-job-dashboard-cleanup-startup-cron-config.png]]
* Add a JDBC outbound channel adapter to your flow
* Use the clean channel as input
* Link it to the h2 database that is in your flow
* Enter the query that you can find below [[image:Main.Images.Migrationpath.WebHome@migration-path-aws-redshift-refresh--migration-path-job-dashboard-cleanup-result-part-one.png]]

=== 3.2 Query you need for cleanup ===

The following query is needed to clean up all relevant parts of the job dashboard to ensure that only the last month's jobs are still visible.
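The two cron adapters translate to ordinary cron triggers (daily at 05:00, plus a run at startup), and the JDBC outbound channel adapter simply executes the cleanup statement against the embedded h2 database. The sketch below shows only that scheduling shape; the DELETE statement is a deliberately simplified placeholder, not the actual query from this migration path, and the JDBC URL and table name are hypothetical.

{{code language="java"}}
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.scheduling.support.CronTrigger;

public class JobDashboardCleanupSketch {
    public static void main(String[] args) {
        // The h2 database referenced by the JDBC outbound channel adapter
        // (URL and credentials are hypothetical).
        DriverManagerDataSource dataSource = new DriverManagerDataSource(
                "jdbc:h2:./data/jobdashboard", "sa", "");
        JdbcTemplate jdbc = new JdbcTemplate(dataSource);

        Runnable clean = () -> {
            // Placeholder statement only; the actual cleanup query comes from
            // this migration path and is not reproduced here.
            int removed = jdbc.update(
                    "DELETE FROM EXAMPLE_TABLE WHERE created < DATEADD('MONTH', -1, NOW())");
            System.out.println("Rows cleaned: " + removed);
        };

        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.initialize();

        // clean.cron: every day at five in the morning.
        scheduler.schedule(clean, new CronTrigger("0 0 5 * * ?"));
        // startup.cron equivalent: run once immediately on startup.
        clean.run();
    }
}
{{/code}}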