├── README.adoc
├── apps
│   ├── log-consumer
│   │   ├── .gitignore
│   │   ├── .mvn
│   │   │   └── wrapper
│   │   │       ├── maven-wrapper.jar
│   │   │       └── maven-wrapper.properties
│   │   ├── mvnw
│   │   ├── mvnw.cmd
│   │   ├── pom.xml
│   │   └── src
│   │       └── main
│   │           ├── java
│   │           │   └── com
│   │           │       └── redhat
│   │           │           └── workshop
│   │           │               └── amqstreams
│   │           │                   └── logconsumer
│   │           │                       └── LogConsumerApplication.java
│   │           └── resources
│   │               └── application.properties
│   └── timer-producer
│       ├── .gitignore
│       ├── .mvn
│       │   └── wrapper
│       │       ├── maven-wrapper.jar
│       │       └── maven-wrapper.properties
│       ├── mvnw
│       ├── mvnw.cmd
│       ├── pom.xml
│       └── src
│           └── main
│               ├── java
│               │   └── com
│               │       └── redhat
│               │           └── workshop
│               │               └── amqstreams
│               │                   └── timerproducer
│               │                       └── TimerProducerApplication.java
│               └── resources
│                   └── application.properties
├── bin
│   ├── log-consumer.jar
│   └── timer-producer.jar
├── configurations
│   ├── applications
│   │   ├── log-consumer-secured.yaml
│   │   ├── log-consumer-target.yaml
│   │   ├── log-consumer-team-1.yaml
│   │   ├── log-consumer-team-2.yaml
│   │   ├── log-consumer.yaml
│   │   ├── timer-producer-secured.yaml
│   │   ├── timer-producer-team-1.yaml
│   │   └── timer-producer.yaml
│   ├── clusters
│   │   ├── kafka-connect.yaml
│   │   ├── mirror-maker-single-namespace.yaml
│   │   ├── mirror-maker.yaml
│   │   ├── production-ready-5-nodes.yaml
│   │   ├── production-ready-external-routes.yaml
│   │   ├── production-ready-monitored.yaml
│   │   ├── production-ready-secured.yaml
│   │   ├── production-ready-target.yaml
│   │   ├── production-ready.yaml
│   │   ├── simple-cluster-affinity.yaml
│   │   └── simple-cluster.yaml
│   ├── metrics
│   │   ├── grafana.yaml
│   │   └── prometheus.yaml
│   ├── topics
│   │   ├── lines-10-target.yaml
│   │   ├── lines-10.yaml
│   │   └── lines.yaml
│   └── users
│       ├── secure-topic-reader.yaml
│       ├── secure-topic-writer.yaml
│       └── strimzi-admin.yaml
├── labs
│   ├── 0-to-60.adoc
│   ├── README.adoc
│   ├── clients-within-outside-OCP.adoc
│   ├── environment.adoc
│   ├── kafka-connect.adoc
│   ├── management-monitoring.adoc
│   ├── mirror-maker-single-namespace.adoc
│   ├── mirror-maker.adoc
│   ├── production-ready-topologies.adoc
│   ├── security.adoc
│   ├── topic-management.adoc
│   ├── understanding-the-application-ecosystem.adoc
│   ├── watching-multiple-namespaces-short-1.1.adoc
│   └── watching-multiple-namespaces.adoc
├── rhsummit2019-full
│   ├── README.adoc
│   └── environment.adoc
├── rhsummit2019
│   ├── README.adoc
│   └── environment.adoc
└── slides
    └── README.adoc
/README.adoc:
--------------------------------------------------------------------------------
1 | = Red Hat Integration: AMQ Streams Workshop
2 |
3 | === Introduction
4 |
5 | link:https://www.redhat.com/en/topics/integration/what-is-apache-kafka[Apache Kafka] has become the leading platform for building real-time data pipelines. Today, Kafka is heavily used for developing event-driven applications, where it lets services communicate with each other through events. Using Kubernetes for this type of workload requires adding specialized components such as Kubernetes Operators and connectors to bridge the rest of your systems and applications to the Kafka ecosystem.
6 |
7 | To respond to business demands quickly and efficiently, you need a way to integrate applications and data spread across your enterprise. link:https://www.redhat.com/en/technologies/jboss-middleware/amq[Red Hat AMQ] — based on open source communities like Apache ActiveMQ and Apache Kafka — is a flexible messaging platform that delivers information reliably, enabling real-time integration and connecting the Internet of Things (IoT).
8 |
9 | image::https://access.redhat.com/webassets/avalon/d/Red_Hat_AMQ-7.7-Evaluating_AMQ_Streams_on_OpenShift-en-US/images/320e68d6e4b4080e7469bea094ec8fbf/operators.png[Operators within the AMQ Streams architecture]
10 |
11 | AMQ Streams, a link:https://www.redhat.com/en/products/integration[Red Hat Integration] component, makes Apache Kafka “OpenShift native” through the use of powerful operators that simplify the deployment, configuration, management, and use of Apache Kafka on Red Hat OpenShift.
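
The operators work from declarative custom resources. As a rough sketch of the kind of resource the Cluster Operator consumes (illustrative only — the workshop's actual definitions live under `configurations/clusters/`, and field names may differ between AMQ Streams versions):

[source,yaml]
----
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: simple-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
----

Applying a resource like this (for example with `oc apply -f simple-cluster.yaml`) is what triggers the operator to create the corresponding Kafka and ZooKeeper pods and services.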
12 |
13 | === Audience
14 |
15 | - Developers
16 | - Architects
17 | - Data Integrators
18 |
19 | === Duration
20 |
21 | This workshop introduces participants to AMQ Streams through a combination of presentations and hands-on labs. It is intended to be completed in a half-day (4-hour) session.
22 |
23 | === Agenda
24 |
25 | === Labs
26 |
27 | The workshop presents a set of hands-on labs that let participants explore the features of AMQ Streams. The following labs cover the main aspects of the product.
28 |
29 | - link:labs/0-to-60.adoc[Lab 01] - AMQ Streams from 0 to 60
30 |
31 | === Contributing
32 |
33 | We welcome all forms of contribution (content, issues/bugs, feedback).
34 |
35 | === Support and ownership
36 |
37 | If you have any questions or need support, reach out to link:https://github.com/hguerrero[Hugo Guerrero].
38 |
39 |
--------------------------------------------------------------------------------
/apps/log-consumer/.gitignore:
--------------------------------------------------------------------------------
1 | /target/
2 | !.mvn/wrapper/maven-wrapper.jar
3 |
4 | ### STS ###
5 | .apt_generated
6 | .classpath
7 | .factorypath
8 | .project
9 | .settings
10 | .springBeans
11 | .sts4-cache
12 |
13 | ### IntelliJ IDEA ###
14 | .idea
15 | *.iws
16 | *.iml
17 | *.ipr
18 |
19 | ### NetBeans ###
20 | /nbproject/private/
21 | /nbbuild/
22 | /dist/
23 | /nbdist/
24 | /.nb-gradle/
25 | /build/
26 |
--------------------------------------------------------------------------------
/apps/log-consumer/.mvn/wrapper/maven-wrapper.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedHat-Middleware-Workshops/workshop-amq-streams/e2c0a0e8e8b1be5b3d57e73a490f5e5640b90486/apps/log-consumer/.mvn/wrapper/maven-wrapper.jar
--------------------------------------------------------------------------------
/apps/log-consumer/.mvn/wrapper/maven-wrapper.properties:
--------------------------------------------------------------------------------
1 | distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.5.4/apache-maven-3.5.4-bin.zip
2 |
--------------------------------------------------------------------------------
/apps/log-consumer/mvnw:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | # ----------------------------------------------------------------------------
3 | # Licensed to the Apache Software Foundation (ASF) under one
4 | # or more contributor license agreements. See the NOTICE file
5 | # distributed with this work for additional information
6 | # regarding copyright ownership. The ASF licenses this file
7 | # to you under the Apache License, Version 2.0 (the
8 | # "License"); you may not use this file except in compliance
9 | # with the License. You may obtain a copy of the License at
10 | #
11 | # http://www.apache.org/licenses/LICENSE-2.0
12 | #
13 | # Unless required by applicable law or agreed to in writing,
14 | # software distributed under the License is distributed on an
15 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
16 | # KIND, either express or implied. See the License for the
17 | # specific language governing permissions and limitations
18 | # under the License.
19 | # ----------------------------------------------------------------------------
20 |
21 | # ----------------------------------------------------------------------------
22 | # Maven2 Start Up Batch script
23 | #
24 | # Required ENV vars:
25 | # ------------------
26 | # JAVA_HOME - location of a JDK home dir
27 | #
28 | # Optional ENV vars
29 | # -----------------
30 | # M2_HOME - location of maven2's installed home dir
31 | # MAVEN_OPTS - parameters passed to the Java VM when running Maven
32 | # e.g. to debug Maven itself, use
33 | # set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
34 | # MAVEN_SKIP_RC - flag to disable loading of mavenrc files
35 | # ----------------------------------------------------------------------------
36 |
37 | if [ -z "$MAVEN_SKIP_RC" ] ; then
38 |
39 | if [ -f /etc/mavenrc ] ; then
40 | . /etc/mavenrc
41 | fi
42 |
43 | if [ -f "$HOME/.mavenrc" ] ; then
44 | . "$HOME/.mavenrc"
45 | fi
46 |
47 | fi
48 |
49 | # OS specific support. $var _must_ be set to either true or false.
50 | cygwin=false;
51 | darwin=false;
52 | mingw=false
53 | case "`uname`" in
54 | CYGWIN*) cygwin=true ;;
55 | MINGW*) mingw=true;;
56 | Darwin*) darwin=true
57 | # Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home
58 | # See https://developer.apple.com/library/mac/qa/qa1170/_index.html
59 | if [ -z "$JAVA_HOME" ]; then
60 | if [ -x "/usr/libexec/java_home" ]; then
61 | export JAVA_HOME="`/usr/libexec/java_home`"
62 | else
63 | export JAVA_HOME="/Library/Java/Home"
64 | fi
65 | fi
66 | ;;
67 | esac
68 |
69 | if [ -z "$JAVA_HOME" ] ; then
70 | if [ -r /etc/gentoo-release ] ; then
71 | JAVA_HOME=`java-config --jre-home`
72 | fi
73 | fi
74 |
75 | if [ -z "$M2_HOME" ] ; then
76 | ## resolve links - $0 may be a link to maven's home
77 | PRG="$0"
78 |
79 | # need this for relative symlinks
80 | while [ -h "$PRG" ] ; do
81 | ls=`ls -ld "$PRG"`
82 | link=`expr "$ls" : '.*-> \(.*\)$'`
83 | if expr "$link" : '/.*' > /dev/null; then
84 | PRG="$link"
85 | else
86 | PRG="`dirname "$PRG"`/$link"
87 | fi
88 | done
89 |
90 | saveddir=`pwd`
91 |
92 | M2_HOME=`dirname "$PRG"`/..
93 |
94 | # make it fully qualified
95 | M2_HOME=`cd "$M2_HOME" && pwd`
96 |
97 | cd "$saveddir"
98 | # echo Using m2 at $M2_HOME
99 | fi
100 |
101 | # For Cygwin, ensure paths are in UNIX format before anything is touched
102 | if $cygwin ; then
103 | [ -n "$M2_HOME" ] &&
104 | M2_HOME=`cygpath --unix "$M2_HOME"`
105 | [ -n "$JAVA_HOME" ] &&
106 | JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
107 | [ -n "$CLASSPATH" ] &&
108 | CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
109 | fi
110 |
111 | # For Mingw, ensure paths are in UNIX format before anything is touched
112 | if $mingw ; then
113 | [ -n "$M2_HOME" ] &&
114 | M2_HOME="`(cd "$M2_HOME"; pwd)`"
115 | [ -n "$JAVA_HOME" ] &&
116 | JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`"
117 | # TODO classpath?
118 | fi
119 |
120 | if [ -z "$JAVA_HOME" ]; then
121 | javaExecutable="`which javac`"
122 | if [ -n "$javaExecutable" ] && ! [ "`expr \"$javaExecutable\" : '\([^ ]*\)'`" = "no" ]; then
123 | # readlink(1) is not available as standard on Solaris 10.
124 | readLink=`which readlink`
125 | if [ ! `expr "$readLink" : '\([^ ]*\)'` = "no" ]; then
126 | if $darwin ; then
127 | javaHome="`dirname \"$javaExecutable\"`"
128 | javaExecutable="`cd \"$javaHome\" && pwd -P`/javac"
129 | else
130 | javaExecutable="`readlink -f \"$javaExecutable\"`"
131 | fi
132 | javaHome="`dirname \"$javaExecutable\"`"
133 | javaHome=`expr "$javaHome" : '\(.*\)/bin'`
134 | JAVA_HOME="$javaHome"
135 | export JAVA_HOME
136 | fi
137 | fi
138 | fi
139 |
140 | if [ -z "$JAVACMD" ] ; then
141 | if [ -n "$JAVA_HOME" ] ; then
142 | if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
143 | # IBM's JDK on AIX uses strange locations for the executables
144 | JAVACMD="$JAVA_HOME/jre/sh/java"
145 | else
146 | JAVACMD="$JAVA_HOME/bin/java"
147 | fi
148 | else
149 | JAVACMD="`which java`"
150 | fi
151 | fi
152 |
153 | if [ ! -x "$JAVACMD" ] ; then
154 | echo "Error: JAVA_HOME is not defined correctly." >&2
155 | echo " We cannot execute $JAVACMD" >&2
156 | exit 1
157 | fi
158 |
159 | if [ -z "$JAVA_HOME" ] ; then
160 | echo "Warning: JAVA_HOME environment variable is not set."
161 | fi
162 |
163 | CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher
164 |
165 | # traverses directory structure from process work directory to filesystem root
166 | # first directory with .mvn subdirectory is considered project base directory
167 | find_maven_basedir() {
168 |
169 | if [ -z "$1" ]
170 | then
171 | echo "Path not specified to find_maven_basedir"
172 | return 1
173 | fi
174 |
175 | basedir="$1"
176 | wdir="$1"
177 | while [ "$wdir" != '/' ] ; do
178 | if [ -d "$wdir"/.mvn ] ; then
179 | basedir=$wdir
180 | break
181 | fi
182 | # workaround for JBEAP-8937 (on Solaris 10/Sparc)
183 | if [ -d "${wdir}" ]; then
184 | wdir=`cd "$wdir/.."; pwd`
185 | fi
186 | # end of workaround
187 | done
188 | echo "${basedir}"
189 | }
190 |
191 | # concatenates all lines of a file
192 | concat_lines() {
193 | if [ -f "$1" ]; then
194 | echo "$(tr -s '\n' ' ' < "$1")"
195 | fi
196 | }
197 |
198 | BASE_DIR=`find_maven_basedir "$(pwd)"`
199 | if [ -z "$BASE_DIR" ]; then
200 | exit 1;
201 | fi
202 |
203 | ##########################################################################################
204 | # Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
205 | # This allows using the maven wrapper in projects that prohibit checking in binary data.
206 | ##########################################################################################
207 | if [ -r "$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" ]; then
208 | if [ "$MVNW_VERBOSE" = true ]; then
209 | echo "Found .mvn/wrapper/maven-wrapper.jar"
210 | fi
211 | else
212 | if [ "$MVNW_VERBOSE" = true ]; then
213 | echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..."
214 | fi
215 | jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar"
216 | while IFS="=" read key value; do
217 | case "$key" in (wrapperUrl) jarUrl="$value"; break ;;
218 | esac
219 | done < "$BASE_DIR/.mvn/wrapper/maven-wrapper.properties"
220 | if [ "$MVNW_VERBOSE" = true ]; then
221 | echo "Downloading from: $jarUrl"
222 | fi
223 | wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar"
224 |
225 | if command -v wget > /dev/null; then
226 | if [ "$MVNW_VERBOSE" = true ]; then
227 | echo "Found wget ... using wget"
228 | fi
229 | wget "$jarUrl" -O "$wrapperJarPath"
230 | elif command -v curl > /dev/null; then
231 | if [ "$MVNW_VERBOSE" = true ]; then
232 | echo "Found curl ... using curl"
233 | fi
234 | curl -o "$wrapperJarPath" "$jarUrl"
235 | else
236 | if [ "$MVNW_VERBOSE" = true ]; then
237 | echo "Falling back to using Java to download"
238 | fi
239 | javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java"
240 | if [ -e "$javaClass" ]; then
241 | if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
242 | if [ "$MVNW_VERBOSE" = true ]; then
243 | echo " - Compiling MavenWrapperDownloader.java ..."
244 | fi
245 | # Compiling the Java class
246 | ("$JAVA_HOME/bin/javac" "$javaClass")
247 | fi
248 | if [ -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
249 | # Running the downloader
250 | if [ "$MVNW_VERBOSE" = true ]; then
251 | echo " - Running MavenWrapperDownloader.java ..."
252 | fi
253 | ("$JAVA_HOME/bin/java" -cp .mvn/wrapper MavenWrapperDownloader "$MAVEN_PROJECTBASEDIR")
254 | fi
255 | fi
256 | fi
257 | fi
258 | ##########################################################################################
259 | # End of extension
260 | ##########################################################################################
261 |
262 | export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-"$BASE_DIR"}
263 | if [ "$MVNW_VERBOSE" = true ]; then
264 | echo $MAVEN_PROJECTBASEDIR
265 | fi
266 | MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS"
267 |
268 | # For Cygwin, switch paths to Windows format before running java
269 | if $cygwin; then
270 | [ -n "$M2_HOME" ] &&
271 | M2_HOME=`cygpath --path --windows "$M2_HOME"`
272 | [ -n "$JAVA_HOME" ] &&
273 | JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
274 | [ -n "$CLASSPATH" ] &&
275 | CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
276 | [ -n "$MAVEN_PROJECTBASEDIR" ] &&
277 | MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"`
278 | fi
279 |
280 | WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
281 |
282 | exec "$JAVACMD" \
283 | $MAVEN_OPTS \
284 | -classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \
285 | "-Dmaven.home=${M2_HOME}" "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \
286 | ${WRAPPER_LAUNCHER} $MAVEN_CONFIG "$@"
287 |
--------------------------------------------------------------------------------
/apps/log-consumer/mvnw.cmd:
--------------------------------------------------------------------------------
1 | @REM ----------------------------------------------------------------------------
2 | @REM Licensed to the Apache Software Foundation (ASF) under one
3 | @REM or more contributor license agreements. See the NOTICE file
4 | @REM distributed with this work for additional information
5 | @REM regarding copyright ownership. The ASF licenses this file
6 | @REM to you under the Apache License, Version 2.0 (the
7 | @REM "License"); you may not use this file except in compliance
8 | @REM with the License. You may obtain a copy of the License at
9 | @REM
10 | @REM http://www.apache.org/licenses/LICENSE-2.0
11 | @REM
12 | @REM Unless required by applicable law or agreed to in writing,
13 | @REM software distributed under the License is distributed on an
14 | @REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 | @REM KIND, either express or implied. See the License for the
16 | @REM specific language governing permissions and limitations
17 | @REM under the License.
18 | @REM ----------------------------------------------------------------------------
19 |
20 | @REM ----------------------------------------------------------------------------
21 | @REM Maven2 Start Up Batch script
22 | @REM
23 | @REM Required ENV vars:
24 | @REM JAVA_HOME - location of a JDK home dir
25 | @REM
26 | @REM Optional ENV vars
27 | @REM M2_HOME - location of maven2's installed home dir
28 | @REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
29 | @REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a key stroke before ending
30 | @REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
31 | @REM e.g. to debug Maven itself, use
32 | @REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
33 | @REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
34 | @REM ----------------------------------------------------------------------------
35 |
36 | @REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
37 | @echo off
38 | @REM set title of command window
39 | title %0
40 | @REM enable echoing my setting MAVEN_BATCH_ECHO to 'on'
41 | @if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
42 |
43 | @REM set %HOME% to equivalent of $HOME
44 | if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
45 |
46 | @REM Execute a user defined script before this one
47 | if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
48 | @REM check for pre script, once with legacy .bat ending and once with .cmd ending
49 | if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat"
50 | if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd"
51 | :skipRcPre
52 |
53 | @setlocal
54 |
55 | set ERROR_CODE=0
56 |
57 | @REM To isolate internal variables from possible post scripts, we use another setlocal
58 | @setlocal
59 |
60 | @REM ==== START VALIDATION ====
61 | if not "%JAVA_HOME%" == "" goto OkJHome
62 |
63 | echo.
64 | echo Error: JAVA_HOME not found in your environment. >&2
65 | echo Please set the JAVA_HOME variable in your environment to match the >&2
66 | echo location of your Java installation. >&2
67 | echo.
68 | goto error
69 |
70 | :OkJHome
71 | if exist "%JAVA_HOME%\bin\java.exe" goto init
72 |
73 | echo.
74 | echo Error: JAVA_HOME is set to an invalid directory. >&2
75 | echo JAVA_HOME = "%JAVA_HOME%" >&2
76 | echo Please set the JAVA_HOME variable in your environment to match the >&2
77 | echo location of your Java installation. >&2
78 | echo.
79 | goto error
80 |
81 | @REM ==== END VALIDATION ====
82 |
83 | :init
84 |
85 | @REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
86 | @REM Fallback to current working directory if not found.
87 |
88 | set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
89 | IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
90 |
91 | set EXEC_DIR=%CD%
92 | set WDIR=%EXEC_DIR%
93 | :findBaseDir
94 | IF EXIST "%WDIR%"\.mvn goto baseDirFound
95 | cd ..
96 | IF "%WDIR%"=="%CD%" goto baseDirNotFound
97 | set WDIR=%CD%
98 | goto findBaseDir
99 |
100 | :baseDirFound
101 | set MAVEN_PROJECTBASEDIR=%WDIR%
102 | cd "%EXEC_DIR%"
103 | goto endDetectBaseDir
104 |
105 | :baseDirNotFound
106 | set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
107 | cd "%EXEC_DIR%"
108 |
109 | :endDetectBaseDir
110 |
111 | IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
112 |
113 | @setlocal EnableExtensions EnableDelayedExpansion
114 | for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
115 | @endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
116 |
117 | :endReadAdditionalConfig
118 |
119 | SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
120 | set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
121 | set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
122 |
123 | set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar"
124 | FOR /F "tokens=1,2 delims==" %%A IN (%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties) DO (
125 | IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
126 | )
127 |
128 | @REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
129 | @REM This allows using the maven wrapper in projects that prohibit checking in binary data.
130 | if exist %WRAPPER_JAR% (
131 | echo Found %WRAPPER_JAR%
132 | ) else (
133 | echo Couldn't find %WRAPPER_JAR%, downloading it ...
134 | echo Downloading from: %DOWNLOAD_URL%
135 | powershell -Command "(New-Object Net.WebClient).DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"
136 | echo Finished downloading %WRAPPER_JAR%
137 | )
138 | @REM End of extension
139 |
140 | %MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
141 | if ERRORLEVEL 1 goto error
142 | goto end
143 |
144 | :error
145 | set ERROR_CODE=1
146 |
147 | :end
148 | @endlocal & set ERROR_CODE=%ERROR_CODE%
149 |
150 | if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost
151 | @REM check for post script, once with legacy .bat ending and once with .cmd ending
152 | if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat"
153 | if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd"
154 | :skipRcPost
155 |
156 | @REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
157 | if "%MAVEN_BATCH_PAUSE%" == "on" pause
158 |
159 | if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE%
160 |
161 | exit /B %ERROR_CODE%
162 |
--------------------------------------------------------------------------------
/apps/log-consumer/pom.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
3 |          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
4 |     <modelVersion>4.0.0</modelVersion>
5 |     <parent>
6 |         <groupId>org.springframework.boot</groupId>
7 |         <artifactId>spring-boot-starter-parent</artifactId>
8 |         <version>2.1.2.RELEASE</version>
9 |     </parent>
10 |
11 |     <groupId>com.redhat.workshop.amqstreams</groupId>
12 |     <artifactId>log-consumer</artifactId>
13 |     <version>0.0.1-SNAPSHOT</version>
14 |     <name>rest-consumer</name>
15 |     <description>Demo project for Spring Boot</description>
16 |
17 |     <properties>
18 |         <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
19 |         <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
20 |         <java.version>1.8</java.version>
21 |         <camel.version>2.23.0</camel.version>
22 |         <!-- the original tag name was lost in extraction; fabric8.mode is the usual property for this value -->
23 |         <fabric8.mode>kubernetes</fabric8.mode>
24 |     </properties>
25 |
26 |     <dependencies>
27 |         <dependency>
28 |             <groupId>org.springframework.boot</groupId>
29 |             <artifactId>spring-boot-starter-web</artifactId>
30 |             <exclusions>
31 |                 <exclusion>
32 |                     <groupId>org.springframework.boot</groupId>
33 |                     <artifactId>spring-boot-starter-tomcat</artifactId>
34 |                 </exclusion>
35 |             </exclusions>
36 |         </dependency>
37 |         <dependency>
38 |             <groupId>org.springframework.boot</groupId>
39 |             <artifactId>spring-boot-starter-jetty</artifactId>
40 |         </dependency>
41 |         <dependency>
42 |             <groupId>org.apache.camel</groupId>
43 |             <artifactId>camel-spring-boot-starter</artifactId>
44 |             <version>${camel.version}</version>
45 |         </dependency>
46 |         <dependency>
47 |             <groupId>org.apache.camel</groupId>
48 |             <artifactId>camel-servlet-starter</artifactId>
49 |             <version>${camel.version}</version>
50 |         </dependency>
51 |         <dependency>
52 |             <groupId>org.apache.camel</groupId>
53 |             <artifactId>camel-kafka</artifactId>
54 |             <version>${camel.version}</version>
55 |         </dependency>
56 |         <dependency>
57 |             <groupId>org.apache.camel</groupId>
58 |             <artifactId>camel-kafka-starter</artifactId>
59 |             <version>${camel.version}</version>
60 |         </dependency>
61 |         <dependency>
62 |             <groupId>org.springframework.boot</groupId>
63 |             <artifactId>spring-boot-starter-test</artifactId>
64 |             <scope>test</scope>
65 |         </dependency>
66 |     </dependencies>
67 |
68 |     <build>
69 |         <plugins>
70 |             <plugin>
71 |                 <groupId>org.springframework.boot</groupId>
72 |                 <artifactId>spring-boot-maven-plugin</artifactId>
73 |             </plugin>
74 |             <plugin>
75 |                 <groupId>io.fabric8</groupId>
76 |                 <artifactId>fabric8-maven-plugin</artifactId>
77 |                 <executions>
78 |                     <execution>
79 |                         <goals>
80 |                             <goal>resource</goal>
81 |                             <goal>build</goal>
82 |                         </goals>
83 |                     </execution>
84 |                 </executions>
85 |             </plugin>
86 |         </plugins>
87 |     </build>
88 | </project>
--------------------------------------------------------------------------------
/apps/log-consumer/src/main/java/com/redhat/workshop/amqstreams/logconsumer/LogConsumerApplication.java:
--------------------------------------------------------------------------------
1 | package com.redhat.workshop.amqstreams.logconsumer;
2 |
3 | import org.apache.camel.Message;
4 | import org.apache.camel.builder.RouteBuilder;
5 | import org.apache.camel.model.rest.RestBindingMode;
6 | import org.springframework.boot.SpringApplication;
7 | import org.springframework.boot.autoconfigure.SpringBootApplication;
8 | import org.springframework.context.annotation.Bean;
9 |
10 | import java.util.List;
11 | import java.util.concurrent.CopyOnWriteArrayList;
12 |
13 | @SpringBootApplication
14 | public class LogConsumerApplication {
15 |
16 |
17 | private List<String> lines = new CopyOnWriteArrayList<>();
18 |
19 |
20 | public static void main(String[] args) {
21 | SpringApplication.run(LogConsumerApplication.class, args);
22 | }
23 |
24 | @Bean
25 | public RouteBuilder routeBuilder() {
26 | return new RouteBuilder() {
27 | @Override
28 | public void configure() throws Exception {
29 |
30 |                 restConfiguration()
31 |                         .component("servlet")
32 |                         .bindingMode(RestBindingMode.auto);
33 |
34 |
35 | from("kafka:{{input.topic}}")
36 | .log("Received from '${in.headers[kafka.TOPIC]}': ${in.body}")
37 | .process(exchange -> {
38 | Message in = exchange.getIn();
39 | lines.add(in.getBody(String.class));
40 | });
41 |
42 | rest("/lines")
43 | .get().route().setBody(constant(lines)).endRest();
44 |
45 | }
46 | };
47 | }
48 |
49 | }
50 |
51 |
--------------------------------------------------------------------------------
/apps/log-consumer/src/main/resources/application.properties:
--------------------------------------------------------------------------------
1 | input.topic=lines
2 | camel.component.kafka.configuration.brokers=localhost:9092
--------------------------------------------------------------------------------
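(Editor's note: both properties above are ordinary Spring Boot configuration, so they can be overridden at launch without rebuilding the application — for example, pointing the consumer at an in-cluster bootstrap service. The jar name and bootstrap address below are placeholders, assuming the app was built with `./mvnw package`:

    java -jar target/log-consumer-0.0.1-SNAPSHOT.jar \
        --camel.component.kafka.configuration.brokers=my-cluster-kafka-bootstrap:9092
)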
/apps/timer-producer/.gitignore:
--------------------------------------------------------------------------------
1 | /target/
2 | !.mvn/wrapper/maven-wrapper.jar
3 |
4 | ### STS ###
5 | .apt_generated
6 | .classpath
7 | .factorypath
8 | .project
9 | .settings
10 | .springBeans
11 | .sts4-cache
12 |
13 | ### IntelliJ IDEA ###
14 | .idea
15 | *.iws
16 | *.iml
17 | *.ipr
18 |
19 | ### NetBeans ###
20 | /nbproject/private/
21 | /nbbuild/
22 | /dist/
23 | /nbdist/
24 | /.nb-gradle/
25 | /build/
26 |
--------------------------------------------------------------------------------
/apps/timer-producer/.mvn/wrapper/maven-wrapper.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedHat-Middleware-Workshops/workshop-amq-streams/e2c0a0e8e8b1be5b3d57e73a490f5e5640b90486/apps/timer-producer/.mvn/wrapper/maven-wrapper.jar
--------------------------------------------------------------------------------
/apps/timer-producer/.mvn/wrapper/maven-wrapper.properties:
--------------------------------------------------------------------------------
1 | distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.5.4/apache-maven-3.5.4-bin.zip
2 |
--------------------------------------------------------------------------------
/apps/timer-producer/mvnw:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | # ----------------------------------------------------------------------------
3 | # Licensed to the Apache Software Foundation (ASF) under one
4 | # or more contributor license agreements. See the NOTICE file
5 | # distributed with this work for additional information
6 | # regarding copyright ownership. The ASF licenses this file
7 | # to you under the Apache License, Version 2.0 (the
8 | # "License"); you may not use this file except in compliance
9 | # with the License. You may obtain a copy of the License at
10 | #
11 | # http://www.apache.org/licenses/LICENSE-2.0
12 | #
13 | # Unless required by applicable law or agreed to in writing,
14 | # software distributed under the License is distributed on an
15 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
16 | # KIND, either express or implied. See the License for the
17 | # specific language governing permissions and limitations
18 | # under the License.
19 | # ----------------------------------------------------------------------------
20 |
21 | # ----------------------------------------------------------------------------
22 | # Maven2 Start Up Batch script
23 | #
24 | # Required ENV vars:
25 | # ------------------
26 | # JAVA_HOME - location of a JDK home dir
27 | #
28 | # Optional ENV vars
29 | # -----------------
30 | # M2_HOME - location of maven2's installed home dir
31 | # MAVEN_OPTS - parameters passed to the Java VM when running Maven
32 | # e.g. to debug Maven itself, use
33 | # set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
34 | # MAVEN_SKIP_RC - flag to disable loading of mavenrc files
35 | # ----------------------------------------------------------------------------
36 |
37 | if [ -z "$MAVEN_SKIP_RC" ] ; then
38 |
39 | if [ -f /etc/mavenrc ] ; then
40 | . /etc/mavenrc
41 | fi
42 |
43 | if [ -f "$HOME/.mavenrc" ] ; then
44 | . "$HOME/.mavenrc"
45 | fi
46 |
47 | fi
48 |
49 | # OS specific support. $var _must_ be set to either true or false.
50 | cygwin=false;
51 | darwin=false;
52 | mingw=false
53 | case "`uname`" in
54 | CYGWIN*) cygwin=true ;;
55 | MINGW*) mingw=true;;
56 | Darwin*) darwin=true
57 | # Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home
58 | # See https://developer.apple.com/library/mac/qa/qa1170/_index.html
59 | if [ -z "$JAVA_HOME" ]; then
60 | if [ -x "/usr/libexec/java_home" ]; then
61 | export JAVA_HOME="`/usr/libexec/java_home`"
62 | else
63 | export JAVA_HOME="/Library/Java/Home"
64 | fi
65 | fi
66 | ;;
67 | esac
68 |
69 | if [ -z "$JAVA_HOME" ] ; then
70 | if [ -r /etc/gentoo-release ] ; then
71 | JAVA_HOME=`java-config --jre-home`
72 | fi
73 | fi
74 |
75 | if [ -z "$M2_HOME" ] ; then
76 | ## resolve links - $0 may be a link to maven's home
77 | PRG="$0"
78 |
79 | # need this for relative symlinks
80 | while [ -h "$PRG" ] ; do
81 | ls=`ls -ld "$PRG"`
82 | link=`expr "$ls" : '.*-> \(.*\)$'`
83 | if expr "$link" : '/.*' > /dev/null; then
84 | PRG="$link"
85 | else
86 | PRG="`dirname "$PRG"`/$link"
87 | fi
88 | done
89 |
90 | saveddir=`pwd`
91 |
92 | M2_HOME=`dirname "$PRG"`/..
93 |
94 | # make it fully qualified
95 | M2_HOME=`cd "$M2_HOME" && pwd`
96 |
97 | cd "$saveddir"
98 | # echo Using m2 at $M2_HOME
99 | fi
100 |
101 | # For Cygwin, ensure paths are in UNIX format before anything is touched
102 | if $cygwin ; then
103 | [ -n "$M2_HOME" ] &&
104 | M2_HOME=`cygpath --unix "$M2_HOME"`
105 | [ -n "$JAVA_HOME" ] &&
106 | JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
107 | [ -n "$CLASSPATH" ] &&
108 | CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
109 | fi
110 |
111 | # For Mingw, ensure paths are in UNIX format before anything is touched
112 | if $mingw ; then
113 | [ -n "$M2_HOME" ] &&
114 | M2_HOME="`(cd "$M2_HOME"; pwd)`"
115 | [ -n "$JAVA_HOME" ] &&
116 | JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`"
117 | # TODO classpath?
118 | fi
119 |
120 | if [ -z "$JAVA_HOME" ]; then
121 | javaExecutable="`which javac`"
122 | if [ -n "$javaExecutable" ] && ! [ "`expr \"$javaExecutable\" : '\([^ ]*\)'`" = "no" ]; then
123 | # readlink(1) is not available as standard on Solaris 10.
124 | readLink=`which readlink`
125 | if [ ! `expr "$readLink" : '\([^ ]*\)'` = "no" ]; then
126 | if $darwin ; then
127 | javaHome="`dirname \"$javaExecutable\"`"
128 | javaExecutable="`cd \"$javaHome\" && pwd -P`/javac"
129 | else
130 | javaExecutable="`readlink -f \"$javaExecutable\"`"
131 | fi
132 | javaHome="`dirname \"$javaExecutable\"`"
133 | javaHome=`expr "$javaHome" : '\(.*\)/bin'`
134 | JAVA_HOME="$javaHome"
135 | export JAVA_HOME
136 | fi
137 | fi
138 | fi
139 |
140 | if [ -z "$JAVACMD" ] ; then
141 | if [ -n "$JAVA_HOME" ] ; then
142 | if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
143 | # IBM's JDK on AIX uses strange locations for the executables
144 | JAVACMD="$JAVA_HOME/jre/sh/java"
145 | else
146 | JAVACMD="$JAVA_HOME/bin/java"
147 | fi
148 | else
149 | JAVACMD="`which java`"
150 | fi
151 | fi
152 |
153 | if [ ! -x "$JAVACMD" ] ; then
154 | echo "Error: JAVA_HOME is not defined correctly." >&2
155 | echo " We cannot execute $JAVACMD" >&2
156 | exit 1
157 | fi
158 |
159 | if [ -z "$JAVA_HOME" ] ; then
160 | echo "Warning: JAVA_HOME environment variable is not set."
161 | fi
162 |
163 | CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher
164 |
165 | # traverses directory structure from process work directory to filesystem root
166 | # first directory with .mvn subdirectory is considered project base directory
167 | find_maven_basedir() {
168 |
169 | if [ -z "$1" ]
170 | then
171 | echo "Path not specified to find_maven_basedir"
172 | return 1
173 | fi
174 |
175 | basedir="$1"
176 | wdir="$1"
177 | while [ "$wdir" != '/' ] ; do
178 | if [ -d "$wdir"/.mvn ] ; then
179 | basedir=$wdir
180 | break
181 | fi
182 | # workaround for JBEAP-8937 (on Solaris 10/Sparc)
183 | if [ -d "${wdir}" ]; then
184 | wdir=`cd "$wdir/.."; pwd`
185 | fi
186 | # end of workaround
187 | done
188 | echo "${basedir}"
189 | }
190 |
191 | # concatenates all lines of a file
192 | concat_lines() {
193 | if [ -f "$1" ]; then
194 | echo "$(tr -s '\n' ' ' < "$1")"
195 | fi
196 | }
197 |
198 | BASE_DIR=`find_maven_basedir "$(pwd)"`
199 | if [ -z "$BASE_DIR" ]; then
200 | exit 1;
201 | fi
202 |
203 | ##########################################################################################
204 | # Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
205 | # This allows using the maven wrapper in projects that prohibit checking in binary data.
206 | ##########################################################################################
207 | if [ -r "$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" ]; then
208 | if [ "$MVNW_VERBOSE" = true ]; then
209 | echo "Found .mvn/wrapper/maven-wrapper.jar"
210 | fi
211 | else
212 | if [ "$MVNW_VERBOSE" = true ]; then
213 | echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..."
214 | fi
215 | jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar"
216 | while IFS="=" read key value; do
217 | case "$key" in (wrapperUrl) jarUrl="$value"; break ;;
218 | esac
219 | done < "$BASE_DIR/.mvn/wrapper/maven-wrapper.properties"
220 | if [ "$MVNW_VERBOSE" = true ]; then
221 | echo "Downloading from: $jarUrl"
222 | fi
223 | wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar"
224 |
225 | if command -v wget > /dev/null; then
226 | if [ "$MVNW_VERBOSE" = true ]; then
227 | echo "Found wget ... using wget"
228 | fi
229 | wget "$jarUrl" -O "$wrapperJarPath"
230 | elif command -v curl > /dev/null; then
231 | if [ "$MVNW_VERBOSE" = true ]; then
232 | echo "Found curl ... using curl"
233 | fi
234 | curl -o "$wrapperJarPath" "$jarUrl"
235 | else
236 | if [ "$MVNW_VERBOSE" = true ]; then
237 | echo "Falling back to using Java to download"
238 | fi
239 | javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java"
240 | if [ -e "$javaClass" ]; then
241 | if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
242 | if [ "$MVNW_VERBOSE" = true ]; then
243 | echo " - Compiling MavenWrapperDownloader.java ..."
244 | fi
245 | # Compiling the Java class
246 | ("$JAVA_HOME/bin/javac" "$javaClass")
247 | fi
248 | if [ -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then
249 | # Running the downloader
250 | if [ "$MVNW_VERBOSE" = true ]; then
251 | echo " - Running MavenWrapperDownloader.java ..."
252 | fi
253 | ("$JAVA_HOME/bin/java" -cp .mvn/wrapper MavenWrapperDownloader "$MAVEN_PROJECTBASEDIR")
254 | fi
255 | fi
256 | fi
257 | fi
258 | ##########################################################################################
259 | # End of extension
260 | ##########################################################################################
261 |
262 | export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-"$BASE_DIR"}
263 | if [ "$MVNW_VERBOSE" = true ]; then
264 | echo $MAVEN_PROJECTBASEDIR
265 | fi
266 | MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS"
267 |
268 | # For Cygwin, switch paths to Windows format before running java
269 | if $cygwin; then
270 | [ -n "$M2_HOME" ] &&
271 | M2_HOME=`cygpath --path --windows "$M2_HOME"`
272 | [ -n "$JAVA_HOME" ] &&
273 | JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"`
274 | [ -n "$CLASSPATH" ] &&
275 | CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
276 | [ -n "$MAVEN_PROJECTBASEDIR" ] &&
277 | MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"`
278 | fi
279 |
280 | WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
281 |
282 | exec "$JAVACMD" \
283 | $MAVEN_OPTS \
284 | -classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \
285 | "-Dmaven.home=${M2_HOME}" "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \
286 | ${WRAPPER_LAUNCHER} $MAVEN_CONFIG "$@"
287 |
--------------------------------------------------------------------------------
/apps/timer-producer/mvnw.cmd:
--------------------------------------------------------------------------------
1 | @REM ----------------------------------------------------------------------------
2 | @REM Licensed to the Apache Software Foundation (ASF) under one
3 | @REM or more contributor license agreements. See the NOTICE file
4 | @REM distributed with this work for additional information
5 | @REM regarding copyright ownership. The ASF licenses this file
6 | @REM to you under the Apache License, Version 2.0 (the
7 | @REM "License"); you may not use this file except in compliance
8 | @REM with the License. You may obtain a copy of the License at
9 | @REM
10 | @REM http://www.apache.org/licenses/LICENSE-2.0
11 | @REM
12 | @REM Unless required by applicable law or agreed to in writing,
13 | @REM software distributed under the License is distributed on an
14 | @REM "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 | @REM KIND, either express or implied. See the License for the
16 | @REM specific language governing permissions and limitations
17 | @REM under the License.
18 | @REM ----------------------------------------------------------------------------
19 |
20 | @REM ----------------------------------------------------------------------------
21 | @REM Maven2 Start Up Batch script
22 | @REM
23 | @REM Required ENV vars:
24 | @REM JAVA_HOME - location of a JDK home dir
25 | @REM
26 | @REM Optional ENV vars
27 | @REM M2_HOME - location of maven2's installed home dir
28 | @REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands
29 | @REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a key stroke before ending
30 | @REM MAVEN_OPTS - parameters passed to the Java VM when running Maven
31 | @REM e.g. to debug Maven itself, use
32 | @REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000
33 | @REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files
34 | @REM ----------------------------------------------------------------------------
35 |
36 | @REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'
37 | @echo off
38 | @REM set title of command window
39 | title %0
40 | @REM enable echoing by setting MAVEN_BATCH_ECHO to 'on'
41 | @if "%MAVEN_BATCH_ECHO%" == "on" echo %MAVEN_BATCH_ECHO%
42 |
43 | @REM set %HOME% to equivalent of $HOME
44 | if "%HOME%" == "" (set "HOME=%HOMEDRIVE%%HOMEPATH%")
45 |
46 | @REM Execute a user defined script before this one
47 | if not "%MAVEN_SKIP_RC%" == "" goto skipRcPre
48 | @REM check for pre script, once with legacy .bat ending and once with .cmd ending
49 | if exist "%HOME%\mavenrc_pre.bat" call "%HOME%\mavenrc_pre.bat"
50 | if exist "%HOME%\mavenrc_pre.cmd" call "%HOME%\mavenrc_pre.cmd"
51 | :skipRcPre
52 |
53 | @setlocal
54 |
55 | set ERROR_CODE=0
56 |
57 | @REM To isolate internal variables from possible post scripts, we use another setlocal
58 | @setlocal
59 |
60 | @REM ==== START VALIDATION ====
61 | if not "%JAVA_HOME%" == "" goto OkJHome
62 |
63 | echo.
64 | echo Error: JAVA_HOME not found in your environment. >&2
65 | echo Please set the JAVA_HOME variable in your environment to match the >&2
66 | echo location of your Java installation. >&2
67 | echo.
68 | goto error
69 |
70 | :OkJHome
71 | if exist "%JAVA_HOME%\bin\java.exe" goto init
72 |
73 | echo.
74 | echo Error: JAVA_HOME is set to an invalid directory. >&2
75 | echo JAVA_HOME = "%JAVA_HOME%" >&2
76 | echo Please set the JAVA_HOME variable in your environment to match the >&2
77 | echo location of your Java installation. >&2
78 | echo.
79 | goto error
80 |
81 | @REM ==== END VALIDATION ====
82 |
83 | :init
84 |
85 | @REM Find the project base dir, i.e. the directory that contains the folder ".mvn".
86 | @REM Fallback to current working directory if not found.
87 |
88 | set MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%
89 | IF NOT "%MAVEN_PROJECTBASEDIR%"=="" goto endDetectBaseDir
90 |
91 | set EXEC_DIR=%CD%
92 | set WDIR=%EXEC_DIR%
93 | :findBaseDir
94 | IF EXIST "%WDIR%"\.mvn goto baseDirFound
95 | cd ..
96 | IF "%WDIR%"=="%CD%" goto baseDirNotFound
97 | set WDIR=%CD%
98 | goto findBaseDir
99 |
100 | :baseDirFound
101 | set MAVEN_PROJECTBASEDIR=%WDIR%
102 | cd "%EXEC_DIR%"
103 | goto endDetectBaseDir
104 |
105 | :baseDirNotFound
106 | set MAVEN_PROJECTBASEDIR=%EXEC_DIR%
107 | cd "%EXEC_DIR%"
108 |
109 | :endDetectBaseDir
110 |
111 | IF NOT EXIST "%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config" goto endReadAdditionalConfig
112 |
113 | @setlocal EnableExtensions EnableDelayedExpansion
114 | for /F "usebackq delims=" %%a in ("%MAVEN_PROJECTBASEDIR%\.mvn\jvm.config") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a
115 | @endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%
116 |
117 | :endReadAdditionalConfig
118 |
119 | SET MAVEN_JAVA_EXE="%JAVA_HOME%\bin\java.exe"
120 | set WRAPPER_JAR="%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.jar"
121 | set WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain
122 |
123 | set DOWNLOAD_URL="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.4.2/maven-wrapper-0.4.2.jar"
124 | FOR /F "tokens=1,2 delims==" %%A IN (%MAVEN_PROJECTBASEDIR%\.mvn\wrapper\maven-wrapper.properties) DO (
125 | IF "%%A"=="wrapperUrl" SET DOWNLOAD_URL=%%B
126 | )
127 |
128 | @REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central
129 | @REM This allows using the maven wrapper in projects that prohibit checking in binary data.
130 | if exist %WRAPPER_JAR% (
131 | echo Found %WRAPPER_JAR%
132 | ) else (
133 | echo Couldn't find %WRAPPER_JAR%, downloading it ...
134 | echo Downloading from: %DOWNLOAD_URL%
135 | powershell -Command "(New-Object Net.WebClient).DownloadFile('%DOWNLOAD_URL%', '%WRAPPER_JAR%')"
136 | echo Finished downloading %WRAPPER_JAR%
137 | )
138 | @REM End of extension
139 |
140 | %MAVEN_JAVA_EXE% %JVM_CONFIG_MAVEN_PROPS% %MAVEN_OPTS% %MAVEN_DEBUG_OPTS% -classpath %WRAPPER_JAR% "-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%" %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*
141 | if ERRORLEVEL 1 goto error
142 | goto end
143 |
144 | :error
145 | set ERROR_CODE=1
146 |
147 | :end
148 | @endlocal & set ERROR_CODE=%ERROR_CODE%
149 |
150 | if not "%MAVEN_SKIP_RC%" == "" goto skipRcPost
151 | @REM check for post script, once with legacy .bat ending and once with .cmd ending
152 | if exist "%HOME%\mavenrc_post.bat" call "%HOME%\mavenrc_post.bat"
153 | if exist "%HOME%\mavenrc_post.cmd" call "%HOME%\mavenrc_post.cmd"
154 | :skipRcPost
155 |
156 | @REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'
157 | if "%MAVEN_BATCH_PAUSE%" == "on" pause
158 |
159 | if "%MAVEN_TERMINATE_CMD%" == "on" exit %ERROR_CODE%
160 |
161 | exit /B %ERROR_CODE%
162 |
--------------------------------------------------------------------------------
/apps/timer-producer/pom.xml:
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.2.RELEASE</version>
    </parent>

    <groupId>com.redhat.workshop.amqstreams</groupId>
    <artifactId>time-producer</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>time-producer</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
        <camel.version>2.23.0</camel.version>
        <fabric8.mode>kubernetes</fabric8.mode>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-starter-tomcat</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-jetty</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-spring-boot-starter</artifactId>
            <version>${camel.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-servlet-starter</artifactId>
            <version>${camel.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-kafka</artifactId>
            <version>${camel.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.camel</groupId>
            <artifactId>camel-kafka-starter</artifactId>
            <version>${camel.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>io.fabric8</groupId>
                <artifactId>fabric8-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>resource</goal>
                            <goal>build</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
--------------------------------------------------------------------------------
/apps/timer-producer/src/main/java/com/redhat/workshop/amqstreams/timerproducer/TimerProducerApplication.java:
--------------------------------------------------------------------------------
1 | package com.redhat.workshop.amqstreams.timerproducer;
2 |
3 | import org.apache.camel.builder.RouteBuilder;
4 | import org.springframework.boot.SpringApplication;
5 | import org.springframework.boot.autoconfigure.SpringBootApplication;
6 | import org.springframework.context.annotation.Bean;
7 |
8 | @SpringBootApplication
9 | public class TimerProducerApplication {
10 |
11 | @Bean
12 | public RouteBuilder routeBuilder() {
13 | return new RouteBuilder() {
14 | @Override
15 | public void configure() throws Exception {
16 |
17 |
18 | from("timer:hello?period={{timer.period}}")
19 | .transform().simple("Message ${in.header.CamelTimerCounter} at ${in.header.CamelTimerFiredTime}")
20 | .log("Sent ${in.body}")
21 | .inOnly("kafka:{{output.topic}}");
22 |
23 |
24 | }
25 | };
26 | }
27 |
28 | public static void main(String[] args) {
29 | SpringApplication.run(TimerProducerApplication.class, args);
30 | }
31 |
32 | }
33 |
34 |
--------------------------------------------------------------------------------
/apps/timer-producer/src/main/resources/application.properties:
--------------------------------------------------------------------------------
1 | output.topic=lines
2 | application.seekTo=beginning
3 | timer.period=5000
4 | camel.component.kafka.configuration.brokers=localhost:9092
--------------------------------------------------------------------------------
/bin/log-consumer.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedHat-Middleware-Workshops/workshop-amq-streams/e2c0a0e8e8b1be5b3d57e73a490f5e5640b90486/bin/log-consumer.jar
--------------------------------------------------------------------------------
/bin/timer-producer.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/RedHat-Middleware-Workshops/workshop-amq-streams/e2c0a0e8e8b1be5b3d57e73a490f5e5640b90486/bin/timer-producer.jar
--------------------------------------------------------------------------------
/configurations/applications/log-consumer-secured.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | name: log-consumer
5 | labels:
6 | app: kafka-workshop
7 | spec:
8 | replicas: 1
9 | template:
10 | metadata:
11 | labels:
12 | app: kafka-workshop
13 | name: log-consumer
14 | spec:
15 | containers:
16 | - name: log-consumer
17 | image: docker.io/mbogoevici/log-consumer:latest
18 | env:
19 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
20 | value: "production-ready-kafka-bootstrap.amq-streams.svc:9092"
21 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_GROUP_ID
22 | value: secure-group
23 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SASL_JAAS_CONFIG
24 | value: org.apache.kafka.common.security.scram.ScramLoginModule required username='${KAFKA_USER}' password='${KAFKA_PASSWORD}';
25 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SASL_MECHANISM
26 | value: SCRAM-SHA-512
27 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SECURITY_PROTOCOL
28 | value: SASL_PLAINTEXT
29 | - name: KAFKA_USER
30 | value: secure-topic-reader
31 | - name: KAFKA_PASSWORD
32 | valueFrom:
33 | secretKeyRef:
34 | key: password
35 | name: secure-topic-reader
36 |
--------------------------------------------------------------------------------
/configurations/applications/log-consumer-target.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | name: log-consumer
5 | labels:
6 | app: kafka-workshop
7 | spec:
8 | replicas: 1
9 | template:
10 | metadata:
11 | labels:
12 | app: kafka-workshop
13 | name: log-consumer
14 | spec:
15 | containers:
16 | - name: log-consumer
17 | image: docker.io/mbogoevici/log-consumer:latest
18 | env:
19 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
20 | value: "production-ready-target-kafka-bootstrap.amq-streams.svc:9092"
21 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_GROUP_ID
22 | value: test-group
23 |
--------------------------------------------------------------------------------
/configurations/applications/log-consumer-team-1.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | name: log-consumer
5 | labels:
6 | app: kafka-workshop
7 | spec:
8 | replicas: 1
9 | template:
10 | metadata:
11 | labels:
12 | app: kafka-workshop
13 | name: log-consumer
14 | spec:
15 | containers:
16 | - name: log-consumer
17 | image: docker.io/mbogoevici/log-consumer:latest
18 | env:
19 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
20 | value: "production-ready-kafka-bootstrap.team-1.svc:9092"
21 |
--------------------------------------------------------------------------------
/configurations/applications/log-consumer-team-2.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | name: log-consumer
5 | labels:
6 | app: kafka-workshop
7 | spec:
8 | replicas: 1
9 | template:
10 | metadata:
11 | labels:
12 | app: kafka-workshop
13 | name: log-consumer
14 | spec:
15 | containers:
16 | - name: log-consumer
17 | image: docker.io/mbogoevici/log-consumer:latest
18 | env:
19 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
20 | value: "production-ready-kafka-bootstrap.team-2.svc:9092"
21 |
--------------------------------------------------------------------------------
/configurations/applications/log-consumer.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | name: log-consumer
5 | labels:
6 | app: kafka-workshop
7 | spec:
8 | replicas: 1
9 | template:
10 | metadata:
11 | labels:
12 | app: kafka-workshop
13 | name: log-consumer
14 | spec:
15 | containers:
16 | - name: log-consumer
17 | image: docker.io/mbogoevici/log-consumer:latest
18 | env:
19 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
20 | value: "production-ready-kafka-bootstrap.amq-streams.svc:9092"
21 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_GROUP_ID
22 | value: test-group
23 |
--------------------------------------------------------------------------------
/configurations/applications/timer-producer-secured.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | name: timer-producer
5 | labels:
6 | app: kafka-workshop
7 | spec:
8 | replicas: 1
9 | template:
10 | metadata:
11 | labels:
12 | app: kafka-workshop
13 | name: timer-producer
14 | spec:
15 | containers:
16 | - name: timer-producer
17 | image: docker.io/mbogoevici/timer-producer:latest
18 | env:
19 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
20 | value: "production-ready-kafka-bootstrap.amq-streams.svc:9092"
21 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SASL_JAAS_CONFIG
22 | value: org.apache.kafka.common.security.scram.ScramLoginModule required username='${KAFKA_USER}' password='${KAFKA_PASSWORD}';
23 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SASL_MECHANISM
24 | value: SCRAM-SHA-512
25 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SECURITY_PROTOCOL
26 | value: SASL_PLAINTEXT
27 | - name: KAFKA_USER
28 | value: secure-topic-writer
29 | - name: KAFKA_PASSWORD
30 | valueFrom:
31 | secretKeyRef:
32 | key: password
33 | name: secure-topic-writer
34 |
--------------------------------------------------------------------------------
/configurations/applications/timer-producer-team-1.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | name: timer-producer
5 | labels:
6 | app: kafka-workshop
7 | spec:
8 | replicas: 1
9 | template:
10 | metadata:
11 | labels:
12 | app: kafka-workshop
13 | name: timer-producer
14 | spec:
15 | containers:
16 | - name: timer-producer
17 | image: docker.io/mbogoevici/timer-producer:latest
18 | env:
19 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
20 | value: "production-ready-kafka-bootstrap.team-1.svc:9092"
21 |
--------------------------------------------------------------------------------
/configurations/applications/timer-producer.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | name: timer-producer
5 | labels:
6 | app: kafka-workshop
7 | spec:
8 | replicas: 1
9 | template:
10 | metadata:
11 | labels:
12 | app: kafka-workshop
13 | name: timer-producer
14 | spec:
15 | containers:
16 | - name: timer-producer
17 | image: docker.io/mbogoevici/timer-producer:latest
18 | env:
19 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
20 | value: "production-ready-kafka-bootstrap.amq-streams.svc:9092"
21 |
--------------------------------------------------------------------------------
/configurations/clusters/kafka-connect.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: KafkaConnect
3 | metadata:
4 | name: connect-cluster
5 | spec:
6 | replicas: 1
7 | bootstrapServers: production-ready-kafka-bootstrap.amq-streams.svc:9092
8 | authentication:
9 | type: scram-sha-512
10 | username: secure-topic-reader
11 | passwordSecret:
12 | secretName: secure-topic-reader
13 | password: password
14 |
--------------------------------------------------------------------------------
/configurations/clusters/mirror-maker-single-namespace.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: KafkaMirrorMaker
3 | metadata:
4 | name: mirror-maker
5 | spec:
6 | image: strimzi/kafka-mirror-maker:latest
7 | replicas: 1
8 | consumer:
9 | bootstrapServers: production-ready-kafka-bootstrap.amq-streams.svc:9092
10 | groupId: mirror-maker-group-id
11 | producer:
12 | bootstrapServers: production-ready-target-kafka-bootstrap.amq-streams.svc:9092
13 | whitelist: "lines|test-topic"
14 |
--------------------------------------------------------------------------------
/configurations/clusters/mirror-maker.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: KafkaMirrorMaker
3 | metadata:
4 | name: mirror-maker
5 | spec:
6 | image: strimzi/kafka-mirror-maker:latest
7 | replicas: 1
8 | consumer:
9 | bootstrapServers: production-ready-kafka-bootstrap.team-1.svc:9092
10 | groupId: mirror-maker-group-id
11 | producer:
12 | bootstrapServers: production-ready-kafka-bootstrap.team-2.svc:9092
13 | whitelist: "lines|test-topic"
14 |
--------------------------------------------------------------------------------
/configurations/clusters/production-ready-5-nodes.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: Kafka
3 | metadata:
4 | name: production-ready
5 | spec:
6 | kafka:
7 | replicas: 5
8 | listeners:
9 | plain: {}
10 | tls: {}
11 | config:
12 | offsets.topic.replication.factor: 3
13 | transaction.state.log.replication.factor: 3
14 | transaction.state.log.min.isr: 2
15 | storage:
16 | type: persistent-claim
17 | size: 3Gi
18 | deleteClaim: false
19 | zookeeper:
20 | replicas: 3
21 | storage:
22 | type: persistent-claim
23 | size: 1Gi
24 | deleteClaim: false
25 | entityOperator:
26 | topicOperator: {}
27 | userOperator: {}
28 |
--------------------------------------------------------------------------------
/configurations/clusters/production-ready-external-routes.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: Kafka
3 | metadata:
4 | name: production-ready
5 | spec:
6 | kafka:
7 | replicas: 3
8 | listeners:
9 | plain: {}
10 | tls: {}
11 | external:
12 | type: route
13 | config:
14 | offsets.topic.replication.factor: 3
15 | transaction.state.log.replication.factor: 3
16 | transaction.state.log.min.isr: 2
17 | storage:
18 | type: persistent-claim
19 | size: 3Gi
20 | deleteClaim: false
21 | zookeeper:
22 | replicas: 3
23 | storage:
24 | type: persistent-claim
25 | size: 1Gi
26 | deleteClaim: false
27 | entityOperator:
28 | topicOperator: {}
29 | userOperator: {}
30 |
--------------------------------------------------------------------------------
/configurations/clusters/production-ready-monitored.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: Kafka
3 | metadata:
4 | name: production-ready
5 | spec:
6 | kafka:
7 | replicas: 3
8 | listeners:
9 | plain: {}
10 | tls: {}
11 | readinessProbe:
12 | initialDelaySeconds: 15
13 | timeoutSeconds: 5
14 | livenessProbe:
15 | initialDelaySeconds: 15
16 | timeoutSeconds: 5
17 | config:
18 | offsets.topic.replication.factor: 3
19 | transaction.state.log.replication.factor: 3
20 | transaction.state.log.min.isr: 2
21 | storage:
22 | type: persistent-claim
23 | size: 3Gi
24 | deleteClaim: false
25 | metrics:
26 | # Inspired by config from Kafka 2.0.0 example rules:
27 | # https://github.com/prometheus/jmx_exporter/blob/master/example_configs/kafka-2_0_0.yml
28 | lowercaseOutputName: true
29 | rules:
30 | # Special cases and very specific rules
31 | - pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
32 | name: kafka_server_$1_$2
33 | type: GAUGE
34 | labels:
35 | clientId: "$3"
36 | topic: "$4"
37 | partition: "$5"
38 | - pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
39 | name: kafka_server_$1_$2
40 | type: GAUGE
41 | labels:
42 | clientId: "$3"
43 | broker: "$4:$5"
44 | # Some percent metrics use MeanRate attribute
45 | # Ex) kafka.server<type=(KafkaRequestHandlerPool), name=(RequestHandlerAvgIdlePercent)><>MeanRate
46 | - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>MeanRate
47 | name: kafka_$1_$2_$3_percent
48 | type: GAUGE
49 | # Generic gauges for percents
50 | - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>Value
51 | name: kafka_$1_$2_$3_percent
52 | type: GAUGE
53 | - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*, (.+)=(.+)><>Value
54 | name: kafka_$1_$2_$3_percent
55 | type: GAUGE
56 | labels:
57 | "$4": "$5"
58 | # Generic per-second counters with 0-2 key/value pairs
59 | - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
60 | name: kafka_$1_$2_$3_total
61 | type: COUNTER
62 | labels:
63 | "$4": "$5"
64 | "$6": "$7"
65 | - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
66 | name: kafka_$1_$2_$3_total
67 | type: COUNTER
68 | labels:
69 | "$4": "$5"
70 | - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
71 | name: kafka_$1_$2_$3_total
72 | type: COUNTER
73 | # Generic gauges with 0-2 key/value pairs
74 | - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
75 | name: kafka_$1_$2_$3
76 | type: GAUGE
77 | labels:
78 | "$4": "$5"
79 | "$6": "$7"
80 | - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
81 | name: kafka_$1_$2_$3
82 | type: GAUGE
83 | labels:
84 | "$4": "$5"
85 | - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
86 | name: kafka_$1_$2_$3
87 | type: GAUGE
88 | # Emulate Prometheus 'Summary' metrics for the exported 'Histogram's.
89 | # Note that these are missing the '_sum' metric!
90 | - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Count
91 | name: kafka_$1_$2_$3_count
92 | type: COUNTER
93 | labels:
94 | "$4": "$5"
95 | "$6": "$7"
96 | - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*), (.+)=(.+)><>(\d+)thPercentile
97 | name: kafka_$1_$2_$3
98 | type: GAUGE
99 | labels:
100 | "$4": "$5"
101 | "$6": "$7"
102 | quantile: "0.$8"
103 | - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Count
104 | name: kafka_$1_$2_$3_count
105 | type: COUNTER
106 | labels:
107 | "$4": "$5"
108 | - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*)><>(\d+)thPercentile
109 | name: kafka_$1_$2_$3
110 | type: GAUGE
111 | labels:
112 | "$4": "$5"
113 | quantile: "0.$6"
114 | - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
115 | name: kafka_$1_$2_$3_count
116 | type: COUNTER
117 | - pattern: kafka.(\w+)<type=(.+), name=(.+)><>(\d+)thPercentile
118 | name: kafka_$1_$2_$3
119 | type: GAUGE
120 | labels:
121 | quantile: "0.$4"
122 | zookeeper:
123 | replicas: 3
124 | readinessProbe:
125 | initialDelaySeconds: 15
126 | timeoutSeconds: 5
127 | livenessProbe:
128 | initialDelaySeconds: 15
129 | timeoutSeconds: 5
130 | storage:
131 | type: persistent-claim
132 | size: 1Gi
133 | deleteClaim: false
134 | metrics:
135 | # Inspired by Zookeeper rules
136 | # https://github.com/prometheus/jmx_exporter/blob/master/example_configs/zookeeper.yaml
137 | lowercaseOutputName: true
138 | rules:
139 | # replicated Zookeeper
140 | - pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+)><>(\\w+)"
141 | name: "zookeeper_$2"
142 | - pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+)><>(\\w+)"
143 | name: "zookeeper_$3"
144 | labels:
145 | replicaId: "$2"
146 | - pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+), name2=(\\w+)><>(\\w+)"
147 | name: "zookeeper_$4"
148 | labels:
149 | replicaId: "$2"
150 | memberType: "$3"
151 | - pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+), name2=(\\w+), name3=(\\w+)><>(\\w+)"
152 | name: "zookeeper_$4_$5"
153 | labels:
154 | replicaId: "$2"
155 | memberType: "$3"
156 | # standalone Zookeeper
157 | - pattern: "org.apache.ZooKeeperService<name0=StandaloneServer_port(\\d+)><>(\\w+)"
158 | name: "zookeeper_$2"
159 | - pattern: "org.apache.ZooKeeperService<name0=StandaloneServer_port(\\d+), name1=(InMemoryDataTree)><>(\\w+)"
160 | name: "zookeeper_$2_$3"
161 | entityOperator:
162 | topicOperator: {}
163 | userOperator: {}
164 |
--------------------------------------------------------------------------------
/configurations/clusters/production-ready-secured.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: Kafka
3 | metadata:
4 | name: production-ready
5 | spec:
6 | kafka:
7 | replicas: 3
8 | listeners:
9 | plain:
10 | authentication:
11 | type: scram-sha-512
12 | tls: {}
13 | config:
14 | offsets.topic.replication.factor: 3
15 | transaction.state.log.replication.factor: 3
16 | transaction.state.log.min.isr: 2
17 | storage:
18 | type: persistent-claim
19 | size: 3Gi
20 | deleteClaim: false
21 | zookeeper:
22 | replicas: 3
23 | storage:
24 | type: persistent-claim
25 | size: 1Gi
26 | deleteClaim: false
27 | entityOperator:
28 | topicOperator: {}
29 | userOperator: {}
30 |
--------------------------------------------------------------------------------
/configurations/clusters/production-ready-target.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: Kafka
3 | metadata:
4 | name: production-ready-target
5 | spec:
6 | kafka:
7 | replicas: 3
8 | listeners:
9 | plain: {}
10 | tls: {}
11 | config:
12 | offsets.topic.replication.factor: 3
13 | transaction.state.log.replication.factor: 3
14 | transaction.state.log.min.isr: 2
15 | storage:
16 | type: persistent-claim
17 | size: 3Gi
18 | deleteClaim: false
19 | zookeeper:
20 | replicas: 3
21 | storage:
22 | type: persistent-claim
23 | size: 1Gi
24 | deleteClaim: false
25 | entityOperator:
26 | topicOperator: {}
27 | userOperator: {}
28 |
--------------------------------------------------------------------------------
/configurations/clusters/production-ready.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: Kafka
3 | metadata:
4 | name: production-ready
5 | spec:
6 | kafka:
7 | replicas: 3
8 | listeners:
9 | plain: {}
10 | tls: {}
11 | config:
12 | offsets.topic.replication.factor: 3
13 | transaction.state.log.replication.factor: 3
14 | transaction.state.log.min.isr: 2
15 | storage:
16 | type: persistent-claim
17 | size: 3Gi
18 | deleteClaim: false
19 | zookeeper:
20 | replicas: 3
21 | storage:
22 | type: persistent-claim
23 | size: 1Gi
24 | deleteClaim: false
25 | entityOperator:
26 | topicOperator: {}
27 | userOperator: {}
28 |
--------------------------------------------------------------------------------
/configurations/clusters/simple-cluster-affinity.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: Kafka
3 | metadata:
4 | name: simple-cluster
5 | spec:
6 | kafka:
7 | replicas: 1
8 | listeners:
9 | plain: {}
10 | tls: {}
11 | config:
12 | offsets.topic.replication.factor: 1
13 | transaction.state.log.replication.factor: 1
14 | transaction.state.log.min.isr: 1
15 | storage:
16 | type: persistent-claim
17 | size: 3Gi
18 | deleteClaim: false
19 | affinity:
20 | nodeAffinity:
21 | requiredDuringSchedulingIgnoredDuringExecution:
22 | nodeSelectorTerms:
23 | - matchExpressions:
24 | - key: kubernetes.io/hostname
25 | operator: In
26 | values:
27 | - node00.example.com
28 | zookeeper:
29 | replicas: 1
30 | storage:
31 | type: persistent-claim
32 | size: 1Gi
33 | deleteClaim: false
34 | entityOperator:
35 | topicOperator: {}
36 | userOperator: {}
37 |
--------------------------------------------------------------------------------
/configurations/clusters/simple-cluster.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: Kafka
3 | metadata:
4 | name: simple-cluster
5 | spec:
6 | kafka:
7 | replicas: 1
8 | listeners:
9 | plain: {}
10 | tls: {}
11 | config:
12 | offsets.topic.replication.factor: 1
13 | transaction.state.log.replication.factor: 1
14 | transaction.state.log.min.isr: 1
15 | storage:
16 | type: ephemeral
17 | zookeeper:
18 | replicas: 1
19 | storage:
20 | type: ephemeral
21 | entityOperator:
22 | topicOperator: {}
23 | userOperator: {}
24 |
--------------------------------------------------------------------------------
/configurations/metrics/grafana.yaml:
--------------------------------------------------------------------------------
1 | # This is not a recommended configuration, and further support should be available
2 | # from the Prometheus and Grafana communities.
3 |
4 | apiVersion: extensions/v1beta1
5 | kind: Deployment
6 | metadata:
7 | name: grafana
8 | spec:
9 | replicas: 1
10 | template:
11 | metadata:
12 | labels:
13 | name: grafana
14 | spec:
15 | containers:
16 | - name: grafana
17 | image: strimzilab/grafana-openshift:latest
18 | imagePullPolicy: IfNotPresent
19 | ports:
20 | - name: grafana
21 | containerPort: 3000
22 | protocol: TCP
23 | volumeMounts:
24 | - name: grafana-data
25 | mountPath: /var/lib/grafana
26 | - name: grafana-logs
27 | mountPath: /var/log/grafana
28 | volumes:
29 | - name: grafana-data
30 | emptyDir: {}
31 | - name: grafana-logs
32 | emptyDir: {}
33 | ---
34 | apiVersion: v1
35 | kind: Service
36 | metadata:
37 | name: grafana
38 | spec:
39 | ports:
40 | - name: grafana
41 | port: 3000
42 | targetPort: 3000
43 | protocol: TCP
44 | selector:
45 | name: grafana
46 | type: ClusterIP
47 |
--------------------------------------------------------------------------------
/configurations/metrics/prometheus.yaml:
--------------------------------------------------------------------------------
1 | # This is not a recommended configuration, and further support should be available
2 | # from the Prometheus and Grafana communities. The main purpose of this file is to
3 | # provide a Prometheus server configuration that is compatible with the Strimzi
4 | # Grafana dashboards.
5 |
6 | apiVersion: rbac.authorization.k8s.io/v1
7 | kind: ClusterRole
8 | metadata:
9 | name: prometheus-server
10 | rules:
11 | - apiGroups: [""]
12 | resources:
13 | - nodes
14 | - nodes/proxy
15 | - services
16 | - endpoints
17 | - pods
18 | verbs: ["get", "list", "watch"]
19 | - apiGroups:
20 | - extensions
21 | resources:
22 | - ingresses
23 | verbs: ["get", "list", "watch"]
24 | - nonResourceURLs: ["/metrics"]
25 | verbs: ["get"]
26 |
27 | ---
28 | apiVersion: v1
29 | kind: ServiceAccount
30 | metadata:
31 | name: prometheus-server
32 |
33 | ---
34 | apiVersion: rbac.authorization.k8s.io/v1
35 | kind: ClusterRoleBinding
36 | metadata:
37 | name: prometheus-server
38 | roleRef:
39 | apiGroup: rbac.authorization.k8s.io
40 | kind: ClusterRole
41 | name: prometheus-server
42 | subjects:
43 | - kind: ServiceAccount
44 | name: prometheus-server
45 | namespace: amq-streams
46 |
47 | ---
48 | apiVersion: extensions/v1beta1
49 | kind: Deployment
50 | metadata:
51 | name: prometheus
52 | spec:
53 | replicas: 1
54 | template:
55 | metadata:
56 | labels:
57 | name: prometheus
58 | spec:
59 | serviceAccount: prometheus-server
60 | containers:
61 | - name: prometheus
62 | image: prom/prometheus:v2.4.0
63 | imagePullPolicy: IfNotPresent
64 | ports:
65 | - name: prometheus
66 | containerPort: 9090
67 | protocol: TCP
68 | volumeMounts:
69 | - mountPath: /prometheus
70 | name: prometheus-data
71 | - mountPath: /etc/prometheus
72 | name: prometheus-config
73 | volumes:
74 | - name: prometheus-data
75 | emptyDir: {}
76 | - name: prometheus-config
77 | configMap:
78 | name: prometheus-config
79 |
80 | ---
81 | apiVersion: v1
82 | kind: ConfigMap
83 | metadata:
84 | name: prometheus-config
85 | data:
86 | prometheus.yml: |
87 | global:
88 | scrape_interval: 10s
89 | scrape_timeout: 10s
90 | evaluation_interval: 10s
91 | alerting:
92 | alertmanagers:
93 | - kubernetes_sd_configs:
94 | - api_server: null
95 | role: pod
96 | namespaces:
97 | names: []
98 | scheme: http
99 | timeout: 10s
100 | relabel_configs:
101 | - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_pod_label_app,
102 | __meta_kubernetes_pod_label_component, __meta_kubernetes_pod_container_port_number]
103 | separator: ;
104 | regex: lightbend;prometheus;alertmanager;[0-9]+
105 | replacement: $1
106 | action: keep
107 | rule_files:
108 | - /etc/config/rules/*.yaml
109 | scrape_configs:
110 | - job_name: kubernetes-cadvisor
111 | honor_labels: true
112 | scrape_interval: 10s
113 | scrape_timeout: 10s
114 | metrics_path: /metrics
115 | scheme: https
116 | kubernetes_sd_configs:
117 | - api_server: null
118 | role: node
119 | namespaces:
120 | names: []
121 | bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
122 | tls_config:
123 | ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
124 | insecure_skip_verify: true
125 | relabel_configs:
126 | - separator: ;
127 | regex: __meta_kubernetes_node_label_(.+)
128 | replacement: $1
129 | action: labelmap
130 | - separator: ;
131 | regex: (.*)
132 | target_label: __address__
133 | replacement: kubernetes.default.svc:443
134 | action: replace
135 | - source_labels: [__meta_kubernetes_node_name]
136 | separator: ;
137 | regex: (.+)
138 | target_label: __metrics_path__
139 | replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
140 | action: replace
141 | - source_labels: [__meta_kubernetes_node_name]
142 | separator: ;
143 | regex: (.*)
144 | target_label: node_name
145 | replacement: $1
146 | action: replace
147 | - source_labels: [__meta_kubernetes_node_address_InternalIP]
148 | separator: ;
149 | regex: (.*)
150 | target_label: node_ip
151 | replacement: $1
152 | action: replace
153 | metric_relabel_configs:
154 | - source_labels: [pod_name]
155 | separator: ;
156 | regex: (.*)
157 | target_label: kubernetes_pod_name
158 | replacement: $1
159 | action: replace
160 | - separator: ;
161 | regex: pod_name
162 | replacement: $1
163 | action: labeldrop
164 | - source_labels: [container_name, __name__]
165 | separator: ;
166 | regex: POD;container_(network).*
167 | target_label: container_name
168 | replacement: $1
169 | action: replace
170 | - source_labels: [container_name]
171 | separator: ;
172 | regex: POD
173 | replacement: $1
174 | action: drop
175 | - source_labels: [container_name]
176 | separator: ;
177 | regex: ^$
178 | replacement: $1
179 | action: drop
180 | - source_labels: [__name__]
181 | separator: ;
182 | regex: container_(network_tcp_usage_total|tasks_state|cpu_usage_seconds_total|memory_failures_total|network_udp_usage_total)
183 | replacement: $1
184 | action: drop
185 | - job_name: kube-state-metrics
186 | honor_labels: true
187 | scrape_interval: 10s
188 | scrape_timeout: 10s
189 | metrics_path: /metrics
190 | scheme: http
191 | kubernetes_sd_configs:
192 | - api_server: null
193 | role: endpoints
194 | namespaces:
195 | names: []
196 | relabel_configs:
197 | - source_labels: [__meta_kubernetes_endpoints_name]
198 | separator: ;
199 | regex: prometheus-kube-state-metrics
200 | replacement: $1
201 | action: keep
202 | - separator: ;
203 | regex: __meta_kubernetes_service_label_(.+)
204 | replacement: $1
205 | action: labelmap
206 | - source_labels: [__meta_kubernetes_namespace]
207 | separator: ;
208 | regex: (.*)
209 | target_label: namespace
210 | replacement: $1
211 | action: replace
212 | - source_labels: [__meta_kubernetes_namespace]
213 | separator: ;
214 | regex: (.*)
215 | target_label: kubernetes_namespace
216 | replacement: $1
217 | action: replace
218 | - source_labels: [__meta_kubernetes_service_name]
219 | separator: ;
220 | regex: (.*)
221 | target_label: kubernetes_name
222 | replacement: $1
223 | action: replace
224 | - source_labels: [__meta_kubernetes_pod_node_name]
225 | separator: ;
226 | regex: (.*)
227 | target_label: node_name
228 | replacement: $1
229 | action: replace
230 | - source_labels: [__meta_kubernetes_pod_host_ip]
231 | separator: ;
232 | regex: (.*)
233 | target_label: node_ip
234 | replacement: $1
235 | action: replace
236 | - job_name: node-exporter
237 | honor_labels: true
238 | scrape_interval: 10s
239 | scrape_timeout: 10s
240 | metrics_path: /metrics
241 | scheme: http
242 | kubernetes_sd_configs:
243 | - api_server: null
244 | role: endpoints
245 | namespaces:
246 | names: []
247 | relabel_configs:
248 | - source_labels: [__meta_kubernetes_endpoints_name]
249 | separator: ;
250 | regex: prometheus-node-exporter
251 | replacement: $1
252 | action: keep
253 | - separator: ;
254 | regex: __meta_kubernetes_service_label_(.+)
255 | replacement: $1
256 | action: labelmap
257 | - source_labels: [__meta_kubernetes_namespace]
258 | separator: ;
259 | regex: (.*)
260 | target_label: namespace
261 | replacement: $1
262 | action: replace
263 | - source_labels: [__meta_kubernetes_namespace]
264 | separator: ;
265 | regex: (.*)
266 | target_label: kubernetes_namespace
267 | replacement: $1
268 | action: replace
269 | - source_labels: [__meta_kubernetes_service_name]
270 | separator: ;
271 | regex: (.*)
272 | target_label: kubernetes_name
273 | replacement: $1
274 | action: replace
275 | - source_labels: [__meta_kubernetes_pod_node_name]
276 | separator: ;
277 | regex: (.*)
278 | target_label: node_name
279 | replacement: $1
280 | action: replace
281 | - source_labels: [__meta_kubernetes_pod_host_ip]
282 | separator: ;
283 | regex: (.*)
284 | target_label: node_ip
285 | replacement: $1
286 | action: replace
287 | - job_name: kubernetes-services
288 | honor_labels: true
289 | scrape_interval: 10s
290 | scrape_timeout: 10s
291 | metrics_path: /metrics
292 | scheme: http
293 | kubernetes_sd_configs:
294 | - api_server: null
295 | role: endpoints
296 | namespaces:
297 | names: []
298 | relabel_configs:
299 | - source_labels: [__meta_kubernetes_endpoints_name]
300 | separator: ;
301 | regex: prometheus-node-exporter
302 | replacement: $1
303 | action: drop
304 | - source_labels: [__meta_kubernetes_endpoints_name]
305 | separator: ;
306 | regex: prometheus-kube-state-metrics
307 | replacement: $1
308 | action: drop
309 | - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
310 | separator: ;
311 | regex: "true"
312 | replacement: $1
313 | action: keep
314 | - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
315 | separator: ;
316 | regex: (https?)
317 | target_label: __scheme__
318 | replacement: $1
319 | action: replace
320 | - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
321 | separator: ;
322 | regex: (.+)
323 | target_label: __metrics_path__
324 | replacement: $1
325 | action: replace
326 | - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
327 | separator: ;
328 | regex: (.+)(?::\d+);(\d+)
329 | target_label: __address__
330 | replacement: $1:$2
331 | action: replace
332 | - separator: ;
333 | regex: __meta_kubernetes_service_label_(.+)
334 | replacement: $1
335 | action: labelmap
336 | - source_labels: [__meta_kubernetes_namespace]
337 | separator: ;
338 | regex: (.*)
339 | target_label: namespace
340 | replacement: $1
341 | action: replace
342 | - source_labels: [__meta_kubernetes_namespace]
343 | separator: ;
344 | regex: (.*)
345 | target_label: kubernetes_namespace
346 | replacement: $1
347 | action: replace
348 | - source_labels: [__meta_kubernetes_service_name]
349 | separator: ;
350 | regex: (.*)
351 | target_label: kubernetes_name
352 | replacement: $1
353 | action: replace
354 | - source_labels: [__meta_kubernetes_pod_node_name]
355 | separator: ;
356 | regex: (.*)
357 | target_label: node_name
358 | replacement: $1
359 | action: replace
360 | - source_labels: [__meta_kubernetes_pod_host_ip]
361 | separator: ;
362 | regex: (.*)
363 | target_label: node_ip
364 | replacement: $1
365 | action: replace
366 | - separator: ;
367 | regex: __meta_kubernetes_pod_label_(.+)
368 | replacement: $1
369 | action: labelmap
370 | - source_labels: [__meta_kubernetes_pod_name]
371 | separator: ;
372 | regex: (.*)
373 | target_label: kubernetes_pod_name
374 | replacement: $1
375 | action: replace
376 | - job_name: kubernetes-pods
377 | honor_labels: true
378 | scrape_interval: 10s
379 | scrape_timeout: 10s
380 | metrics_path: /metrics
381 | scheme: http
382 | kubernetes_sd_configs:
383 | - api_server: null
384 | role: pod
385 | namespaces:
386 | names: []
387 | relabel_configs:
388 | - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
389 | separator: ;
390 | regex: "true"
391 | replacement: $1
392 | action: keep
393 | - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
394 | separator: ;
395 | regex: (.+)
396 | target_label: __metrics_path__
397 | replacement: $1
398 | action: replace
399 | - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_container_port_name]
400 | separator: ;
401 | regex: ^(.+;.*)|(;.*metrics)$
402 | replacement: $1
403 | action: keep
404 | - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
405 | separator: ;
406 | regex: (.+):(?:\d+);(\d+)
407 | target_label: __address__
408 | replacement: ${1}:${2}
409 | action: replace
410 | - separator: ;
411 | regex: __meta_kubernetes_pod_label_(.+)
412 | replacement: $1
413 | action: labelmap
414 | - source_labels: [__meta_kubernetes_namespace]
415 | separator: ;
416 | regex: (.*)
417 | target_label: namespace
418 | replacement: $1
419 | action: replace
420 | - source_labels: [__meta_kubernetes_pod_name]
421 | separator: ;
422 | regex: (.*)
423 | target_label: kubernetes_pod_name
424 | replacement: $1
425 | action: replace
426 | - source_labels: [__meta_kubernetes_pod_node_name]
427 | separator: ;
428 | regex: (.*)
429 | target_label: node_name
430 | replacement: $1
431 | action: replace
432 | - source_labels: [__meta_kubernetes_pod_host_ip]
433 | separator: ;
434 | regex: (.*)
435 | target_label: node_ip
436 | replacement: $1
437 | action: replace
438 |
439 | ---
440 | apiVersion: v1
441 | kind: Service
442 | metadata:
443 | name: prometheus
444 | spec:
445 | ports:
446 | - name: prometheus
447 | port: 9090
448 | targetPort: 9090
449 | protocol: TCP
450 | selector:
451 | name: prometheus
452 | type: ClusterIP
453 |
--------------------------------------------------------------------------------
/configurations/topics/lines-10-target.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: KafkaTopic
3 | metadata:
4 | name: lines-target
5 | labels:
6 | strimzi.io/cluster: production-ready-target
7 | spec:
8 | topicName: lines
9 | partitions: 10
10 | replicas: 2
11 | config:
12 | retention.ms: 14400000
13 | segment.bytes: 1073741824
14 |
--------------------------------------------------------------------------------
/configurations/topics/lines-10.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: KafkaTopic
3 | metadata:
4 | name: lines
5 | labels:
6 | strimzi.io/cluster: production-ready
7 | spec:
8 | partitions: 10
9 | replicas: 2
10 | config:
11 | retention.ms: 14400000
12 | segment.bytes: 1073741824
13 |
--------------------------------------------------------------------------------
/configurations/topics/lines.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: KafkaTopic
3 | metadata:
4 | name: lines
5 | labels:
6 | strimzi.io/cluster: production-ready
7 | spec:
8 | partitions: 2
9 | replicas: 2
10 | config:
11 | retention.ms: 7200000
12 | segment.bytes: 1073741824
13 |
--------------------------------------------------------------------------------
/configurations/users/secure-topic-reader.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: KafkaUser
3 | metadata:
4 | name: secure-topic-reader
5 | labels:
6 | strimzi.io/cluster: production-ready
7 | spec:
8 | authentication:
9 | type: scram-sha-512
10 | authorization:
11 | type: simple
12 | acls:
13 | # Example consumer Acls for topic lines using consumer group secure-group
14 | - resource:
15 | type: topic
16 | name: lines
17 | patternType: literal
18 | operation: Read
19 | host: "*"
20 | - resource:
21 | type: topic
22 | name: lines
23 | patternType: literal
24 | operation: Describe
25 | host: "*"
26 | - resource:
27 | type: group
28 | name: secure-group
29 | patternType: literal
30 | operation: Read
31 | host: "*"
32 |
--------------------------------------------------------------------------------
/configurations/users/secure-topic-writer.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kafka.strimzi.io/v1alpha1
2 | kind: KafkaUser
3 | metadata:
4 | name: secure-topic-writer
5 | labels:
6 | strimzi.io/cluster: production-ready
7 | spec:
8 | authentication:
9 | type: scram-sha-512
10 | authorization:
11 | type: simple
12 | acls:
13 |     # Example producer Acls for topic lines
14 | - resource:
15 | type: topic
16 | name: lines
17 | patternType: literal
18 | operation: Write
19 | host: "*"
20 |
--------------------------------------------------------------------------------
/configurations/users/strimzi-admin.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: rbac.authorization.k8s.io/v1beta1
2 | kind: ClusterRole
3 | metadata:
4 | name: strimzi-admin
5 | rules:
6 | - apiGroups:
7 | - "kafka.strimzi.io"
8 | resources:
9 | - kafkas
10 | - kafkaconnects
11 | - kafkaconnects2is
12 | - kafkamirrormakers
13 | - kafkausers
14 | - kafkatopics
15 | verbs:
16 | - get
17 | - list
18 | - watch
19 | - create
20 | - delete
21 | - patch
22 | - update
23 |
--------------------------------------------------------------------------------
/labs/0-to-60.adoc:
--------------------------------------------------------------------------------
1 | == AMQ Streams on OpenShift from 0 to 60
2 |
3 | In this module you will learn how to install AMQ Streams on OpenShift.
4 |
5 | === AMQ Streams installation files
6 |
7 | All the components necessary for installing AMQ Streams have been downloaded and extracted locally.
8 | Once you log into the workstation machine, you should find a directory named `kafka`.
9 | Let's navigate to the folder.
10 |
11 | ----
12 | cd kafka
13 | ----
14 |
15 | This directory contains three items:
16 |
17 | * the original zip file named `install_and_examples.zip`
18 | * the contents of the zip file, as two folders: `install` and `examples`
19 |
20 | [NOTE]
21 | .Where does the zip file come from?
22 | The installation files for AMQ Streams can be downloaded from link:https://access.redhat.com/node/3596931/423/1[this location].
23 | You can find this information in the reference documentation https://access.redhat.com/documentation/en-us/red_hat_amq/7.2/html-single/using_amq_streams_on_openshift_container_platform/index#downloads-str[here].
24 | The machines provisioned for the lab have already downloaded and extracted this file.
25 |
26 | Verify that this is the case by listing the contents of the directory.
27 | Now we can start the lab.
28 |
29 | === Creating a new project for running the cluster operator
30 |
31 | Log in as the administrator with the password supplied by the instructor.
32 |
33 | oc login -u admin master00.example.com
34 |
35 | A project named `amq-streams` should already exist and should be current for your user.
36 | If it does not exist, you can create it.
37 |
38 | oc new-project amq-streams
39 |
40 | If it does already exist, make sure that it is the current project you are working in.
41 |
42 | oc project amq-streams
43 |
44 | === Configuring the cluster operator and installing it
45 |
46 | The configuration files for the cluster operator are available in the `install/cluster-operator` folder.
47 | Let's take a quick look.
48 |
49 | ----
50 | $ ls install/cluster-operator
51 | 010-ServiceAccount-strimzi-cluster-operator.yaml
52 | 020-ClusterRole-strimzi-cluster-operator-role.yaml
53 | 020-RoleBinding-strimzi-cluster-operator.yaml
54 | 021-ClusterRole-strimzi-cluster-operator-role.yaml
55 | 021-ClusterRoleBinding-strimzi-cluster-operator.yaml
56 | 030-ClusterRole-strimzi-kafka-broker.yaml
57 | 030-ClusterRoleBinding-strimzi-cluster-operator-kafka-broker-delegation.yaml
58 | 031-ClusterRole-strimzi-entity-operator.yaml
59 | 031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml
60 | 032-ClusterRole-strimzi-topic-operator.yaml
61 | 032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml
62 | 040-Crd-kafka.yaml
63 | 041-Crd-kafkaconnect.yaml
64 | 042-Crd-kafkaconnects2i.yaml
65 | 043-Crd-kafkatopic.yaml
66 | 044-Crd-kafkauser.yaml
67 | 045-Crd-kafkamirrormaker.yaml
68 | 050-Deployment-strimzi-cluster-operator.yaml
69 | ----
70 |
71 | We will not go into detail about the structure of these files; for now, it is important to understand that, taken together, they form the complete set of resources required to set up AMQ Streams on an OpenShift cluster.
72 | The files include:
73 |
74 | * service account
75 | * cluster roles and bindings
76 | * a set of CRDs (Custom Resource Definitions) for the objects managed by the AMQ Streams cluster operator
77 | * the cluster operator Deployment
78 |
79 | Before installing the cluster operator, we need to configure the namespace it operates in.
80 | We will do this by modifying the `*RoleBinding*.yaml` files to point to the newly created project `amq-streams`.
81 | You can do this by editing all of the files at once with `sed`.
82 |
83 | ----
84 | sed -i 's/namespace: .*/namespace: amq-streams/' install/cluster-operator/*RoleBinding*.yaml
85 | ----
86 |
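To preview what this substitution does, you can run the same `sed` expression on a sample line (the input below is just an illustrative stand-in for a line from one of the RoleBinding files):

```shell
# Replace everything after "namespace: " with "amq-streams";
# the line's leading indentation is preserved.
echo '    namespace: myproject' | sed 's/namespace: .*/namespace: amq-streams/'
# prints "    namespace: amq-streams"
```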
87 | Let's take a look at the result in one of the modified files:
88 |
89 | ----
90 | more install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml
91 | ----
92 |
93 | The output should look like this:
94 |
95 | ----
96 | apiVersion: rbac.authorization.k8s.io/v1beta1
97 | kind: RoleBinding
98 | metadata:
99 | name: strimzi-cluster-operator
100 | labels:
101 | app: strimzi
102 | subjects:
103 | - kind: ServiceAccount
104 | name: strimzi-cluster-operator
105 | namespace: amq-streams
106 | roleRef:
107 | kind: ClusterRole
108 | name: strimzi-cluster-operator-namespaced
109 | apiGroup: rbac.authorization.k8s.io
110 | ----
111 |
112 | Notice the `amq-streams` value configured for the `subjects.namespace` property.
113 | You can check the other `*RoleBinding*.yaml` files for similar changes.
114 |
115 | Now that the configuration files have been set, we can proceed with installing the cluster operator.
116 |
117 | === Installing the cluster operator
118 |
119 | Once the configuration files are changed, you can install the cluster operator:
120 |
121 | ----
122 | oc apply -f install/cluster-operator
123 | ----
124 |
125 | To see the result, log into the OpenShift console as the `admin` user.
126 | Navigate to the `amq-streams` project and view the current deployments.
127 | You should see the `strimzi-cluster-operator` running.
128 | You have just deployed the cluster operator.
129 |
130 | === Creating an Apache Kafka cluster
131 |
132 | It is time to start an Apache Kafka cluster.
133 | We will now create the most basic cluster possible.
134 | The configuration file is https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/simple-cluster.yaml[here].
135 | You can open it; it looks like this:
136 |
137 | ----
138 | apiVersion: kafka.strimzi.io/v1alpha1
139 | kind: Kafka
140 | metadata:
141 | name: simple-cluster
142 | spec:
143 | kafka:
144 | replicas: 1
145 | listeners:
146 | plain: {}
147 | tls: {}
148 | config:
149 | offsets.topic.replication.factor: 1
150 | transaction.state.log.replication.factor: 1
151 | transaction.state.log.min.isr: 1
152 | storage:
153 | type: ephemeral
154 | zookeeper:
155 | replicas: 1
156 | storage:
157 | type: ephemeral
158 | entityOperator:
159 | topicOperator: {}
160 | userOperator: {}
161 | ----
162 |
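This configuration is deliberately minimal: a single broker, a single Zookeeper node, and ephemeral storage, which means all data is lost when the pods restart. For comparison, the `production-ready.yaml` cluster used later in the workshop mainly differs in these `kafka` fields (excerpt):

```yaml
kafka:
  replicas: 3                            # three brokers instead of one
  config:
    offsets.topic.replication.factor: 3  # internal topics survive a broker loss
    transaction.state.log.replication.factor: 3
    transaction.state.log.min.isr: 2
  storage:
    type: persistent-claim               # data survives pod restarts
    size: 3Gi
    deleteClaim: false
```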
163 | Now let's create the cluster by deploying this new custom resource:
164 | ----
165 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/simple-cluster.yaml
166 | ----
167 |
168 | Again, follow the deployment from the OpenShift console.
169 | You should see three separate deployments:
170 |
171 | * `simple-cluster-zookeeper` - a stateful set containing the Zookeeper ensemble
172 | * `simple-cluster-kafka` - a stateful set containing the Kafka cluster
173 | * `simple-cluster-entity-operator` - a deployment containing the entity operator for managing topics and users
174 |
175 | === Testing the deployment
176 |
177 | Now, let's quickly test that the deployed Kafka cluster works.
178 | Let's log into one of the cluster pods:
179 |
180 | ----
181 | $ oc rsh simple-cluster-kafka-0
182 | ----
183 |
184 | Next, let's start a producer:
185 |
186 | ----
187 | $ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic
188 | ----
189 |
190 | Once the console producer is started, enter a few values:
191 |
192 | ----
193 | > test
194 | > test2
195 | ----
196 |
197 | (Do not worry if you see the warnings below.
198 | They are expected: the topic does not exist yet, and the broker will auto-create `test-topic`.
199 | The `test` message is still received correctly by Kafka.)
200 |
201 | ----
202 | OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
203 | >test
204 | [2019-02-05 15:32:46,828] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
205 | [2019-02-05 15:32:46,939] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 3 : {test-topic=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
206 | >test2
207 | ----
208 |
209 | Now let's open a second session into the same cluster pod from a separate terminal (open another `ssh` session to the workstation):
210 |
211 | ----
212 | $ oc rsh simple-cluster-kafka-0
213 | ----
214 |
215 | And let's start a consumer:
216 |
217 | ----
218 | bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --from-beginning
219 | ----
220 |
221 | Once the consumer is started, you should see the previously sent messages in the output.
222 | If you return to the terminal where the console producer is running and send new messages, they will appear in the consumer terminal.
223 |
224 | Now let's stop both the producer and the consumer with `CTRL-C`, and then exit both container sessions.
225 |
226 | ----
227 | exit
228 | ----
229 |
230 | === Kafka clusters and Kafka resources
231 |
232 | The Kafka resource we just created is a representation of the running Kafka cluster.
233 | You can use it to inspect and modify the current cluster configuration.
234 | For example:
235 |
236 | ----
237 | oc get kafka simple-cluster -o yaml
238 | ----
239 |
240 | will yield a detailed representation of the resource on the cluster:
241 |
242 | ----
243 | apiVersion: kafka.strimzi.io/v1alpha1
244 | kind: Kafka
245 | metadata:
246 | annotations:
247 | kubectl.kubernetes.io/last-applied-configuration: |
248 | {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"Kafka","metadata":{"annotations":{},"name":"simple-cluster","namespace":"amq-streams"},"spec":{"entityOperator":{"topicOperator":{},"userOperator":{}},"kafka":{"config":{"offsets.topic.replication.factor":1,"transaction.state.log.min.isr":1,"transaction.state.log.replication.factor":1},"listeners":{"plain":{},"tls":{}},"replicas":1,"storage":{"type":"ephemeral"}},"zookeeper":{"replicas":1,"storage":{"type":"ephemeral"}}}}
249 | creationTimestamp: 2019-02-05T15:27:11Z
250 | generation: 1
251 | name: simple-cluster
252 | namespace: amq-streams
253 | resourceVersion: "136009"
254 | selfLink: /apis/kafka.strimzi.io/v1alpha1/namespaces/amq-streams/kafkas/simple-cluster
255 | uid: 81e3ddbe-295a-11e9-bbf1-2cabcdef0010
256 | spec:
257 | entityOperator:
258 | topicOperator: {}
259 | userOperator: {}
260 | kafka:
261 | config:
262 | offsets.topic.replication.factor: 1
263 | transaction.state.log.min.isr: 1
264 | transaction.state.log.replication.factor: 1
265 | listeners:
266 | plain: {}
267 | tls: {}
268 | replicas: 1
269 | storage:
270 | type: ephemeral
271 | zookeeper:
272 | replicas: 1
273 | storage:
274 | type: ephemeral
275 | ----
276 |
277 | Finally, let's delete the Kafka cluster.
278 | We will replace it with a configuration that is more appropriate for real world use cases.
279 |
280 | ----
281 | oc delete kafka simple-cluster
282 | ----
283 |
284 | === Conclusion
285 |
286 | In this workshop module, you have:
287 |
288 | * Configured and installed AMQ Streams
289 | * Deployed a simple Kafka cluster
290 | * Run a producer and consumer to validate the settings
291 |
--------------------------------------------------------------------------------
/labs/README.adoc:
--------------------------------------------------------------------------------
1 | # What is this?
2 |
3 | This is the lab section of the AMQ Streams workshop.
4 |
5 | Before starting, let's get familiar with the link:./environment.adoc[lab environment].
6 |
7 | # Outline
8 |
9 | . link:./0-to-60.adoc[AMQ Streams on OpenShift from 0 to 60]
10 |
11 | . link:./production-ready-topologies.adoc[Production-ready topologies: sizing and persistence]
12 |
13 | . link:./topic-management.adoc[Topic management]
14 |
15 | . link:./understanding-the-application-ecosystem.adoc[Understanding the Kafka application ecosystem]
16 |
17 | . link:./clients-within-outside-OCP.adoc[Clients: within OCP and outside OCP]
18 |
19 | . link:./security.adoc[Security]
20 |
21 | . link:./watching-multiple-namespaces.adoc[Watching multiple namespaces]
22 |
23 | . link:./mirror-maker.adoc[Replicating Data with MirrorMaker]
24 |
25 | . link:./kafka-connect.adoc[Running KafkaConnect applications]
26 |
27 | . link:./management-monitoring.adoc[Management and monitoring]
28 |
--------------------------------------------------------------------------------
/labs/clients-within-outside-OCP.adoc:
--------------------------------------------------------------------------------
1 | == Connecting from outside OCP
2 |
3 | As you have seen, a Kafka cluster deployed in OpenShift can be used by other applications deployed in the same OpenShift instance.
4 | This is the primary use case.
5 |
6 | === Configuring external routes
7 |
8 | In certain scenarios, a Kafka cluster deployed in OpenShift may need to be accessed from outside the cluster.
9 | Let's see how that can be enabled.
10 |
11 | First, we need to reconfigure the cluster with an `external` listener.
12 |
13 | ----
14 | apiVersion: kafka.strimzi.io/v1alpha1
15 | kind: Kafka
16 | metadata:
17 |   name: production-ready
18 | spec:
19 |   kafka:
20 |     replicas: 3
21 |     listeners:
22 |       plain: {}
23 |       tls: {}
24 |       external:
25 |         type: route
26 |     config:
27 |       offsets.topic.replication.factor: 3
28 |       transaction.state.log.replication.factor: 3
29 |       transaction.state.log.min.isr: 2
30 |     storage:
31 |       type: persistent-claim
32 |       size: 3Gi
33 |       deleteClaim: false
34 |   zookeeper:
35 |     replicas: 3
36 |     storage:
37 |       type: persistent-claim
38 |       size: 1Gi
39 |       deleteClaim: false
40 |   entityOperator:
41 |     topicOperator: {}
42 |     userOperator: {}
43 | ----
44 |
45 | Now let's apply this configuration to the running cluster.
46 |
47 | ----
48 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready-external-routes.yaml
49 | ----
50 |
51 | The Cluster Operator starts a rolling update of the Kafka cluster, restarting the brokers one by one to apply the new configuration and expose the cluster outside of OpenShift.
52 | Now you can inspect the existing services.
53 | Notice that each of the brokers in the cluster has a route and a new bootstrap service has been created for external connections.
54 |
55 | ----
56 | oc get routes
57 | ----
58 |
59 | === Connecting external clients
60 |
61 | To interact with the brokers, external clients must use TLS.
62 | First, we need to extract the cluster's CA certificate.

63 | ----
64 | oc extract secret/production-ready-cluster-ca-cert --keys=ca.crt --to=- >certificate.crt
65 | ----
66 |
67 | Then, we need to install it into a Java keystore.
68 |
69 | ----
70 | keytool -import -trustcacerts -alias root -file certificate.crt -keystore keystore.jks -storepass password -noprompt
71 | ----
72 |
73 | We can now run our producer and consumer applications by using this certificate.
74 |
75 | Let's download the two JARs.
76 |
77 | ----
78 | wget -O log-consumer.jar https://github.com/RedHatWorkshops/workshop-amq-streams/blob/master/bin/log-consumer.jar?raw=true
79 | wget -O timer-producer.jar https://github.com/RedHatWorkshops/workshop-amq-streams/blob/master/bin/timer-producer.jar?raw=true
80 | ----
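Before launching them, it helps to double-check the external bootstrap address, since it embeds your lab GUID. A quick sketch, using the example GUID `a2b3` from the environment notes (substitute your own):

```shell
# Assemble the external bootstrap address for the route-based listener.
# a2b3 is the example GUID; replace it with your lab GUID.
GUID=a2b3
BOOTSTRAP="production-ready-kafka-bootstrap-amq-streams.apps-${GUID}.generic.opentlc.com:443"
echo "$BOOTSTRAP"
```

The printed address is the value to pass as the brokers option in the commands below.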
81 |
82 | Let's launch the two applications with new configuration settings (replace with your workstation GUID):
83 |
84 | ----
85 | java -jar log-consumer.jar \
86 | --camel.component.kafka.configuration.brokers=production-ready-kafka-bootstrap-amq-streams.apps-.generic.opentlc.com:443 \
87 | --camel.component.kafka.configuration.security-protocol=SSL \
88 | --camel.component.kafka.configuration.ssl-truststore-location=keystore.jks \
89 | --camel.component.kafka.configuration.ssl-truststore-password=password
90 | ----
91 |
92 | ----
93 | java -jar timer-producer.jar \
94 | --camel.component.kafka.configuration.brokers=production-ready-kafka-bootstrap-amq-streams.apps-.generic.opentlc.com:443 \
95 | --camel.component.kafka.configuration.security-protocol=SSL \
96 | --camel.component.kafka.configuration.ssl-truststore-location=keystore.jks \
97 | --camel.component.kafka.configuration.ssl-truststore-password=password --server.port=0
98 | ----
99 |
100 | After observing how the two clients are exchanging messages from outside the OpenShift cluster, exit both applications via `CTRL-C`.
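The same connection settings apply to any Kafka client, not only these two applications. Expressed as a generic Kafka client properties file (a sketch; the file name `client-ssl.properties` is our own choice), they look like this:

----
bootstrap.servers=production-ready-kafka-bootstrap-amq-streams.apps-.generic.opentlc.com:443
security.protocol=SSL
ssl.truststore.location=keystore.jks
ssl.truststore.password=password
----

Such a file can be passed to the standard console tools, for example via `kafka-console-consumer.sh --consumer.config client-ssl.properties`.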
--------------------------------------------------------------------------------
/labs/environment.adoc:
--------------------------------------------------------------------------------
1 | == Environment
2 |
3 | === Lab environment
4 |
5 | The lab environment provides a virtualized setup for developing and running Kafka applications on OpenShift.
6 | It consists of:
7 |
8 | * an OpenShift cluster with one master node and three worker nodes;
9 | * a workstation;
10 |
11 | During the lab you will interact with the OpenShift cluster via CLI from the workstation and with the OpenShift web console from the browser available on your station.
12 | Make note of the URLs that you will use during the lab.
13 |
14 | |===
15 | | Machine | URL | Access
16 | | Workstation | `workstation-.rhpds.opentlc.com` | SSH (port 22)
17 | | Master (OpenShift console) | `master00-.generic.opentlc.com` | HTTPS (port 8443)
18 | |===
19 |
20 | Every lab assumes that you have access to the workstation via SSH for CLI interaction and to the OpenShift web console.
21 |
22 | Before you start any lab, make sure you are logged into the workstation via SSH with the key that has been provided by the instructors.
23 |
24 | ssh -i cloud-user@workstation-.rhpds.opentlc.com
25 |
26 | For OpenShift access, the lab provides an `admin` user.
27 | It should be used for logging in both via CLI and web console.
28 | The password will be provided by the instructors.
29 |
30 | [NOTE]
31 | .Remember your GUID
32 | At the start of the lab you have been provided with a GUID, which uniquely identifies your lab environment.
33 | In the following instructions, replace the `` placeholder with your GUID.
34 | For example, if your GUID is `a2b3`, the URL `workstation-.example.com` becomes `workstation-a2b3.example.com`.
35 |
36 | Now, let's start the lab.
37 |
--------------------------------------------------------------------------------
/labs/kafka-connect.adoc:
--------------------------------------------------------------------------------
1 | === Kafka Connect
2 |
3 | We will start by deploying a KafkaConnect cluster.
4 |
5 | The configuration file for creating a KafkaConnect cluster named `connect-cluster` is https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/kafka-connect.yaml[here].
6 | Now let's apply it.
7 |
8 | ----
9 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/kafka-connect.yaml
10 | ----
11 |
12 | Watch for the creation of the KafkaConnect cluster.
13 | Besides the cluster deployment, a service named `connect-cluster-connect-api` is created.
14 | This is the KafkaConnect API that you can use to run tasks.
15 |
16 | By default, AMQ Streams includes support for a file sink and a file source connector.
17 | We will deploy a file sink that ingests the contents of the `lines` topic that the previous applications worked with into a file.
18 |
19 | Since the `connect-cluster-connect-api` service is only accessible from within the OpenShift cluster, we will invoke the RESTful API as follows.
20 |
21 | First, we will log into one of the Kafka machines:
22 |
23 | ----
24 | oc rsh production-ready-kafka-0
25 | ----
26 |
27 | Then, we will invoke the RESTful API of the deployed cluster:
28 |
29 | ----
30 | curl -X POST -H "Content-Type: application/json" --data '{"name": "local-file-sink", "config": {"connector.class":"FileStreamSinkConnector", "tasks.max":"1", "file":"/tmp/test.sink.txt", "topics":"lines", "value.converter" : "org.apache.kafka.connect.storage.StringConverter", "value.converter.schemas.enable" : "false", "key.converter" : "org.apache.kafka.connect.storage.StringConverter", "key.converter.schemas.enable" : "false"}}' http://connect-cluster-connect-api.amq-streams.svc:8083/connectors
31 | ----
32 |
33 | We will expect a response similar to:
34 |
35 | ----
36 | {"name":"local-file-sink","config":{"connector.class":"FileStreamSinkConnector","tasks.max":"1","file":"/tmp/test.sink.txt","topics":"lines","value.converter.schemas.enable":"false","value.converter":"org.apache.kafka.connect.storage.StringConverter","key.converter":"org.apache.kafka.connect.storage.StringConverter","key.converter.schemas.enable":"false","name":"local-file-sink"},"tasks":[{"connector":"local-file-sink","task":0}],"type":null}
37 | ----
38 |
39 | Now the job `local-file-sink` is configured to ingest data from the `lines` topic in our cluster.
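The inline JSON above is easy to get wrong; writing it to a file first makes it easier to read and edit. A sketch (the file name `connector.json` is our own choice):

```shell
# Write the connector configuration to a file instead of passing it inline.
cat > connector.json <<'EOF'
{
  "name": "local-file-sink",
  "config": {
    "connector.class": "FileStreamSinkConnector",
    "tasks.max": "1",
    "file": "/tmp/test.sink.txt",
    "topics": "lines",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter.schemas.enable": "false",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter.schemas.enable": "false"
  }
}
EOF

# From inside the pod, the same POST then becomes:
# curl -X POST -H "Content-Type: application/json" --data @connector.json \
#   http://connect-cluster-connect-api.amq-streams.svc:8083/connectors
grep -c "local-file-sink" connector.json
```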
40 |
41 | Let's check that the job works.
42 | Since the job will ingest data into a local file, we need to log into the KafkaConnect cluster pod.
43 |
44 | ----
45 | oc get pods | grep connect-cluster
46 | ----
47 |
48 | The output should look similar to:
49 |
50 | ----
51 | connect-cluster-connect-5f8d958d8d-78hvz 1/1 Running 5 17h
52 | ----
53 |
54 | Log into the pod:
55 |
56 | ----
57 | oc rsh connect-cluster-connect-5f8d958d8d-78hvz
58 | ----
59 |
60 | Now let's list the contents of the file:
61 |
62 | ----
63 | sh-4.2$ more /tmp/test.sink.txt
64 | ----
65 |
66 | The output should be the dates emitted by the `timer-producer`, in String format, e.g.:
67 |
68 | ----
69 | Message 72 at Tue Feb 05 17:01:45 UTC 2019
70 | Message 73 at Tue Feb 05 17:01:50 UTC 2019
71 | Message 74 at Tue Feb 05 17:01:55 UTC 2019
72 | Message 75 at Tue Feb 05 17:02:00 UTC 2019
73 | Message 76 at Tue Feb 05 17:02:05 UTC 2019
74 | Message 77 at Tue Feb 05 17:02:10 UTC 2019
75 | Message 78 at Tue Feb 05 17:02:15 UTC 2019
76 | Message 79 at Tue Feb 05 17:02:20 UTC 2019
77 | Message 80 at Tue Feb 05 17:02:25 UTC 2019
78 | Message 81 at Tue Feb 05 17:02:30 UTC 2019
79 | ----
80 |
--------------------------------------------------------------------------------
/labs/management-monitoring.adoc:
--------------------------------------------------------------------------------
1 | == Management and monitoring
2 |
3 | This lab will be focused on:
4 |
5 | * Instrumenting a cluster to support metrics
6 | * Deploying Prometheus and Grafana
7 | * Importing prebuilt Grafana dashboard definitions for Kafka and ZooKeeper.
8 |
9 | While OpenShift uses Grafana and Prometheus internally, we need to use separate Prometheus and Grafana instances for our applications.
10 | For this exercise, we will install our own instance of Prometheus and Grafana for Kafka monitoring in the same namespace as the Kafka cluster.
11 | In this case it is 'amq-streams'.
12 |
13 |
14 | === Instrumenting the Kafka cluster
15 |
16 | We will reconfigure the `production-ready` cluster to advertise metrics.
17 |
18 | ----
19 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready-monitored.yaml
20 | ----
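The monitored variant differs from the plain `production-ready` definition mainly by a `metrics` section under `spec.kafka` (and `spec.zookeeper`), which configures the Prometheus JMX exporter. A sketch of its shape; the actual rule set in the linked file is much longer, and the single pattern shown here is illustrative:

----
spec:
  kafka:
    # Prometheus JMX exporter configuration; Strimzi exposes the scraped
    # metrics on port 9404 of each broker pod.
    metrics:
      lowercaseOutputName: true
      rules:
      - pattern: "kafka.server<type=(.+), name=(.+)><>Value"
        name: "kafka_server_$1_$2"
----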
21 |
22 | === Installing Prometheus
23 |
24 | To install Prometheus, we follow the instructions from the Strimzi documentation: https://strimzi.io/docs/0.11.1/#kafka_metrics_configuration
25 |
26 | ----
27 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/metrics/prometheus.yaml
28 | ----
29 |
30 | Then we create a route for the Prometheus service.
31 | Make sure your project is set to 'amq-streams'.
32 |
33 | ----
34 | oc expose svc prometheus
35 | ----
36 |
37 | === Install Grafana
38 |
39 | ----
40 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/metrics/grafana.yaml
41 | ----
42 |
43 | Then we create a route for the Grafana service.
44 | Make sure your project is set to 'amq-streams'.
45 |
46 | ----
47 | oc expose svc grafana
48 | ----
49 |
50 | Now, let's configure the Grafana dashboard.
51 | Get the route:
52 |
53 | ----
54 | oc get routes
55 | ----
56 |
57 | It should be similar to: `http://grafana-amq-streams.apps-.generic.opentlc.com`.
58 |
59 | Login as `admin`/`admin` to the Grafana dashboard on your browser with the URL from above.
60 | You can also change the admin password if you'd like.
61 |
62 | === Setting up Grafana to monitor Kafka
63 |
64 | Create the Prometheus datasource.
65 | Go to the *Gear* icon on the left and then pick *Datasources*.
66 |
67 | Choose Prometheus as the data source type and then fill in the following values:
68 |
69 | name: prometheus
70 | url: `http://prometheus-amq-streams.apps-.generic.opentlc.com`
71 |
72 | Save and Test.
73 | You should see a 'data source tested' message.
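If you prefer configuration as code over clicking through the UI, Grafana also accepts datasource provisioning files. A sketch of the equivalent definition; this is illustrative only, since the lab's Grafana deployment may not mount a provisioning directory:

----
apiVersion: 1
datasources:
- name: prometheus
  type: prometheus
  access: proxy
  url: http://prometheus-amq-streams.apps-.generic.opentlc.com
----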
74 |
75 | Import the dashboard definitions.
76 | Go to the + icon on the top left and choose "Import".
77 | Copy the JSON from this location: `https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/0.11.1/metrics/examples/grafana/strimzi-kafka.json`.
78 | Pick prometheus as the datasource and finish.
79 | You should see the dashboard with data like in the screenshots.
80 |
81 | Similarly, to create the ZooKeeper dashboard, import the corresponding definition.
82 | Go to the + on the top left and choose "import".
83 | Copy the JSON from this location `https://raw.githubusercontent.com/strimzi/strimzi-kafka-operator/0.11.1/metrics/examples/grafana/strimzi-zookeeper.json`
84 | Pick prometheus as the datasource and finish.
85 | You should see the dashboard with data like in the screenshots.
86 |
--------------------------------------------------------------------------------
/labs/mirror-maker-single-namespace.adoc:
--------------------------------------------------------------------------------
1 | == MirrorMaker
2 |
3 | This lab walks through setting up MirrorMaker for replicating messages between different clusters.
4 |
5 | === What does MirrorMaker do?
6 |
7 | Often, applications need to communicate with each other across Kafka clusters.
8 | For example, data might be ingested in Kafka in a data center and consumed in another data center, for reasons such as locality.
9 | In this lab we will show how data can be replicated between Kafka clusters using MirrorMaker.
10 |
11 | First of all, if the `timer-producer` and `log-consumer` applications are still running, let's stop them.
12 |
13 | ----
14 | oc delete deployment timer-producer
15 | oc delete deployment log-consumer
16 | ----
17 |
18 | Check on the OpenShift Web console that the related pods are not running anymore.
19 |
20 | === Setting up the source and target clusters
21 |
22 | We will use the cluster previously created in this workshop as the source, but without authentication enabled.
23 | To do that, let's update the already running `production-ready` cluster, removing the authentication.
24 |
25 | ----
26 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready.yaml
27 | ----
28 |
29 | We will use this cluster as the source of the mirroring, with the `timer-producer` application sending messages to the `lines` topic.
30 |
31 | Let's deploy another cluster, `production-ready-target`, as the target of the mirroring, from which the `log-consumer` application will read messages.
32 |
33 | ----
34 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready-target.yaml
35 | ----
36 |
37 | Wait for the new cluster to be deployed.
38 |
39 | Now, because the `timer-producer` application will write on the already existing `lines` topic on `production-ready` cluster, let's create a corresponding `lines` topic on the `production-ready-target` cluster as destination of the mirroring, from where the `log-consumer` application will read messages.
40 |
41 | ----
42 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/topics/lines-10-target.yaml
43 | ----
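The linked file defines a `KafkaTopic` resource labelled for the target cluster; the Topic Operator watches such resources and creates the matching Kafka topic. Roughly, its shape is the following (a sketch; the partition and replica counts are assumptions based on the file name and cluster size):

----
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaTopic
metadata:
  name: lines
  labels:
    # ties the topic to the Topic Operator of the target cluster
    strimzi.io/cluster: production-ready-target
spec:
  partitions: 10
  replicas: 3
----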
44 |
45 | Now let's deploy MirrorMaker.
46 |
47 | ----
48 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/mirror-maker-single-namespace.yaml
49 | ----
50 |
51 | The notions of producer and consumer are from MirrorMaker's perspective.
52 | MirrorMaker's consumer reads messages from the source cluster, and its producer publishes them to the target cluster.
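The MirrorMaker descriptor applied above is, in rough shape, a `KafkaMirrorMaker` resource whose consumer points at the source cluster and whose producer points at the target cluster. A sketch; the replica count, group id and whitelist here are illustrative, so check the actual file for the exact values:

----
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker
metadata:
  name: mirror-maker
spec:
  replicas: 1
  consumer:
    # reads from the source cluster
    bootstrapServers: production-ready-kafka-bootstrap:9092
    groupId: mirror-maker
  producer:
    # writes to the target cluster
    bootstrapServers: production-ready-target-kafka-bootstrap:9092
  whitelist: "lines"
----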
53 |
54 | Now let's deploy the `log-consumer` application reading from the target cluster.
55 |
56 | ----
57 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/log-consumer-target.yaml
58 | ----
59 |
60 | And finally the `timer-producer` application writing to the source cluster.
61 |
62 | ----
63 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/timer-producer.yaml
64 | ----
65 |
66 | Looking at the logs of the related pods should show the expected results: data flowing between the two clusters.
67 |
--------------------------------------------------------------------------------
/labs/mirror-maker.adoc:
--------------------------------------------------------------------------------
1 | == MirrorMaker
2 |
3 | This lab walks through setting up MirrorMaker for replicating messages between different clusters.
4 |
5 | === What does MirrorMaker do?
6 |
7 | Often, applications need to communicate with each other across Kafka clusters.
8 | For example, data might be ingested in Kafka in a data center and consumed in another data center, for reasons such as locality.
9 | In this lab we will show how data can be replicated between Kafka clusters using MirrorMaker.
10 |
11 | First, we will change the `log-consumer` app.
12 |
13 | Let's log back as `admin`.
14 |
15 | ----
16 | oc login -u admin
17 | ----
18 |
19 | ----
20 | oc project amq-streams
21 | ----
22 |
23 | ----
24 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/timer-producer-team-1.yaml
25 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/log-consumer-team-2.yaml
26 | ----
27 |
28 | Now the `log-consumer` application consumes data from cluster `team-2`, while `timer-producer` sends data to cluster `team-1`.
29 | If we look at `log-consumer`'s output, we will see that no data is received.
30 |
31 | We can confirm that pointing the application to the cluster `team-1` will yield data.

32 | ----
33 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/log-consumer-team-1.yaml
34 | ----
35 |
36 | === Setting up the source and target clusters
37 |
38 | We will use the clusters previously created in this workshop in the `team-1` and `team-2` namespaces as source and target clusters.
39 |
40 | Make sure that you're still logged in as `admin`.
41 |
42 | ----
43 | oc login -u admin
44 | oc project amq-streams
45 | ----
46 |
47 | Now let's deploy MirrorMaker.
48 |
49 | ----
50 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/mirror-maker.yaml
51 | ----
52 |
53 | The notions of producer and consumer are from MirrorMaker's perspective.
54 | Messages will be read by the consumer (in the MirrorMaker config) from the source cluster and published by the producer (in the MirrorMaker config) to the target cluster.
55 |
56 |
57 | Go to the OpenShift web console.
58 | Select the `team-2` project.
59 | Navigate to *Applications*->*Pods* and pick `production-ready-kafka-0`.
60 | Open a terminal in the pod and run the following command.
61 |
62 | ----
63 | ./bin/kafka-topics.sh --list --zookeeper localhost:2181
64 | ----
65 |
66 | This will confirm that the replicated topic `lines` was created automatically.
67 |
68 | Now let's deploy `log-consumer` against the cluster in `team-2`:
69 |
70 | ----
71 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/log-consumer-team-2.yaml
72 | ----
73 |
74 | Looking at the pod's logs again should show the expected results: data flowing between the two clusters.
75 |
--------------------------------------------------------------------------------
/labs/production-ready-topologies.adoc:
--------------------------------------------------------------------------------
1 | == Production-ready topologies: sizing and persistence
2 |
3 | In the previous module, we set up a simple Apache Kafka cluster.
4 | It is suitable for development and experimentation, but not for a production setting.
5 | In this module you will learn how to use AMQ Streams for configuring a Kafka cluster for production usage, in particular sizing and persistence settings.
6 |
7 | === Some words on size and persistence
8 |
9 | Configuring your Kafka cluster depends on a number of factors, in particular on your development and production requirements for throughput and scalability, but there are a few principles to observe:
10 |
11 | * Making sure that you deploy at least 3 Kafka brokers for scaling and HA
12 | * Making sure that topics are replicated on at least two nodes
13 | * Making sure that your ZooKeeper ensemble has at least 3 nodes
14 | * Making sure that the data of your Kafka cluster is persistent
15 |
16 | Our ephemeral cluster created in the previous module meets none of these criteria: it runs a single Kafka broker and a single ZooKeeper node, and its storage is ephemeral.
17 | Plus, we really had no control over sizing - we just used a default!
18 |
19 | === Configuring the size of your Kafka cluster deployments
20 |
21 | AMQ Streams treats an Apache Kafka cluster deployment as a custom resource and controls the configuration of an Apache Kafka cluster through a Custom Resource Definition (CRD).
22 | Let's take a quick look at the resource descriptor for the cluster we just deployed:
23 |
24 | ----
25 | apiVersion: kafka.strimzi.io/v1alpha1
26 | kind: Kafka
27 | metadata:
28 |   name: simple-cluster
29 | spec:
30 |   kafka:
31 |     replicas: 1
32 |     listeners:
33 |       plain: {}
34 |       tls: {}
35 |     config:
36 |       offsets.topic.replication.factor: 1
37 |       transaction.state.log.replication.factor: 1
38 |       transaction.state.log.min.isr: 1
39 |     storage:
40 |       type: ephemeral
41 |   zookeeper:
42 |     replicas: 1
43 |     storage:
44 |       type: ephemeral
45 |   entityOperator:
46 |     topicOperator: {}
47 |     userOperator: {}
48 | ----
49 |
50 | Let's take a look at a few of the cluster properties.
51 | The cluster has:
52 |
53 | * a name: `metadata.name: simple-cluster`
54 | * a number of Kafka replicas: `spec.kafka.replicas: 1`
55 | * storage configuration for the Kafka cluster: `spec.kafka.storage.type: ephemeral`
56 | * configurations for listeners and configuration parameters for the brokers (e.g. replication settings)
57 | * a number of ZooKeeper replicas: `spec.zookeeper.replicas: 1`
58 | * storage configuration for the ZooKeeper cluster: `spec.zookeeper.storage.type: ephemeral`
59 |
60 | === Configuring a cluster of a given size and persistence
61 |
62 | While a single-broker ephemeral cluster is good enough for development and experimentation, it is definitely not a good option for production and staging workloads.
63 | A healthy cluster must have more than one broker for HA and throughput, and must persist its data on a persistent filesystem.
64 |
65 | First let's check if the previously deployed cluster is still there.
66 |
67 | ----
68 | oc get kafka
69 | ----
70 |
71 | You might get an empty result, but in case you don't, just remove the existing cluster.
72 |
73 | ----
74 | oc delete kafka simple-cluster
75 | ----
76 |
77 | Now let's create a new resource.
78 | The definition of the cluster is below:
79 |
80 | ----
81 | apiVersion: kafka.strimzi.io/v1alpha1
82 | kind: Kafka
83 | metadata:
84 |   name: production-ready
85 | spec:
86 |   kafka:
87 |     replicas: 3
88 |     listeners:
89 |       plain: {}
90 |       tls: {}
91 |     config:
92 |       offsets.topic.replication.factor: 3
93 |       transaction.state.log.replication.factor: 3
94 |       transaction.state.log.min.isr: 2
95 |     storage:
96 |       type: persistent-claim
97 |       size: 3Gi
98 |       deleteClaim: false
99 |   zookeeper:
100 |     replicas: 3
101 |     storage:
102 |       type: persistent-claim
103 |       size: 1Gi
104 |       deleteClaim: false
105 |   entityOperator:
106 |     topicOperator: {}
107 |     userOperator: {}
108 | ----
109 |
110 | Let's deploy this new resource.
111 |
112 | ----
113 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready.yaml
114 | ----
115 |
116 | The cluster `production-ready` has the minimal settings for a highly available, persistent cluster.
117 |
118 | * 3 Kafka broker nodes - this is the minimum recommended number for a production deployment
119 | * 3 ZooKeeper nodes - this is the minimum recommended number for a production deployment
120 | * `persistent-claim` storage that ensures that persistent volumes are allocated to Kafka and ZooKeeper instances
121 |
122 | Note that while the minimum is 3 for both, the number of Kafka brokers need not match the number of ZooKeeper nodes.
123 | In larger deployments, the ZooKeeper ensemble is typically smaller than the set of Kafka brokers.
124 | Other settings control the replication factors for offsets and transaction logs.
125 | A minimum of two is recommended.
126 |
127 | ==== Scaling up the cluster
128 |
129 | Let us scale up the cluster.
130 | A corresponding resource is shown below (note that the only property that changes is `spec.kafka.replicas`).
131 |
132 | ----
133 | apiVersion: kafka.strimzi.io/v1alpha1
134 | kind: Kafka
135 | metadata:
136 |   name: production-ready
137 | spec:
138 |   kafka:
139 |     replicas: 5
140 |     listeners:
141 |       plain: {}
142 |       tls: {}
143 |     config:
144 |       offsets.topic.replication.factor: 3
145 |       transaction.state.log.replication.factor: 3
146 |       transaction.state.log.min.isr: 2
147 |     storage:
148 |       type: persistent-claim
149 |       size: 3Gi
150 |       deleteClaim: false
151 |   zookeeper:
152 |     replicas: 3
153 |     storage:
154 |       type: persistent-claim
155 |       size: 1Gi
156 |       deleteClaim: false
157 |   entityOperator:
158 |     topicOperator: {}
159 |     userOperator: {}
160 | ----
161 |
162 | Let's apply this new configuration:
164 |
165 | ----
166 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready-5-nodes.yaml
167 | ----
168 |
169 | Notice the number of pods of the Kafka cluster increasing to 5 and the corresponding persistent claims.
170 | Now let's scale down the cluster again.
171 |
172 | ----
173 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready.yaml
174 | ----
175 |
176 | Notice the number of pods of the Kafka cluster decreasing back to 3.
177 | The persistent claims for nodes 3 and 4 are still active.
178 | What does this mean?
179 | Let's scale up the cluster again.
180 |
181 | ----
182 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready-5-nodes.yaml
183 | ----
184 |
185 | Notice the number of pods increasing back to 5 and the corresponding persistent volume claims being reallocated to the existing nodes.
186 | This means that the newly started instances will resume from where the previous instances 3 and 4 left off.
187 |
188 | Three broker nodes will be sufficient for our lab, so we can scale things down again:
189 |
190 | ----
191 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready.yaml
192 | ----
193 |
--------------------------------------------------------------------------------
/labs/security.adoc:
--------------------------------------------------------------------------------
1 | == Security for clusters and topics
2 |
3 | In this module we will show you how to secure a cluster and how to create and manage users.
4 |
5 | === Securing listeners
6 |
7 | The first step for securing a Kafka cluster is securing its listeners.
8 | You can add security options to each of the configured listeners.
9 | For example, let us change the cluster definition for the plain listener.
10 |
11 | ----
12 | apiVersion: kafka.strimzi.io/v1alpha1
13 | kind: Kafka
14 | metadata:
15 |   name: production-ready
16 | spec:
17 |   kafka:
18 |     replicas: 3
19 |     listeners:
20 |       plain:
21 |         authentication:
22 |           type: scram-sha-512
23 |       tls: {}
24 |     config:
25 |       offsets.topic.replication.factor: 3
26 |       transaction.state.log.replication.factor: 3
27 |       transaction.state.log.min.isr: 2
28 |     storage:
29 |       type: persistent-claim
30 |       size: 20Gi
31 |       deleteClaim: false
32 |   zookeeper:
33 |     replicas: 3
34 |     storage:
35 |       type: persistent-claim
36 |       size: 1Gi
37 |       deleteClaim: false
38 |   entityOperator:
39 |     topicOperator: {}
40 |     userOperator: {}
42 | ----
43 |
44 | Let's deploy the new configuration:
45 |
46 | ----
47 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready-secured.yaml
48 | ----
49 |
50 | Watch for the changes in the stateful set.
51 | Once all pods have been restarted, you can proceed.
52 | The `plain` listener is now configured to use the `SCRAM-SHA-512` challenge mechanism for connecting clients.
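Clients connecting through this listener now have to present SCRAM credentials. In generic Kafka client properties form, the required settings look roughly like this (a sketch; the username and password come from a `KafkaUser` resource and its generated secret):

----
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="secure-topic-reader" \
  password="<password-from-secret>";
----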
53 |
54 | === Creating users and ACLs
55 |
56 | Now that we have configured the broker to be secured, we need to create users so that our clients can connect.
57 | Users are managed through `KafkaUser` resources, which also manage the user authorization.
58 | Let's create our first user.
59 |
60 | ----
61 | apiVersion: kafka.strimzi.io/v1alpha1
62 | kind: KafkaUser
63 | metadata:
64 |   name: secure-topic-reader
65 |   labels:
66 |     strimzi.io/cluster: production-ready
67 | spec:
68 |   authentication:
69 |     type: scram-sha-512
70 |   authorization:
71 |     type: simple
72 |     acls:
73 |     # Example consumer ACLs for topic lines using a consumer group
74 |     - resource:
75 |         type: topic
76 |         name: lines
77 |         patternType: literal
78 |       operation: Read
79 |       host: "*"
80 |     - resource:
81 |         type: topic
82 |         name: lines
83 |         patternType: literal
84 |       operation: Describe
85 |       host: "*"
86 |     - resource:
87 |         type: group
88 |         name: secure-group
89 |         patternType: literal
90 |       operation: Read
91 |       host: "*"
92 | ----
93 |
94 | Let's apply this new configuration.
95 |
96 | ----
97 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/users/secure-topic-reader.yaml
98 | ----
99 |
100 | The newly created user can read the metadata of topic `lines` and consume (read) from it with the consumer group `secure-group`.
101 |
102 | But now we need a user that can produce data to `lines`!
103 | Let's create a new resource:
104 |
105 | ----
106 | apiVersion: kafka.strimzi.io/v1alpha1
107 | kind: KafkaUser
108 | metadata:
109 | name: secure-topic-writer
110 | labels:
111 | strimzi.io/cluster: production-ready
112 | spec:
113 | authentication:
114 | type: scram-sha-512
115 | authorization:
116 | type: simple
117 | acls:
118 | # Example Producer Acls for topic lines
119 | - resource:
120 | type: topic
121 | name: lines
122 | patternType: literal
123 | operation: Write
124 | host: "*"
125 | ----
126 |
127 | And let's apply this new configuration:

128 | ----
129 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/users/secure-topic-writer.yaml
130 | ----
131 |
132 | Go to `Secrets` and observe that new secrets named `secure-topic-reader` and `secure-topic-writer` have been created.
133 | Both secrets have a field named `password`.
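
The passwords are stored base64-encoded, as is usual for OpenShift secrets. On a live cluster you would fetch one with `oc get secret secure-topic-writer -o jsonpath='{.data.password}'`; here is a self-contained sketch of the decoding step, using a made-up sample value:

```shell
# Hypothetical sample standing in for the real secret data
encoded="c2VjcmV0UGFzcw=="
# On a live cluster the value would come from:
#   oc get secret secure-topic-writer -o jsonpath='{.data.password}'
echo "$encoded" | base64 -d
# prints: secretPass
```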
134 |
135 | Now let's redeploy our applications on the OpenShift cluster.
136 |
137 | ----
138 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/timer-producer.yaml
139 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/log-consumer.yaml
140 | ----
141 |
142 | Looking at the logs, we see a lot of errors - the clients cannot connect anymore.
143 |
144 | We need to reconfigure the running apps:

145 | ----
146 | apiVersion: extensions/v1beta1
147 | kind: Deployment
148 | metadata:
149 | name: timer-producer
150 | labels:
151 | app: kafka-workshop
152 | spec:
153 | replicas: 1
154 | template:
155 | metadata:
156 | labels:
157 | app: kafka-workshop
158 | name: timer-producer
159 | spec:
160 | containers:
161 | - name: timer-producer
162 | image: docker.io/mbogoevici/timer-producer:latest
163 | env:
164 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
165 | value: "production-ready-kafka-bootstrap.amq-streams.svc:9092"
166 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SASL_JAAS_CONFIG
167 | value: org.apache.kafka.common.security.scram.ScramLoginModule required username='${KAFKA_USER}' password='${KAFKA_PASSWORD}';
168 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SASL_MECHANISM
169 | value: SCRAM-SHA-512
170 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SECURITY_PROTOCOL
171 | value: SASL_PLAINTEXT
172 | - name: KAFKA_USER
173 | value: secure-topic-writer
174 | - name: KAFKA_PASSWORD
175 | valueFrom:
176 | secretKeyRef:
177 | key: password
178 | name: secure-topic-writer
179 | ----
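
Note how `sasl.jaas.config` references `${KAFKA_USER}` and `${KAFKA_PASSWORD}`, which are resolved from the environment, including the secret-backed password. The effective Kafka client security configuration then amounts to something like the following (a sketch; the password comes from the `secure-topic-writer` secret):

```properties
# Effective client security settings after property substitution (sketch)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username='secure-topic-writer' password='<value of the secret>';
```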
180 |
181 | Now let's deploy this new configuration.
182 |
183 | ----
184 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/timer-producer-secured.yaml
185 | ----
186 |
187 | We need to secure the `log-consumer` application as well:
188 |
189 | ----
190 | apiVersion: extensions/v1beta1
191 | kind: Deployment
192 | metadata:
193 | name: log-consumer
194 | labels:
195 | app: kafka-workshop
196 | spec:
197 | replicas: 1
198 | template:
199 | metadata:
200 | labels:
201 | app: kafka-workshop
202 | name: log-consumer
203 | spec:
204 | containers:
205 | - name: log-consumer
206 | image: docker.io/mbogoevici/log-consumer:latest
207 | env:
208 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
209 | value: "production-ready-kafka-bootstrap.amq-streams.svc:9092"
210 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_GROUP_ID
211 | value: secure-group
212 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SASL_JAAS_CONFIG
213 | value: org.apache.kafka.common.security.scram.ScramLoginModule required username='${KAFKA_USER}' password='${KAFKA_PASSWORD}';
214 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SASL_MECHANISM
215 | value: SCRAM-SHA-512
216 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_SECURITY_PROTOCOL
217 | value: SASL_PLAINTEXT
218 | - name: KAFKA_USER
219 | value: secure-topic-reader
220 | - name: KAFKA_PASSWORD
221 | valueFrom:
222 | secretKeyRef:
223 | key: password
224 | name: secure-topic-reader
225 | ----
226 |
227 | Let's apply this new configuration:
228 |
229 | ----
230 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/log-consumer-secured.yaml
231 | ----
232 |
233 | Inspect the log of `log-consumer` again.
234 | You should see the messages being exchanged.
235 |
--------------------------------------------------------------------------------
/labs/topic-management.adoc:
--------------------------------------------------------------------------------
1 | == Topic Management
2 |
3 | In this module we will focus on creating and managing topics.
4 | In AMQ Streams, topics are managed through a `KafkaTopic` resource.
5 |
6 | === Creating a topic
7 |
8 | Let's create a topic.
9 | The structure of a `KafkaTopic` custom resource is outlined below.
10 |
11 | ----
12 | apiVersion: kafka.strimzi.io/v1alpha1
13 | kind: KafkaTopic
14 | metadata:
15 | name: lines
16 | labels:
17 | strimzi.io/cluster: production-ready
18 | spec:
19 | partitions: 2
20 | replicas: 2
21 | config:
22 | retention.ms: 7200000
23 | segment.bytes: 1073741824
24 | ----
25 |
26 | Notice a few important attributes:
27 |
28 | * `metadata.name` which is the name of the topic
29 | * `metadata.labels[strimzi.io/cluster]` which is the target cluster for the topic (remember that you could have more than one cluster in the same namespace)
30 | * `spec.partitions` which is the partition count of the topic
31 | * `spec.replicas` which is the number of replicas per partition
32 | * `spec.config` which contains miscellaneous configuration options, such as retention time and segment size
33 |
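The `config` values are plain Kafka topic configuration options; `retention.ms` is in milliseconds and `segment.bytes` in bytes, so the values above work out to 2 hours and 1 GiB:

```shell
# retention.ms: 7200000 ms -> hours
echo $(( 7200000 / 1000 / 60 / 60 ))            # prints: 2
# segment.bytes: 1073741824 bytes -> GiB
echo $(( 1073741824 / 1024 / 1024 / 1024 ))     # prints: 1
```
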
34 | Now let's deploy the `KafkaTopic` into our current project:
35 |
36 | ----
37 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/topics/lines.yaml
38 | ----
39 |
40 | Wait a few seconds, then check that the topic has been created by logging into one of the Kafka pods.
41 | You can use the command below, or use the Terminal tab in the web console.
42 |
43 | ----
44 | oc rsh production-ready-kafka-0
45 | ----
46 |
47 | Let's get the topic information.
48 |
49 | ----
50 | bin/kafka-topics.sh --zookeeper localhost:2181 --topic lines --describe
51 | ----
52 |
53 | The expected result should look like:
54 |
55 | ----
56 | Topic:lines PartitionCount:2 ReplicationFactor:2 Configs:segment.bytes=1073741824,retention.ms=7200000
57 | Topic: lines Partition: 0 Leader: 2 Replicas: 3,2 Isr: 2
58 | Topic: lines Partition: 1 Leader: 0 Replicas: 0,3 Isr: 0,3
59 | ----
60 |
61 | Exit the container shell.
62 |
63 | Now let's increase the number of partitions.
64 | First, we need to amend the resource.
65 |
66 | ----
67 | apiVersion: kafka.strimzi.io/v1alpha1
68 | kind: KafkaTopic
69 | metadata:
70 | name: lines
71 | labels:
72 | strimzi.io/cluster: production-ready
73 | spec:
74 | partitions: 10
75 | replicas: 2
76 | config:
77 | retention.ms: 14400000
78 | segment.bytes: 1073741824
79 | ----
80 |
81 | Next, we will apply the new configuration:

82 | ----
83 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/topics/lines-10.yaml
84 | ----
85 |
86 | Let's get the topic information again.
87 | The topic now has 10 partitions; note also the increased retention time.
88 |
89 | ----
90 | oc rsh production-ready-kafka-0
91 |
92 | bin/kafka-topics.sh --zookeeper localhost:2181 --topic lines --describe
93 | OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
94 | Topic:lines PartitionCount:10 ReplicationFactor:2 Configs:segment.bytes=1073741824,retention.ms=14400000
95 | Topic: lines Partition: 0 Leader: 2 Replicas: 3,2 Isr: 2
96 | Topic: lines Partition: 1 Leader: 0 Replicas: 0,3 Isr: 0
97 | Topic: lines Partition: 2 Leader: 2 Replicas: 2,0 Isr: 2,0
98 | Topic: lines Partition: 3 Leader: 0 Replicas: 0,2 Isr: 0,2
99 | Topic: lines Partition: 4 Leader: 1 Replicas: 1,0 Isr: 1,0
100 | Topic: lines Partition: 5 Leader: 2 Replicas: 2,1 Isr: 2,1
101 | Topic: lines Partition: 6 Leader: 0 Replicas: 0,1 Isr: 0,1
102 | Topic: lines Partition: 7 Leader: 1 Replicas: 1,2 Isr: 1,2
103 | Topic: lines Partition: 8 Leader: 2 Replicas: 2,0 Isr: 2,0
104 | Topic: lines Partition: 9 Leader: 0 Replicas: 0,2 Isr: 0,2
105 | ----
106 |
107 | We can now exit the container.
108 |
109 | ----
110 | exit
111 | ----
112 |
113 | Now that the topic has been created, we can start running some applications.
114 |
--------------------------------------------------------------------------------
/labs/understanding-the-application-ecosystem.adoc:
--------------------------------------------------------------------------------
1 | == Understanding the application ecosystem
2 |
3 | For illustrating the different features of AMQ Streams, we will use a consumer application and a producer application based on the Kafka consumer and producer API.
4 | The source code of the applications is available in GitHub.
5 | In this example we use containerized versions of the applications and we will monitor the impact.
6 |
7 | Let's connect the applications to the production cluster we just created.
8 |
9 | First, we deploy the producer that will periodically emit messages on a topic named `lines`.
10 | Note the `CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS` environment variable that sets the bootstrap broker(s) for the application.
11 | The descriptor for the application is below.
12 |
13 | ----
14 | apiVersion: extensions/v1beta1
15 | kind: Deployment
16 | metadata:
17 | name: timer-producer
18 | labels:
19 | app: kafka-workshop
20 | spec:
21 | replicas: 1
22 | template:
23 | metadata:
24 | labels:
25 | app: kafka-workshop
26 | name: timer-producer
27 | spec:
28 | containers:
29 | - name: timer-producer
30 | image: docker.io/mbogoevici/timer-producer:latest
31 | env:
32 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
33 | value: "production-ready-kafka-bootstrap.amq-streams.svc:9092"
34 | ----
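
The `CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS` variable works because Spring Boot's relaxed binding maps environment variables to configuration properties; here the mapping amounts to lowercasing and turning underscores into dots, which we can illustrate with `tr`:

```shell
# Derive the Camel property name that the env var above binds to
echo "CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS" | tr 'A-Z_' 'a-z.'
# prints: camel.component.kafka.configuration.brokers
```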
35 |
36 | Next, we deploy the application.
37 |
38 | ----
39 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/timer-producer.yaml
40 | ----
41 |
42 |
43 |
44 | We also have a simple application that consumes messages.
45 |
46 | ----
47 | apiVersion: extensions/v1beta1
48 | kind: Deployment
49 | metadata:
50 | name: log-consumer
51 | labels:
52 | app: kafka-workshop
53 | spec:
54 | replicas: 1
55 | template:
56 | metadata:
57 | labels:
58 | app: kafka-workshop
59 | name: log-consumer
60 | spec:
61 | containers:
62 | - name: log-consumer
63 | image: docker.io/mbogoevici/log-consumer:latest
64 | env:
65 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_BROKERS
66 | value: "production-ready-kafka-bootstrap.amq-streams.svc:9092"
67 | - name: CAMEL_COMPONENT_KAFKA_CONFIGURATION_GROUP_ID
68 | value: test-group
69 | ----
70 |
71 | Now, let's deploy the message consumer.
72 |
73 | ----
74 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/log-consumer.yaml
75 | ----
76 |
77 | Make sure all the applications are started; then we will validate that the applications communicate with each other.
78 | First, let's find the pod name for the consumer.
79 |
80 | ----
81 | oc get pods | grep log-consumer
82 | ----
83 |
84 | Wait until the container is running.
85 | The result should be something along the lines of (notice that the pod id will be different - copy it for the next step):
86 |
87 | ----
88 | log-consumer-5d4586bdcd-gm9xp 1/1 Running 0 41s
89 | ----
90 |
91 | Now let's tail the logs from the consumer (use the pod id retrieved in the previous step).
92 |
93 | ----
94 | oc logs -f pod/log-consumer- -c log-consumer --tail=-1
95 | ----
96 |
97 | We expect the output to look as follows:
98 |
99 | ----
100 | 2019-02-05 16:29:05.833 INFO 1 --- [Consumer[lines]] route1 : Message 209 at Tue Feb 05 16:29:05 UTC 2019
101 | 2019-02-05 16:29:10.835 INFO 1 --- [Consumer[lines]] route1 : Message 210 at Tue Feb 05 16:29:10 UTC 2019
102 | 2019-02-05 16:29:15.835 INFO 1 --- [Consumer[lines]] route1 : Message 211 at Tue Feb 05 16:29:15 UTC 2019
103 | 2019-02-05 16:29:20.838 INFO 1 --- [Consumer[lines]] route1 : Message 212 at Tue Feb 05 16:29:20 UTC 2019
104 | 2019-02-05 16:29:25.833 INFO 1 --- [Consumer[lines]] route1 : Message 213 at Tue Feb 05 16:29:25 UTC 2019
105 | ----
106 |
107 | Messages should continue to arrive every five seconds, and this indicates that the two applications communicate with each other.
108 |
109 | Now let's delete the two applications.
110 |
111 | ----
112 | oc delete deployment log-consumer
113 | oc delete deployment timer-producer
114 | ----
115 |
--------------------------------------------------------------------------------
/labs/watching-multiple-namespaces-short-1.1.adoc:
--------------------------------------------------------------------------------
1 | == Separating namespace: watching multiple namespaces
2 |
3 | In the examples we've seen so far, the cluster operator, as well as the Kafka cluster, topic, and user custom resources, were deployed in the same project/namespace.
4 | AMQ Streams allows the cluster operator to monitor a set of separate namespaces, which could be useful for teams that want to manage their own individual clusters.
5 | Each namespace is independent of the other, and resources created in each of them will result in the creation of clusters, topics and users in that specific namespace.
6 |
7 | === Creating the namespaces
8 |
9 | Let's set up two separate projects: `team-1` and `team-2`.
10 |
11 | Execute the following commands:
12 |
13 | ----
14 | oc new-project team-1
15 | ----
16 |
17 | and
18 |
19 | ----
20 | oc new-project team-2
21 | ----
22 |
23 | For this lab, we will still deploy the cluster operator in the `amq-streams` project.
24 | If you've already done so, we will reconfigure it.
25 | So let's revert to the original project `amq-streams`.
26 |
27 | ----
28 | oc project amq-streams
29 | ----
30 |
31 | === Configuring the cluster operator to watch multiple namespaces
32 |
33 | First, update the configuration files to reference the target namespace of the cluster operator.
34 | If you've already done so in the first lab, skip this step.
35 |
36 | ----
37 | sed -i 's/namespace: .*/namespace: amq-streams/' install/cluster-operator/*RoleBinding*.yaml
38 | ----
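
To see what this substitution does, here it is applied to a minimal, hypothetical RoleBinding snippet (the real files list the operator's service account as subject):

```shell
# Build a throwaway snippet resembling the subject section of a RoleBinding
cat > /tmp/rolebinding-snippet.yaml <<'EOF'
subjects:
  - kind: ServiceAccount
    name: strimzi-cluster-operator
    namespace: myproject
EOF
# Point the binding at the namespace where the operator runs
sed -i 's/namespace: .*/namespace: amq-streams/' /tmp/rolebinding-snippet.yaml
grep 'namespace:' /tmp/rolebinding-snippet.yaml
# the namespace line now reads: namespace: amq-streams
```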
39 |
40 | Next, alter the file `install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml` to point to the monitored projects.
41 | You need do so by editing the file and changing the value of `STRIMZI_NAMESPACE` from its original form:
42 |
43 | ----
44 | ...
45 | env:
46 | - name: STRIMZI_NAMESPACE
47 | valueFrom:
48 | fieldRef:
49 | fieldPath: metadata.namespace
50 | ...
51 | ----
52 |
53 | to:
54 |
55 | ----
56 | ...
57 | env:
58 | - name: STRIMZI_NAMESPACE
59 | value: amq-streams,team-1,team-2
60 | ...
61 | ----
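
If you prefer to script this edit rather than retype the file, a GNU `sed` sketch along these lines would work (shown here on a throwaway copy of the env entry; in the lab you would run it against `install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml`):

```shell
# Throwaway copy of the env entry as it appears in the original deployment file
cat > /tmp/050-snippet.yaml <<'EOF'
        env:
          - name: STRIMZI_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
EOF
# Replace STRIMZI_NAMESPACE and its valueFrom block (4 lines) with a literal list
sed -i '/- name: STRIMZI_NAMESPACE/,+3c\
          - name: STRIMZI_NAMESPACE\
            value: amq-streams,team-1,team-2' /tmp/050-snippet.yaml
cat /tmp/050-snippet.yaml
# the valueFrom block has been replaced by an explicit namespace list
```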
62 |
63 | In this lab, you can do so either by editing the file with `vim` or by running the following command:
64 |
65 | ----
66 | cat > install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml <<'EOF'
67 | apiVersion: extensions/v1beta1
68 | kind: Deployment
69 | metadata:
70 | name: strimzi-cluster-operator
71 | labels:
72 | app: strimzi
73 | strimzi.io/kind: cluster-operator
74 | spec:
75 | replicas: 1
76 | template:
77 | metadata:
78 | labels:
79 | name: strimzi-cluster-operator
80 | strimzi.io/kind: cluster-operator
81 | spec:
82 | serviceAccountName: strimzi-cluster-operator
83 | containers:
84 | - name: strimzi-cluster-operator
85 | image: registry.access.redhat.com/amq7/amq-streams-cluster-operator:1.1.0
86 | imagePullPolicy: IfNotPresent
87 | env:
88 | - name: STRIMZI_NAMESPACE
89 | value: amq-streams,team-1,team-2
90 | - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
91 | value: "120000"
92 | - name: STRIMZI_OPERATION_TIMEOUT_MS
93 | value: "300000"
94 | - name: STRIMZI_DEFAULT_ZOOKEEPER_IMAGE
95 | value: registry.access.redhat.com/amq7/amq-streams-zookeeper:1.1.0-kafka-2.1.1
96 | - name: STRIMZI_KAFKA_IMAGES
97 | value: |
98 | 2.0.0=registry.access.redhat.com/amq7/amq-streams-kafka:1.1.0-kafka-2.0.0
99 | 2.1.1=registry.access.redhat.com/amq7/amq-streams-kafka:1.1.0-kafka-2.1.1
100 | - name: STRIMZI_KAFKA_CONNECT_IMAGES
101 | value: |
102 | 2.0.0=registry.access.redhat.com/amq7/amq-streams-kafka-connect:1.1.0-kafka-2.0.0
103 | 2.1.1=registry.access.redhat.com/amq7/amq-streams-kafka-connect:1.1.0-kafka-2.1.1
104 | - name: STRIMZI_KAFKA_CONNECT_S2I_IMAGES
105 | value: |
106 | 2.0.0=registry.access.redhat.com/amq7/amq-streams-kafka-connect-s2i:1.1.0-kafka-2.0.0
107 | 2.1.1=registry.access.redhat.com/amq7/amq-streams-kafka-connect-s2i:1.1.0-kafka-2.1.1
108 | - name: STRIMZI_KAFKA_MIRROR_MAKER_IMAGES
109 | value: |
110 | 2.0.0=registry.access.redhat.com/amq7/amq-streams-kafka-mirror-maker:1.1.0-kafka-2.0.0
111 | 2.1.1=registry.access.redhat.com/amq7/amq-streams-kafka-mirror-maker:1.1.0-kafka-2.1.1
112 | - name: STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE
113 | value: registry.access.redhat.com/amq7/amq-streams-topic-operator:1.1.0
114 | - name: STRIMZI_DEFAULT_USER_OPERATOR_IMAGE
115 | value: registry.access.redhat.com/amq7/amq-streams-user-operator:1.1.0
116 | - name: STRIMZI_DEFAULT_KAFKA_INIT_IMAGE
117 | value: registry.access.redhat.com/amq7/amq-streams-kafka-init:1.1.0
118 | - name: STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE
119 | value: registry.access.redhat.com/amq7/amq-streams-zookeeper-stunnel:1.1.0
120 | - name: STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE
121 | value: registry.access.redhat.com/amq7/amq-streams-kafka-stunnel:1.1.0
122 | - name: STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE
123 | value: registry.access.redhat.com/amq7/amq-streams-entity-operator-stunnel:1.1.0
124 | - name: STRIMZI_LOG_LEVEL
125 | value: INFO
126 | livenessProbe:
127 | httpGet:
128 | path: /healthy
129 | port: 8080
130 | initialDelaySeconds: 10
131 | periodSeconds: 30
132 | readinessProbe:
133 | httpGet:
134 | path: /ready
135 | port: 8080
136 | initialDelaySeconds: 10
137 | periodSeconds: 30
138 | resources:
139 | limits:
140 | cpu: 1000m
141 | memory: 256Mi
142 | requests:
143 | cpu: 200m
144 | memory: 256Mi
145 | strategy:
146 | type: Recreate
147 | EOF
148 | ----
149 |
150 | Next, install `RoleBinding`s for each of the two monitored namespaces:
151 |
152 | ----
153 | oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n team-1
154 | oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n team-1
155 | oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n team-1
156 | ----
157 |
158 | and
159 |
160 | ----
161 | oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n team-2
162 | oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n team-2
163 | oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n team-2
164 | ----
165 |
166 | Finally, install (or re-install and reconfigure) the `cluster-operator`:
167 |
168 | ----
169 | oc apply -f install/cluster-operator
170 | ----
171 |
172 | Now we can deploy clusters, topics and users in each of these namespaces.
173 | Use the console to monitor the result.
174 |
175 | Let's deploy a new cluster in the project `team-1` first (note the namespace reference in the `oc` command).

176 | ----
177 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready.yaml -n team-1
178 | ----
179 |
180 | From the OpenShift console, navigate to the `team-1` project and notice the new cluster, as well as its services.
181 |
182 | Let's see that the cluster works.
183 |
184 | Reconfigure the `timer-producer` and `log-consumer` applications to use the new cluster.
185 |
186 | ----
187 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/log-consumer-team-1.yaml
188 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/timer-producer-team-1.yaml
189 | ----
190 |
191 | Now let's deploy a second cluster in the project `team-2` (note again the namespace reference in the `oc` command).
192 |
193 | ----
194 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready.yaml -n team-2
195 | ----
196 |
197 | You should see the new cluster being created by observing the `team-2` project in the OpenShift console.
198 |
--------------------------------------------------------------------------------
/labs/watching-multiple-namespaces.adoc:
--------------------------------------------------------------------------------
1 | == Separating namespace: watching multiple namespaces and delegating administration roles
2 |
3 | In the examples we've seen so far, the cluster operator, as well as the Kafka cluster, topic, and user custom resources, were deployed in the same project/namespace.
4 | AMQ Streams allows the cluster operator to monitor a set of separate namespaces, which could be useful for teams that want to manage their own individual clusters.
5 | Each namespace is independent of the other, and resources created in each of them will result in the creation of clusters, topics and users in that specific namespace.
6 |
7 | AMQ Streams also allows regular users to take AMQ Streams administration roles, removing the need for cluster administrator permissions for day to day operations.
8 |
9 | === Creating the namespaces
10 |
11 | Let's set up two separate projects: `team-1` and `team-2`.
12 |
13 | Execute the following commands:
14 |
15 | ----
16 | oc new-project team-1
17 | ----
18 |
19 | and
20 |
21 | ----
22 | oc new-project team-2
23 | ----
24 |
25 | For this lab, we will still deploy the cluster operator in the `amq-streams` project.
26 | If you've already done so, we will reconfigure it.
27 | So let's revert to the original project `amq-streams`.
28 |
29 | ----
30 | oc project amq-streams
31 | ----
32 |
33 | === Configuring the cluster operator to watch multiple namespaces
34 |
35 | First, update the configuration files to reference the target namespace of the cluster operator.
36 | If you've already done so in the first lab, skip this step.
37 |
38 | ----
39 | sed -i 's/namespace: .*/namespace: amq-streams/' install/cluster-operator/*RoleBinding*.yaml
40 | ----
41 |
42 | Next, alter the file `install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml` to point to the monitored projects.
43 |
44 | ----
45 | cat > install/cluster-operator/050-Deployment-strimzi-cluster-operator.yaml <<'EOF'
46 | apiVersion: extensions/v1beta1
47 | kind: Deployment
48 | metadata:
49 | name: strimzi-cluster-operator
50 | labels:
51 | app: strimzi
52 | spec:
53 | replicas: 1
54 | template:
55 | metadata:
56 | labels:
57 | name: strimzi-cluster-operator
58 | spec:
59 | serviceAccountName: strimzi-cluster-operator
60 | containers:
61 | - name: strimzi-cluster-operator
62 | image: registry.access.redhat.com/amqstreams-1/amqstreams10-clusteroperator-openshift:1.0.0
63 | imagePullPolicy: IfNotPresent
64 | env:
65 | - name: STRIMZI_NAMESPACE
66 | value: amq-streams,team-1,team-2
67 | - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
68 | value: "120000"
69 | - name: STRIMZI_OPERATION_TIMEOUT_MS
70 | value: "300000"
71 | - name: STRIMZI_DEFAULT_ZOOKEEPER_IMAGE
72 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-zookeeper-openshift:1.0.0
73 | - name: STRIMZI_DEFAULT_KAFKA_IMAGE
74 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-kafka-openshift:1.0.0
75 | - name: STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE
76 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-kafkaconnect-openshift:1.0.0
77 | - name: STRIMZI_DEFAULT_KAFKA_CONNECT_S2I_IMAGE
78 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-kafkaconnects2i-openshift:1.0.0
79 | - name: STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE
80 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-topicoperator-openshift:1.0.0
81 | - name: STRIMZI_DEFAULT_USER_OPERATOR_IMAGE
82 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-useroperator-openshift:1.0.0
83 | - name: STRIMZI_DEFAULT_KAFKA_INIT_IMAGE
84 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-kafkainit-openshift:1.0.0
85 | - name: STRIMZI_DEFAULT_TLS_SIDECAR_ZOOKEEPER_IMAGE
86 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-zookeeperstunnel-openshift:1.0.0
87 | - name: STRIMZI_DEFAULT_TLS_SIDECAR_KAFKA_IMAGE
88 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-kafkastunnel-openshift:1.0.0
89 | - name: STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE
90 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-entityoperatorstunnel-openshift:1.0.0
91 | - name: STRIMZI_DEFAULT_KAFKA_MIRRORMAKER_IMAGE
92 | value: registry.access.redhat.com/amqstreams-1/amqstreams10-kafkamirrormaker-openshift:1.0.0
93 | - name: STRIMZI_LOG_LEVEL
94 | value: INFO
95 | livenessProbe:
96 | httpGet:
97 | path: /healthy
98 | port: 8080
99 | initialDelaySeconds: 10
100 | periodSeconds: 30
101 | readinessProbe:
102 | httpGet:
103 | path: /ready
104 | port: 8080
105 | initialDelaySeconds: 10
106 | periodSeconds: 30
107 | resources:
108 | limits:
109 | cpu: 1000m
110 | memory: 256Mi
111 | requests:
112 | cpu: 200m
113 | memory: 256Mi
114 | strategy:
115 | type: Recreate
116 | EOF
117 | ----
118 |
119 | Next, install `RoleBinding`s for each of the two monitored namespaces:
120 |
121 | ----
122 | oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n team-1
123 | oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n team-1
124 | oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n team-1
125 | ----
126 |
127 | and
128 |
129 | ----
130 | oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n team-2
131 | oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n team-2
132 | oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n team-2
133 | ----
134 |
135 | Finally, install (or re-install and reconfigure) the `cluster-operator`:
136 |
137 | ----
138 | oc apply -f install/cluster-operator
139 | ----
140 |
141 | Now we can deploy clusters, topics and users in each of these namespaces.
142 | Use the console to monitor the result.
143 |
144 | Let's deploy a new cluster in the project `team-1` (note the namespace reference in the `oc` command):

145 | ----
146 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready.yaml -n team-1
147 | ----
148 |
149 | From the OpenShift console, navigate to the `team-1` project and notice the new cluster, as well as its services.
150 |
151 | Let's see that the cluster works.
152 |
153 | Reconfigure the `timer-producer` and `log-consumer` applications to use the new cluster.
154 |
155 | ----
156 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/log-consumer-team-1.yaml
157 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/applications/timer-producer-team-1.yaml
158 | ----
159 |
160 | Once the applications have restarted, navigate to the logs and you should see the messages flowing again.
161 | The applications deployed in the `amq-streams` namespace will interact with a Kafka cluster configured in the `team-1` namespace.
162 |
163 | === Strimzi Administrators
164 |
165 | So far, we have used a cluster administrator to set up and manage Kafka clusters and topics.
166 | AMQ Streams allows the assignment of administrative permissions to regular users for day-to-day operations, once the cluster operator has been installed.
167 |
168 | ==== Creating OpenShift users
169 |
170 | OpenShift allows different strategies for creating users.
171 | In this lab we will create simple users authenticated against the OpenShift configuration files.
172 |
173 | First, let's create the users:

174 | ----
175 | oc create user dev-team-1
176 | oc create user dev-team-2
177 | ----
178 |
179 | Now we need to assign identities.
180 | We need to log in directly into the master machine.
181 |
182 | ----
183 | ssh master00.example.com
184 | ----
185 |
186 | Change user to `root`:

187 | ----
188 | sudo -i
189 | ----
190 |
191 | Now let's update the password file for the newly created users.
192 | Use a password that you can remember for each user.
193 |
194 | ----
195 | htpasswd /etc/origin/master/htpasswd dev-team-1
196 | htpasswd /etc/origin/master/htpasswd dev-team-2
197 | ----
198 |
199 | Assign the two users to the previously created projects:
200 |
201 | ----
202 | oc adm policy add-role-to-user admin dev-team-1 -n team-1
203 | oc adm policy add-role-to-user admin dev-team-2 -n team-2
204 | ----
205 |
206 | Exit the `root` account and then the remote shell to the `master00` machine.
207 |
208 | Log in as one of the users:
209 |
210 | ----
211 | oc login -u dev-team-2 master00.example.com
212 | ----
213 |
214 | Change the cluster configuration:
215 |
216 | ----
217 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready.yaml
218 | ----
219 |
220 | You should see the operation failing with an error along the lines of `Error from server (Forbidden):`.
221 | Your user does not have permission to update the custom resources.
222 |
223 | To correct that, we will first create a `strimzi-admin` cluster role that we can assign to users.
224 | Log in as `admin` and apply the roles.
225 |
226 | ----
227 | oc login -u admin master00.example.com
228 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/users/strimzi-admin.yaml
229 | ----
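
The `strimzi-admin.yaml` file defines a `ClusterRole` granting access to the Strimzi custom resources; such a role typically looks like the following (a sketch, not the exact file contents):

```yaml
# Sketch of a strimzi-admin ClusterRole (the real file may differ in detail)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: strimzi-admin
rules:
  - apiGroups: ["kafka.strimzi.io"]
    resources: ["kafkas", "kafkatopics", "kafkausers", "kafkaconnects", "kafkamirrormakers"]
    verbs: ["get", "list", "watch", "create", "delete", "patch", "update"]
```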
230 |
231 | Assign the cluster role to the newly created users.
232 |
233 | ----
234 | oc adm policy add-cluster-role-to-user strimzi-admin dev-team-1 dev-team-2
235 | ----
236 |
237 | Now log in again and repeat the operation.
238 |
239 | ----
240 | oc login -u dev-team-2 master00.example.com
241 | oc apply -f https://raw.githubusercontent.com/RedHatWorkshops/workshop-amq-streams/master/configurations/clusters/production-ready.yaml
242 | ----
243 |
244 | You should see the new cluster being created.
245 |
--------------------------------------------------------------------------------
/rhsummit2019-full/README.adoc:
--------------------------------------------------------------------------------
1 | # Running Apache Kafka on OpenShift with AMQ streams
2 |
3 | This lab contains an expanded version of the "Running Apache Kafka on OpenShift with AMQ streams" workshop at the Red Hat Summit 2019.
4 |
5 | Before starting, let's get familiar with the link:./environment.adoc[lab environment].
6 |
7 | # Outline
8 |
9 |
10 | . link:../labs/0-to-60.adoc[AMQ Streams on OpenShift from 0 to 60]
11 |
12 | . link:../labs/production-ready-topologies.adoc[Production-ready topologies: sizing and persistence]
13 |
14 | . link:../labs/topic-management.adoc[Topic management]
15 |
16 | . link:../labs/understanding-the-application-ecosystem.adoc[Understanding the Kafka application ecosystem]
17 |
18 | . link:../labs/clients-within-outside-OCP.adoc[Clients: within OCP and outside OCP]
19 |
20 | . link:../labs/security.adoc[Security]
21 |
22 | . link:../labs/watching-multiple-namespaces-short-1.1.adoc[Watching multiple namespaces]
23 |
24 | . link:../labs/mirror-maker.adoc[Replicating Data with MirrorMaker]
25 |
26 | . link:../labs/kafka-connect.adoc[Running KafkaConnect applications]
27 |
28 | . link:../labs/management-monitoring.adoc[Management and monitoring]
29 |
--------------------------------------------------------------------------------
/rhsummit2019-full/environment.adoc:
--------------------------------------------------------------------------------
1 | == Environment
2 |
3 | === Lab environment
4 |
5 | The lab environment provides a virtualized setup for developing and running Kafka applications on OpenShift.
6 | It consists of:
7 |
8 | * an OpenShift cluster with one master node and three worker nodes;
9 | * a workstation.
10 |
11 | During the lab you will interact with the OpenShift cluster via the CLI from the workstation, and with the OpenShift web console from the browser on your workstation.
12 | Make note of the URLs that you will use during the lab.
13 |
14 | |===
15 | | Machine | URL | Access
16 | | Workstation | `workstation-<GUID>.rhpds.opentlc.com` | SSH (port 22)
17 | | Master (OpenShift console) | `master00-<GUID>.generic.opentlc.com` | HTTPS (port 8443)
18 | |===
19 |
20 | Every lab assumes that you have access to the workstation via SSH for CLI interaction and to the OpenShift web console.
21 |
22 | Before you start any lab, make sure you are logged into the workstation via SSH with the key that has been provided by the instructors.
23 |
24 |  ssh -i <keyfile> lab-user@workstation-<GUID>.rhpds.opentlc.com
25 |
26 | If you do not have a key file, or you are not able to login with the key, use the `ssh` password provided by the instructor.
27 |
28 | For OpenShift access, the lab provides an `admin` user.
29 | Use it for logging in via both the CLI and the web console.
30 | The password will be provided by the instructors.
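For example, a first login from the workstation CLI might look like the following. The host and port come from the table above; `<GUID>` is your lab GUID, and the password is the one provided by the instructors.

```shell
# Log in to the OpenShift master as the admin user.
oc login -u admin https://master00-<GUID>.generic.opentlc.com:8443

# Confirm which user you are logged in as.
oc whoami
```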
31 |
32 | [NOTE]
33 | .Remember your GUID
34 | At the start of the lab you have been provided with a GUID, which uniquely identifies your lab environment.
35 | In the following instructions, replace the `<GUID>` placeholder with your GUID.
36 | For example, if your GUID is `a2b3`, the URL `workstation-<GUID>.example.com` becomes `workstation-a2b3.example.com`.
37 |
38 | Now, let's start the lab.
39 |
--------------------------------------------------------------------------------
/rhsummit2019/README.adoc:
--------------------------------------------------------------------------------
1 | # Running Apache Kafka on OpenShift with AMQ streams
2 |
3 | These are the modules and related labs of the "Running Apache Kafka on OpenShift with AMQ streams" workshop at the Red Hat Summit 2019.
4 |
5 | Before starting, let's get familiar with the link:./environment.adoc[lab environment].
6 |
7 | The introductory slide deck is available link:https://speakerdeck.com/mbogoevici/running-apache-kafka-on-red-hat-openshift-with-amq-streams[here].
8 |
9 | # Outline
10 |
11 | ## Module 1
12 |
13 | . link:../labs/0-to-60.adoc[AMQ Streams on OpenShift from 0 to 60]
14 |
15 | . link:../labs/production-ready-topologies.adoc[Production-ready topologies: sizing and persistence]
16 |
17 | . link:../labs/topic-management.adoc[Topic management]
18 |
19 | ## Module 2
20 |
21 | . link:../labs/clients-within-outside-OCP.adoc[Clients: within OCP and outside OCP]
22 |
23 | . link:../labs/security.adoc[Security]
24 |
25 | ## Module 3
26 |
27 | . link:../labs/mirror-maker-single-namespace.adoc[Replicating Data with MirrorMaker]
28 |
29 | . link:../labs/management-monitoring.adoc[Management and monitoring]
30 |
--------------------------------------------------------------------------------
/rhsummit2019/environment.adoc:
--------------------------------------------------------------------------------
1 | == Environment
2 |
3 | === Lab environment
4 |
5 | The lab environment provides a virtualized setup for developing and running Kafka applications on OpenShift.
6 | It consists of:
7 |
8 | * an OpenShift cluster with one master node and three worker nodes;
9 | * a workstation.
10 |
11 | During the lab you will interact with the OpenShift cluster via the CLI from the workstation, and with the OpenShift web console from the browser on your workstation.
12 | Make note of the URLs that you will use during the lab.
13 |
14 | |===
15 | | Machine | URL | Access
16 | | Workstation | `workstation-<GUID>.rhpds.opentlc.com` | SSH (port 22)
17 | | Master (OpenShift console) | `master00-<GUID>.generic.opentlc.com` | HTTPS (port 8443)
18 | |===
19 |
20 | Every lab assumes that you have access to the workstation via SSH for CLI interaction and to the OpenShift web console.
21 |
22 | Before you start any lab, make sure you are logged into the workstation via SSH with the key that has been provided by the instructors.
23 |
24 |  ssh -i <keyfile> lab-user@workstation-<GUID>.rhpds.opentlc.com
25 |
26 | For OpenShift access, the lab provides an `admin` user.
27 | Use it for logging in via both the CLI and the web console.
28 | The password will be provided by the instructors.
29 |
30 | [NOTE]
31 | .Remember your GUID
32 | At the start of the lab you have been provided with a GUID, which uniquely identifies your lab environment.
33 | In the following instructions, replace the `<GUID>` placeholder with your GUID.
34 | For example, if your GUID is `a2b3`, the URL `workstation-<GUID>.example.com` becomes `workstation-a2b3.example.com`.
35 |
36 | Now, let's start the lab.
37 |
--------------------------------------------------------------------------------
/slides/README.adoc:
--------------------------------------------------------------------------------
1 | # What is this?
2 |
3 | This is the slides section of the AMQ Streams workshop.
4 |
--------------------------------------------------------------------------------