├── .gitattributes ├── .gitignore ├── ConsistentHashing └── RabbitMqSummit │ ├── net │ ├── RabbitConsumer.sln │ └── RabbitConsumer │ │ ├── ClientTask.cs │ │ ├── Program.cs │ │ └── RabbitConsumer.csproj │ └── python │ ├── client │ ├── command_args.py │ ├── consumer-dedup.py │ ├── consumer.py │ ├── fire-and-forget.py │ ├── orders_producer.py │ ├── output-consumer.py │ ├── send-sequence.py │ ├── send-state-updates-direct.py │ └── send-state-updates-hash-ex.py │ ├── cluster │ ├── blockade-files │ │ ├── blockade-3nodes.yml │ │ └── blockade-6nodes.yml │ ├── blockade.yml │ ├── cluster-entrypoint.sh │ ├── declare-hashing-infra.py │ ├── declare-queue.py │ ├── deploy-cluster.sh │ ├── enable-c-hash-ex.sh │ ├── get-node-ip.sh │ ├── kill-and-reset-node.sh │ ├── kill-node.sh │ ├── rabbitmq.config │ ├── restart-node.sh │ ├── start-node.sh │ └── stop-remove-all-running-containers.sh │ ├── data-locality-notes.txt │ ├── message-ordering-notes.txt │ └── requirements.txt ├── IntegrationTesting └── RabbitMQTestExamples │ ├── RabbitMQTestExamples.ConsoleApp │ ├── Consumer.cs │ ├── IMessageProcessor.cs │ ├── Program.cs │ └── RabbitMQTestExamples.ConsoleApp.csproj │ ├── RabbitMQTestExamples.IntegrationTests │ ├── FakeProcessor.cs │ ├── Helpers │ │ ├── ConnectionKiller.cs │ │ ├── QueueCreator.cs │ │ └── QueueDestroyer.cs │ ├── RabbitMQTestExamples.IntegrationTests.csproj │ ├── TestMessageReceipt.cs │ └── TestPublisher.cs │ └── RabbitMQTestExamples.sln ├── LICENSE ├── Publishing └── Net461 │ └── RabbitMqMessageTracking │ ├── RabbitMqMessageTracking.sln │ └── RabbitMqMessageTracking │ ├── App.config │ ├── BulkMessagePublisher.cs │ ├── IMessageState.cs │ ├── IMessageTracker.cs │ ├── MessageState.cs │ ├── MessageTracker.cs │ ├── Program.cs │ ├── Properties │ └── AssemblyInfo.cs │ ├── RabbitMqMessageTracking.csproj │ ├── SendStatus.cs │ ├── SingleMessagePublisher.cs │ └── packages.config └── README.md /.gitattributes: 
-------------------------------------------------------------------------------- 1 | ############################################################################### 2 | # Set default behavior to automatically normalize line endings. 3 | ############################################################################### 4 | * text=auto 5 | 6 | ############################################################################### 7 | # Set default behavior for command prompt diff. 8 | # 9 | # This is needed for earlier builds of msysgit that do not have it on by 10 | # default for csharp files. 11 | # Note: This is only used by the command line 12 | ############################################################################### 13 | #*.cs diff=csharp 14 | 15 | ############################################################################### 16 | # Set the merge driver for project and solution files 17 | # 18 | # Merging from the command prompt will add diff markers to the files if there 19 | # are conflicts (Merging from VS is not affected by the settings below, in VS 20 | # the diff markers are never inserted). Diff markers may cause the following 21 | # file extensions to fail to load in VS. An alternative would be to treat 22 | # these files as binary and thus will always conflict and require user 23 | # intervention with every merge. To do so, just uncomment the entries below 24 | ############################################################################### 25 | #*.sln merge=binary 26 | #*.csproj merge=binary 27 | #*.vbproj merge=binary 28 | #*.vcxproj merge=binary 29 | #*.vcproj merge=binary 30 | #*.dbproj merge=binary 31 | #*.fsproj merge=binary 32 | #*.lsproj merge=binary 33 | #*.wixproj merge=binary 34 | #*.modelproj merge=binary 35 | #*.sqlproj merge=binary 36 | #*.wwaproj merge=binary 37 | 38 | ############################################################################### 39 | # behavior for image files 40 | # 41 | # image files are treated as binary by default.
42 | ############################################################################### 43 | #*.jpg binary 44 | #*.png binary 45 | #*.gif binary 46 | 47 | ############################################################################### 48 | # diff behavior for common document formats 49 | # 50 | # Convert binary document formats to text before diffing them. This feature 51 | # is only available from the command line. Turn it on by uncommenting the 52 | # entries below. 53 | ############################################################################### 54 | #*.doc diff=astextplain 55 | #*.DOC diff=astextplain 56 | #*.docx diff=astextplain 57 | #*.DOCX diff=astextplain 58 | #*.dot diff=astextplain 59 | #*.DOT diff=astextplain 60 | #*.pdf diff=astextplain 61 | #*.PDF diff=astextplain 62 | #*.rtf diff=astextplain 63 | #*.RTF diff=astextplain 64 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | ## Ignore Visual Studio temporary files, build results, and 2 | ## files generated by popular Visual Studio add-ons. 
3 | 4 | # User-specific files 5 | *.suo 6 | *.user 7 | *.userosscache 8 | *.sln.docstates 9 | 10 | # User-specific files (MonoDevelop/Xamarin Studio) 11 | *.userprefs 12 | 13 | # Build results 14 | [Dd]ebug/ 15 | [Dd]ebugPublic/ 16 | [Rr]elease/ 17 | [Rr]eleases/ 18 | x64/ 19 | x86/ 20 | bld/ 21 | [Bb]in/ 22 | [Oo]bj/ 23 | [Ll]og/ 24 | 25 | # Visual Studio 2015 cache/options directory 26 | .vs/ 27 | # Uncomment if you have tasks that create the project's static files in wwwroot 28 | #wwwroot/ 29 | 30 | # MSTest test Results 31 | [Tt]est[Rr]esult*/ 32 | [Bb]uild[Ll]og.* 33 | 34 | # NUNIT 35 | *.VisualState.xml 36 | TestResult.xml 37 | 38 | # Build Results of an ATL Project 39 | [Dd]ebugPS/ 40 | [Rr]eleasePS/ 41 | dlldata.c 42 | 43 | # DNX 44 | project.lock.json 45 | artifacts/ 46 | 47 | *_i.c 48 | *_p.c 49 | *_i.h 50 | *.ilk 51 | *.meta 52 | *.obj 53 | *.pch 54 | *.pdb 55 | *.pgc 56 | *.pgd 57 | *.rsp 58 | *.sbr 59 | *.tlb 60 | *.tli 61 | *.tlh 62 | *.tmp 63 | *.tmp_proj 64 | *.log 65 | *.vspscc 66 | *.vssscc 67 | .builds 68 | *.pidb 69 | *.svclog 70 | *.scc 71 | 72 | # Chutzpah Test files 73 | _Chutzpah* 74 | 75 | # Visual C++ cache files 76 | ipch/ 77 | *.aps 78 | *.ncb 79 | *.opendb 80 | *.opensdf 81 | *.sdf 82 | *.cachefile 83 | *.VC.db 84 | *.VC.VC.opendb 85 | 86 | # Visual Studio profiler 87 | *.psess 88 | *.vsp 89 | *.vspx 90 | *.sap 91 | 92 | # TFS 2012 Local Workspace 93 | $tf/ 94 | 95 | # Guidance Automation Toolkit 96 | *.gpState 97 | 98 | # ReSharper is a .NET coding add-in 99 | _ReSharper*/ 100 | *.[Rr]e[Ss]harper 101 | *.DotSettings.user 102 | 103 | # JustCode is a .NET coding add-in 104 | .JustCode 105 | 106 | # TeamCity is a build add-in 107 | _TeamCity* 108 | 109 | # DotCover is a Code Coverage Tool 110 | *.dotCover 111 | 112 | # NCrunch 113 | _NCrunch_* 114 | .*crunch*.local.xml 115 | nCrunchTemp_* 116 | 117 | # MightyMoose 118 | *.mm.* 119 | AutoTest.Net/ 120 | 121 | # Web workbench (sass) 122 | .sass-cache/ 123 | 124 | # Installshield output 
folder 125 | [Ee]xpress/ 126 | 127 | # DocProject is a documentation generator add-in 128 | DocProject/buildhelp/ 129 | DocProject/Help/*.HxT 130 | DocProject/Help/*.HxC 131 | DocProject/Help/*.hhc 132 | DocProject/Help/*.hhk 133 | DocProject/Help/*.hhp 134 | DocProject/Help/Html2 135 | DocProject/Help/html 136 | 137 | # Click-Once directory 138 | publish/ 139 | 140 | # Publish Web Output 141 | *.[Pp]ublish.xml 142 | *.azurePubxml 143 | # TODO: Comment the next line if you want to checkin your web deploy settings 144 | # but database connection strings (with potential passwords) will be unencrypted 145 | *.pubxml 146 | *.publishproj 147 | 148 | # Microsoft Azure Web App publish settings. Comment the next line if you want to 149 | # checkin your Azure Web App publish settings, but sensitive information contained 150 | # in these scripts will be unencrypted 151 | PublishScripts/ 152 | 153 | # NuGet Packages 154 | *.nupkg 155 | # The packages folder can be ignored because of Package Restore 156 | **/packages/* 157 | # except build/, which is used as an MSBuild target. 
158 | !**/packages/build/ 159 | # Uncomment if necessary however generally it will be regenerated when needed 160 | #!**/packages/repositories.config 161 | # NuGet v3's project.json files produce more ignorable files 162 | *.nuget.props 163 | *.nuget.targets 164 | 165 | # Microsoft Azure Build Output 166 | csx/ 167 | *.build.csdef 168 | 169 | # Microsoft Azure Emulator 170 | ecf/ 171 | rcf/ 172 | 173 | # Windows Store app package directories and files 174 | AppPackages/ 175 | BundleArtifacts/ 176 | Package.StoreAssociation.xml 177 | _pkginfo.txt 178 | 179 | # Visual Studio cache files 180 | # files ending in .cache can be ignored 181 | *.[Cc]ache 182 | # but keep track of directories ending in .cache 183 | !*.[Cc]ache/ 184 | 185 | # Others 186 | ClientBin/ 187 | ~$* 188 | *~ 189 | *.dbmdl 190 | *.dbproj.schemaview 191 | *.pfx 192 | *.publishsettings 193 | node_modules/ 194 | orleans.codegen.cs 195 | 196 | # Since there are multiple workflows, uncomment next line to ignore bower_components 197 | # (https://github.com/github/gitignore/pull/1529#issuecomment-104372622) 198 | #bower_components/ 199 | 200 | # RIA/Silverlight projects 201 | Generated_Code/ 202 | 203 | # Backup & report files from converting an old project file 204 | # to a newer Visual Studio version.
Backup files are not needed, 205 | # because we have git ;-) 206 | _UpgradeReport_Files/ 207 | Backup*/ 208 | UpgradeLog*.XML 209 | UpgradeLog*.htm 210 | 211 | # SQL Server files 212 | *.mdf 213 | *.ldf 214 | 215 | # Business Intelligence projects 216 | *.rdl.data 217 | *.bim.layout 218 | *.bim_*.settings 219 | 220 | # Microsoft Fakes 221 | FakesAssemblies/ 222 | 223 | # GhostDoc plugin setting file 224 | *.GhostDoc.xml 225 | 226 | # Node.js Tools for Visual Studio 227 | .ntvs_analysis.dat 228 | 229 | # Visual Studio 6 build log 230 | *.plg 231 | 232 | # Visual Studio 6 workspace options file 233 | *.opt 234 | 235 | # Visual Studio LightSwitch build output 236 | **/*.HTMLClient/GeneratedArtifacts 237 | **/*.DesktopClient/GeneratedArtifacts 238 | **/*.DesktopClient/ModelManifest.xml 239 | **/*.Server/GeneratedArtifacts 240 | **/*.Server/ModelManifest.xml 241 | _Pvt_Extensions 242 | 243 | # Paket dependency manager 244 | .paket/paket.exe 245 | paket-files/ 246 | 247 | # FAKE - F# Make 248 | .fake/ 249 | 250 | # JetBrains Rider 251 | .idea/ 252 | *.sln.iml 253 | 254 | # Blockade 255 | .blockade/ 256 | 257 | # python 258 | venv*/ 259 | vr/ 260 | __pycache__ -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/net/RabbitConsumer.sln: -------------------------------------------------------------------------------- 1 |  2 | Microsoft Visual Studio Solution File, Format Version 12.00 3 | # Visual Studio 15 4 | VisualStudioVersion = 15.0.27703.2000 5 | MinimumVisualStudioVersion = 10.0.40219.1 6 | Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "RabbitConsumer", "RabbitConsumer\RabbitConsumer.csproj", "{9B978CCA-49D0-436B-9491-81CB4B104BFF}" 7 | EndProject 8 | Global 9 | GlobalSection(SolutionConfigurationPlatforms) = preSolution 10 | Debug|Any CPU = Debug|Any CPU 11 | Release|Any CPU = Release|Any CPU 12 | EndGlobalSection 13 | GlobalSection(ProjectConfigurationPlatforms) = postSolution 14 |
{9B978CCA-49D0-436B-9491-81CB4B104BFF}.Debug|Any CPU.ActiveCfg = Debug|Any CPU 15 | {9B978CCA-49D0-436B-9491-81CB4B104BFF}.Debug|Any CPU.Build.0 = Debug|Any CPU 16 | {9B978CCA-49D0-436B-9491-81CB4B104BFF}.Release|Any CPU.ActiveCfg = Release|Any CPU 17 | {9B978CCA-49D0-436B-9491-81CB4B104BFF}.Release|Any CPU.Build.0 = Release|Any CPU 18 | EndGlobalSection 19 | GlobalSection(SolutionProperties) = preSolution 20 | HideSolutionNode = FALSE 21 | EndGlobalSection 22 | GlobalSection(ExtensibilityGlobals) = postSolution 23 | SolutionGuid = {9BFB2140-A073-4537-AB94-9122E9DFE98F} 24 | EndGlobalSection 25 | EndGlobal 26 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/net/RabbitConsumer/ClientTask.cs: -------------------------------------------------------------------------------- 1 | using System.Threading; 2 | using System.Threading.Tasks; 3 | 4 | namespace RabbitConsumer 5 | { 6 | public class ClientTask 7 | { 8 | public CancellationTokenSource Cts { get; set; } 9 | public Task Client { get; set; } 10 | } 11 | } 12 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/net/RabbitConsumer/Program.cs: -------------------------------------------------------------------------------- 1 | using Microsoft.Extensions.Configuration; 2 | using RabbitMQ.Client; 3 | using RabbitMQ.Client.Events; 4 | using Rebalanser.Core; 5 | using Rebalanser.SqlServer; 6 | using System; 7 | using System.Collections.Generic; 8 | using System.Linq; 9 | using System.Text; 10 | using System.Threading; 11 | using System.Threading.Tasks; 12 | 13 | namespace RabbitConsumer 14 | { 15 | class CmdArgumentException : Exception 16 | { 17 | public CmdArgumentException(string message) 18 | : base(message) 19 | {} 20 | } 21 | 22 | class Program 23 | { 24 | static void Main(string[] args) 25 | { 26 | try 27 | { 28 | var builder = new ConfigurationBuilder().AddCommandLine(args); 29 | 
IConfigurationRoot configuration = builder.Build(); 30 | 31 | string mode = GetMandatoryArg(configuration, "Mode"); 32 | if (mode == "publish") 33 | { 34 | var exchange = GetMandatoryArg(configuration, "Exchange"); 35 | var stateCount = int.Parse(GetMandatoryArg(configuration, "Keys")); 36 | var messageCount = int.Parse(GetMandatoryArg(configuration, "Messages")); 37 | PublishSequence(exchange, stateCount, messageCount); 38 | } 39 | else if (mode == "input") 40 | { 41 | string consumerGroup = GetMandatoryArg(configuration, "Group"); 42 | string outputQueue = GetMandatoryArg(configuration, "OutQueue"); 43 | int minProcessingMs = 0; 44 | int maxProcessingMs = 0; 45 | if (configuration["MinMs"] != null) 46 | { 47 | minProcessingMs = int.Parse(GetMandatoryArg(configuration, "MinMs")); 48 | maxProcessingMs = int.Parse(GetMandatoryArg(configuration, "MaxMs")); 49 | } 50 | RunRebalanserAsync(consumerGroup, outputQueue, minProcessingMs, maxProcessingMs).Wait(); 51 | } 52 | else if (mode == "output") 53 | { 54 | 55 | StartConsumingAndPrinting(GetMandatoryArg(configuration, "Queue")); 56 | } 57 | else 58 | { 59 | Console.WriteLine("Unknown command"); 60 | } 61 | } 62 | catch(CmdArgumentException ex) 63 | { 64 | Console.WriteLine(ex.Message); 65 | } 66 | catch(Exception ex) 67 | { 68 | Console.WriteLine(ex.ToString()); 69 | } 70 | } 71 | 72 | static string GetMandatoryArg(IConfiguration configuration, string argName) 73 | { 74 | var value = configuration[argName]; 75 | if (string.IsNullOrEmpty(value)) 76 | throw new CmdArgumentException($"No argument {argName}"); 77 | 78 | return value; 79 | } 80 | 81 | private static List<ClientTask> clientTasks; 82 | 83 | private static void PublishSequence(string exchange, int stateCount, int messageCount) 84 | { 85 | var states = new string[] { "a", "b", "c", "d", "e", "f", "g", "h", "i", "j" }; 86 | var stateIndex = 0; 87 | var value = 1; 88 | 89 | try 90 | { 91 | var factory = new ConnectionFactory() { HostName = "localhost" }; 92 | var
connection = factory.CreateConnection(); 93 | var channel = connection.CreateModel(); 94 | 95 | try 96 | { 97 | while(value <= messageCount) 98 | { 99 | var message = $"{states[stateIndex]}={value}"; 100 | var body = Encoding.UTF8.GetBytes(message); 101 | channel.BasicPublish(exchange, stateIndex.ToString(), null, body); 102 | 103 | stateIndex++; 104 | if (stateIndex == stateCount) 105 | { 106 | stateIndex = 0; 107 | value++; 108 | } 109 | 110 | } 111 | } 112 | finally 113 | { 114 | channel.Close(); 115 | connection.Dispose(); 116 | } 117 | } 118 | catch (Exception ex) 119 | { 120 | LogError(ex.ToString()); 121 | } 122 | 123 | LogInfo("Messages sent"); 124 | } 125 | 126 | private static async Task RunRebalanserAsync(string consumerGroup, string outputQueue, int minProcessingMs, int maxProcessingMs) 127 | { 128 | Providers.Register(new SqlServerProvider("Server=(local);Database=RabbitMqScaling;Trusted_Connection=true;")); 129 | clientTasks = new List<ClientTask>(); 130 | 131 | using (var context = new RebalanserContext()) 132 | { 133 | context.OnAssignment += (sender, args) => 134 | { 135 | var queues = context.GetAssignedResources(); 136 | foreach (var queue in queues) 137 | { 138 | StartConsumingAndPublishing(queue, outputQueue, minProcessingMs, maxProcessingMs); 139 | } 140 | }; 141 | 142 | context.OnCancelAssignment += (sender, args) => 143 | { 144 | LogInfo("Consumer subscription cancelled"); 145 | StopAllConsumption(); 146 | }; 147 | 148 | context.OnError += (sender, args) => 149 | { 150 | LogInfo($"Error: {args.Message}, automatic recovery set to: {args.AutoRecoveryEnabled}, Exception: {args.Exception.Message}"); 151 | }; 152 | 153 | await context.StartAsync(consumerGroup, new ContextOptions() { AutoRecoveryOnError = true, RestartDelay = TimeSpan.FromSeconds(30) }); 154 | 155 | Console.WriteLine("Press enter to shutdown"); 156 | while (!Console.KeyAvailable) 157 | { 158 | Thread.Sleep(100); 159 | } 160 | 161 | StopAllConsumption(); 162 | Task.WaitAll(clientTasks.Select(x
=> x.Client).ToArray()); 163 | } 164 | } 165 | 166 | private static void StartConsumingAndPublishing(string queueName, string outputQueue, int minProcessingMs, int maxProcessingMs) 167 | { 168 | LogInfo("Subscription started for queue: " + queueName); 169 | var cts = new CancellationTokenSource(); 170 | var rand = new Random(Guid.NewGuid().GetHashCode()); 171 | 172 | var task = Task.Factory.StartNew(() => 173 | { 174 | try 175 | { 176 | var factory = new ConnectionFactory() { HostName = "localhost" }; 177 | var connection = factory.CreateConnection(); 178 | var receiveChannel = connection.CreateModel(); 179 | var sendChannel = connection.CreateModel(); 180 | try 181 | { 182 | receiveChannel.BasicQos(0, 1, false); 183 | var consumer = new EventingBasicConsumer(receiveChannel); 184 | consumer.Received += (model, ea) => 185 | { 186 | var body = ea.Body; 187 | var message = Encoding.UTF8.GetString(body); 188 | sendChannel.BasicPublish(exchange: "", 189 | routingKey: outputQueue, 190 | basicProperties: null, 191 | body: body); 192 | receiveChannel.BasicAck(ea.DeliveryTag, false); 193 | Console.WriteLine(message); 194 | 195 | if (maxProcessingMs > 0) 196 | { 197 | var waitMs = rand.Next(minProcessingMs, maxProcessingMs); 198 | Thread.Sleep(waitMs); 199 | } 200 | }; 201 | 202 | receiveChannel.BasicConsume(queue: queueName, 203 | autoAck: false, 204 | consumer: consumer); 205 | 206 | while (!cts.Token.IsCancellationRequested) 207 | Thread.Sleep(100); 208 | } 209 | finally 210 | { 211 | receiveChannel.Close(); 212 | sendChannel.Close(); 213 | connection.Dispose(); 214 | } 215 | } 216 | catch (Exception ex) 217 | { 218 | LogError(ex.ToString()); 219 | } 220 | 221 | if (cts.Token.IsCancellationRequested) 222 | { 223 | //LogInfo("Cancellation signal received for " + queueName); 224 | } 225 | else 226 | LogInfo("Consumer stopped for " + queueName); 227 | }, TaskCreationOptions.LongRunning); 228 | 229 | clientTasks.Add(new ClientTask() { Cts = cts, Client = task }); 230 | } 231 
| 232 | private static void StopAllConsumption() 233 | { 234 | foreach (var ct in clientTasks) 235 | { 236 | ct.Cts.Cancel(); 237 | } 238 | } 239 | 240 | private static void StartConsumingAndPrinting(string queueName) 241 | { 242 | var cts = new CancellationTokenSource(); 243 | 244 | try 245 | { 246 | var factory = new ConnectionFactory() { HostName = "localhost" }; 247 | var connection = factory.CreateConnection(); 248 | var receiveChannel = connection.CreateModel(); 249 | 250 | var states = new Dictionary<string, int>(); 251 | try 252 | { 253 | var consumer = new EventingBasicConsumer(receiveChannel); 254 | consumer.Received += (model, ea) => 255 | { 256 | var body = ea.Body; 257 | var message = Encoding.UTF8.GetString(body); 258 | var parts = message.Split("="); 259 | var key = parts[0]; 260 | var currValue = int.Parse(parts[1]); 261 | 262 | if (states.ContainsKey(key)) 263 | { 264 | var lastValue = states[key]; 265 | 266 | if (lastValue + 1 < currValue) 267 | Console.WriteLine($"{message} JUMP FORWARDS {currValue - lastValue}"); 268 | else if (currValue < lastValue) 269 | Console.WriteLine($"{message} JUMP BACKWARDS {lastValue - currValue}"); 270 | else 271 | Console.WriteLine(message); 272 | 273 | states[key] = currValue; 274 | } 275 | else 276 | { 277 | if(currValue > 1) 278 | Console.WriteLine($"{message} JUMP FORWARDS {currValue}"); 279 | else 280 | Console.WriteLine(message); 281 | 282 | states.Add(key, currValue); 283 | } 284 | 285 | receiveChannel.BasicAck(ea.DeliveryTag, false); 286 | }; 287 | 288 | receiveChannel.BasicConsume(queue: queueName, 289 | autoAck: false, 290 | consumer: consumer); 291 | 292 | while (!cts.Token.IsCancellationRequested) 293 | Thread.Sleep(100); 294 | } 295 | finally 296 | { 297 | receiveChannel.Close(); 298 | connection.Dispose(); 299 | } 300 | } 301 | catch (Exception ex) 302 | { 303 | LogError(ex.ToString()); 304 | } 305 | 306 | if (cts.Token.IsCancellationRequested) 307 | LogInfo("Cancellation signal received for " + queueName); 308 |
else 309 | LogInfo("Consumer stopped for " + queueName); 310 | } 311 | 312 | private static void LogInfo(string text) 313 | { 314 | Console.WriteLine($"{DateTime.Now.ToString("hh:mm:ss,fff")}: INFO : {text}"); 315 | } 316 | 317 | private static void LogError(string text) 318 | { 319 | Console.WriteLine($"{DateTime.Now.ToString("hh:mm:ss,fff")}: ERROR : {text}"); 320 | } 321 | } 322 | } 323 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/net/RabbitConsumer/RabbitConsumer.csproj: -------------------------------------------------------------------------------- 1 | <Project Sdk="Microsoft.NET.Sdk"> 2 | 3 | <PropertyGroup> 4 | <OutputType>Exe</OutputType> 5 | <TargetFramework>netcoreapp2.0</TargetFramework> 6 | </PropertyGroup> 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | </Project> -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/client/command_args.py: -------------------------------------------------------------------------------- 1 | 2 | def get_args(args): 3 | args_dict = dict() 4 | index = 1 5 | while index < len(args): 6 | key = args[index] 7 | value = args[index+1] 8 | args_dict[key] = value 9 | index += 2 10 | 11 | return args_dict 12 | 13 | def get_mandatory_arg(args_dict, key): 14 | if key in args_dict: 15 | return args_dict[key] 16 | else: 17 | print(f"Missing mandatory argument {key}") 18 | exit(1) 19 | 20 | def get_optional_arg(args_dict, key, default_value): 21 | if key in args_dict: 22 | return args_dict[key] 23 | else: 24 | return default_value 25 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/client/consumer-dedup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import pika 3 | from pika import spec 4 | import time 5 | import datetime 6 | import sys 7 | import random 8 | import subprocess 9 | from command_args import get_args, get_mandatory_arg, get_optional_arg 10 | 11 | 12 | class RabbitConsumer: 13 |
connection = None 14 | receive_channel = None 15 | publish_channel = None 16 | queue_name = "" 17 | out_queue_name = "" 18 | processing_ms_min = 0 19 | processing_ms_max = 0 20 | # in a production system you would need a data structure that could expire items over time 21 | history = set() 22 | last_msg_time = datetime.datetime.now() 23 | 24 | def get_node_ip(self, node_name): 25 | bash_command = "bash ../cluster/get-node-ip.sh " + node_name 26 | process = subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) 27 | output, error = process.communicate() 28 | ip = output.decode('ascii').replace('\n', '') 29 | return ip 30 | 31 | def connect(self, node): 32 | ip = self.get_node_ip(node) 33 | credentials = pika.PlainCredentials('jack', 'jack') 34 | parameters = pika.ConnectionParameters(ip, 35 | 5672, 36 | '/', 37 | credentials) 38 | self.connection = pika.BlockingConnection(parameters) 39 | self.receive_channel = self.connection.channel() 40 | self.publish_channel = self.connection.channel() 41 | 42 | def callback(self, ch, method, properties, body): 43 | # probably I am running a different demo without restarting this consumer 44 | if (datetime.datetime.now() - self.last_msg_time).seconds > 5: 45 | self.history.clear() 46 | print("----------------------------------") 47 | 48 | if properties.correlation_id in self.history: 49 | print("Detected and ignored duplicate") 50 | ch.basic_ack(delivery_tag = method.delivery_tag) 51 | else: 52 | self.history.add(properties.correlation_id) 53 | self.publish_channel.basic_publish(exchange='', 54 | routing_key=self.out_queue_name, 55 | body=body) 56 | 57 | ch.basic_ack(delivery_tag = method.delivery_tag) 58 | 59 | if self.processing_ms_max > 0: 60 | wait_sec = float(random.randint(self.processing_ms_min, self.processing_ms_max) / 1000) 61 | time.sleep(wait_sec) 62 | 63 | self.last_msg_time = datetime.datetime.now() 64 | 65 | def consume(self, queue, out_queue, prefetch, processing_ms_min, processing_ms_max): 66 |
self.queue_name = queue 67 | self.out_queue_name = out_queue 68 | print(f"Consuming queue: {self.queue_name}") 69 | self.receive_channel.basic_qos(prefetch_count=prefetch) 70 | self.receive_channel.basic_consume(self.callback, 71 | queue=self.queue_name, 72 | no_ack=False) 73 | 74 | self.processing_ms_min = processing_ms_min 75 | self.processing_ms_max = processing_ms_max 76 | 77 | try: 78 | self.last_msg_time = datetime.datetime.now() 79 | self.receive_channel.start_consuming() 80 | except KeyboardInterrupt: 81 | self.disconnect() 82 | except Exception as ex: 83 | template = "An exception of type {0} occurred. Arguments:{1!r}" 84 | message = template.format(type(ex).__name__, ex.args) 85 | print(message) 86 | 87 | def disconnect(self): 88 | self.connection.close() 89 | 90 | args = get_args(sys.argv) 91 | 92 | connect_node = get_optional_arg(args, "--node", "rabbitmq1") 93 | queue = get_mandatory_arg(args, "--queue") 94 | out_queue = get_mandatory_arg(args, "--out-queue") 95 | prefetch = int(get_optional_arg(args, "--prefetch", "1")) 96 | processing_ms_min = int(get_optional_arg(args, "--min-ms", "0")) 97 | processing_ms_max = int(get_optional_arg(args, "--max-ms", "0")) 98 | print(f"Consuming queue: {queue} Writing to: {out_queue}") 99 | 100 | consumer = RabbitConsumer() 101 | consumer.connect(connect_node) 102 | consumer.consume(queue, out_queue, prefetch, processing_ms_min, processing_ms_max) 103 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/client/consumer.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import pika 3 | from pika import spec 4 | import time 5 | import datetime 6 | import sys 7 | import random 8 | import subprocess 9 | from command_args import get_args, get_mandatory_arg, get_optional_arg 10 | 11 | 12 | class RabbitConsumer: 13 |
connection = None 14 | receive_channel = None 15 | publish_channel = None 16 | queue_name = "" 17 | out_queue_name = "" 18 | processing_ms_min = 0 19 | processing_ms_max = 0 20 | # in a production system you would need a data structure that could expire items over time 21 | history = set() 22 | last_msg_time = datetime.datetime.now() 23 | dedup_enabled = False 24 | 25 | def get_node_ip(self, node_name): 26 | bash_command = "bash ../cluster/get-node-ip.sh " + node_name 27 | process = subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) 28 | output, error = process.communicate() 29 | ip = output.decode('ascii').replace('\n', '') 30 | return ip 31 | 32 | def connect(self, node): 33 | ip = self.get_node_ip(node) 34 | credentials = pika.PlainCredentials('jack', 'jack') 35 | parameters = pika.ConnectionParameters(ip, 36 | 5672, 37 | '/', 38 | credentials) 39 | self.connection = pika.BlockingConnection(parameters) 40 | self.receive_channel = self.connection.channel() 41 | self.publish_channel = self.connection.channel() 42 | 43 | def callback(self, ch, method, properties, body): 44 | # probably I am running a different demo without restarting this consumer 45 | if (datetime.datetime.now() - self.last_msg_time).seconds > 5: 46 | self.history.clear() 47 | 48 | if self.dedup_enabled and properties.correlation_id in self.history: 49 | print("Detected and ignored duplicate") 50 | ch.basic_ack(delivery_tag = method.delivery_tag) 51 | else: 52 | self.history.add(properties.correlation_id) 53 | self.publish_channel.basic_publish(exchange='', 54 | routing_key=self.out_queue_name, 55 | body=body) 56 | 57 | ch.basic_ack(delivery_tag = method.delivery_tag) 58 | 59 | if self.processing_ms_max > 0: 60 | wait_sec = float(random.randint(self.processing_ms_min, self.processing_ms_max) / 1000) 61 | time.sleep(wait_sec) 62 | 63 | self.last_msg_time = datetime.datetime.now() 64 | 65 | def consume(self, queue, out_queue, prefetch, processing_ms_min, processing_ms_max, 
dedup_enabled): 66 | self.queue_name = queue 67 | self.out_queue_name = out_queue 68 | self.dedup_enabled = dedup_enabled 69 | self.receive_channel.basic_qos(prefetch_count=prefetch) 70 | self.receive_channel.basic_consume(self.callback, 71 | queue=self.queue_name, 72 | no_ack=False) 73 | 74 | self.processing_ms_min = processing_ms_min 75 | self.processing_ms_max = processing_ms_max 76 | 77 | try: 78 | self.receive_channel.start_consuming() 79 | except KeyboardInterrupt: 80 | self.disconnect() 81 | except Exception as ex: 82 | template = "An exception of type {0} occurred. Arguments:{1!r}" 83 | message = template.format(type(ex).__name__, ex.args) 84 | print(message) 85 | 86 | def disconnect(self): 87 | self.connection.close() 88 | 89 | args = get_args(sys.argv) 90 | 91 | connect_node = get_optional_arg(args, "--node", "rabbitmq1") 92 | queue = get_mandatory_arg(args, "--in-queue") 93 | out_queue = get_mandatory_arg(args, "--out-queue") 94 | prefetch = int(get_optional_arg(args, "--prefetch", "1")) 95 | processing_ms_min = int(get_optional_arg(args, "--min-ms", "0")) 96 | processing_ms_max = int(get_optional_arg(args, "--max-ms", "0")) 97 | dedup_enabled = get_optional_arg(args, "--dedup", "false") == "true" 98 | 99 | print(f"Consuming queue: {queue} Writing to: {out_queue}") 100 | 101 | consumer = RabbitConsumer() 102 | consumer.connect(connect_node) 103 | consumer.consume(queue, out_queue, prefetch, processing_ms_min, processing_ms_max, dedup_enabled) 104 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/client/fire-and-forget.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import pika 3 | import sys 4 | import time 5 | import subprocess 6 | 7 | target_node = sys.argv[1] 8 | count = int(sys.argv[2]) 9 | queue = sys.argv[3] 10 | 11 | node_names = ['rabbitmq1', 'rabbitmq2', 'rabbitmq3'] 12 | nodes = list() 13 | 14 | def 
get_node_index(node_name): 15 | index = 0 16 | for node in node_names: 17 | if node == node_name: 18 | return index 19 | 20 | index += 1 21 | 22 | return -1 23 | 24 | def connect(): 25 | global target_node, nodes, curr_node 26 | curr_node = get_node_index(target_node) 27 | while True: 28 | try: 29 | credentials = pika.credentials.PlainCredentials('jack', 'jack') 30 | connection = pika.BlockingConnection(pika.ConnectionParameters(host=nodes[curr_node], port=5672, credentials=credentials)) 31 | channel = connection.channel() 32 | print("Connected to " + nodes[curr_node]) 33 | return channel 34 | except: 35 | curr_node += 1 36 | if curr_node > 2: 37 | print("Could not connect. Trying again in 5 seconds") 38 | time.sleep(5) 39 | curr_node = 0 40 | 41 | def get_node_ip(node_name): 42 | bash_command = "bash ../cluster/get-node-ip.sh " + node_name 43 | process = subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) 44 | output, error = process.communicate() 45 | ip = output.decode('ascii').replace('\n', '') 46 | return ip 47 | 48 | for node_name in node_names: 49 | nodes.append(get_node_ip(node_name)) 50 | 51 | channel = connect() 52 | 53 | success = 0 54 | fail = 0 55 | sent = 0 56 | 57 | msg = "jkhfjhjhsjsdhusdhfyfjkw4rtjn23jrnw3jkrjkwefbjsdbfjksdfsdbfbwdjhfbwejkbrjk23rjkwejkfwejkfsajkfsjkdfjksdfjksdbfjksdfjksejkdfjksdhfuiowehf3478y7834uhfuwenfnweuih34789hrtui234enfunqwef8934jhtui42398fh3uiht" 58 | 59 | x = 0 60 | while x < count: 61 | try: 62 | if channel.basic_publish(exchange='', 63 | routing_key=queue, 64 | body=msg, 65 | properties=pika.BasicProperties(content_type='text/plain', 66 | delivery_mode=2)): 67 | success += 1 68 | else: 69 | fail += 1 70 | x += 1 71 | sent += 1 72 | if sent % 10000 == 0: 73 | print("Success: " + str(success) + " Failed: " + str(fail)) 74 | except pika.exceptions.ConnectionClosed: 75 | print("Connection closed.") 76 | time.sleep(5) 77 | channel = connect() 78 | # x is not incremented on this path, so the failed message is retried 79 | time.sleep(10) 80 | print("Sent " + str(sent) + "
messages") 81 | res = channel.queue_declare(queue=queue, durable=True, arguments={"x-queue-mode": "lazy"}) 82 | message_count = res.method.message_count 83 | print(str(message_count) + " messages in the queue") 84 | print(str(success - message_count) + " messages lost") 85 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/client/orders_producer.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import pika 3 | from pika import spec 4 | import sys 5 | import time 6 | import subprocess 7 | import datetime 8 | 9 | connect_node = sys.argv[1] 10 | node_count = int(sys.argv[2]) 11 | count = int(sys.argv[3]) 12 | client_count = int(sys.argv[4]) 13 | 14 | terminate = False 15 | exit_triggered = False 16 | last_ack_time = datetime.datetime.now() 17 | last_ack = 0 18 | 19 | clients = [] 20 | for i in range(1, client_count+1): 21 | clients.append(f"Client{i}") 22 | 23 | node_names = [] 24 | 25 | curr_pos = 0 26 | pending_messages = list() 27 | pending_acks = list() 28 | pos_acks = 0 29 | neg_acks = 0 30 | 31 | for i in range(1, node_count+1): 32 | node_names.append(f"rabbitmq{i}") 33 | nodes = list() 34 | 35 | def get_node_index(node_name): 36 | index = 0 37 | for node in node_names: 38 | if node == node_name: 39 | return index 40 | 41 | index += 1 42 | 43 | return -1 44 | 45 | def on_open(connection): 46 | connection.channel(on_channel_open) 47 | print("Connection open") 48 | 49 | def on_channel_open(chan): 50 | global connection, channel 51 | chan.confirm_delivery(on_delivery_confirmation) 52 | channel = chan 53 | publish_messages() 54 | 55 | # this is ignoring the possibility of ack + return 56 | # do not use in production code 57 | def on_delivery_confirmation(frame): 58 | global last_ack_time, pending_messages, pos_acks, neg_acks, last_ack, count 59 | 60 | if isinstance(frame.method, spec.Basic.Ack) or isinstance(frame.method, 
spec.Basic.Nack): 61 | if frame.method.multiple == True: 62 | acks = 0 63 | messages_to_remove = [item for item in pending_messages if item <= frame.method.delivery_tag] 64 | for val in messages_to_remove: 65 | try: 66 | pending_messages.remove(val) 67 | except: 68 | print(f"Could not remove multiple flag message: {val}") 69 | acks += 1 70 | else: 71 | try: 72 | pending_messages.remove(frame.method.delivery_tag) 73 | except: 74 | print(f"Could not remove non-multiple flag message: {frame.method.delivery_tag}") 75 | acks = 1 76 | 77 | if isinstance(frame.method, spec.Basic.Ack): 78 | pos_acks += acks 79 | elif isinstance(frame.method, spec.Basic.Nack): 80 | neg_acks += acks 81 | elif isinstance(frame.method, spec.Basic.Return): 82 | print("Undeliverable message") 83 | 84 | 85 | curr_ack = int((pos_acks + neg_acks) / 10000) 86 | if curr_ack > last_ack: 87 | print(f"Pos acks: {pos_acks} Neg acks: {neg_acks}") 88 | last_ack = curr_ack 89 | 90 | if (pos_acks + neg_acks) >= count: 91 | print(f"Final Count => Pos acks: {pos_acks} Neg acks: {neg_acks}") 92 | connection.close() 93 | exit(0) 94 | 95 | def publish_messages(): 96 | global connection, channel, count, clients, client_count, pending_messages, curr_pos 97 | 98 | client_index = 0 99 | while curr_pos < count: 100 | if channel.is_open: 101 | curr_pos += 1 102 | msg = f"Client {clients[client_index]} Num: {curr_pos}" 103 | channel.basic_publish(exchange='orders', 104 | routing_key=str(client_index), 105 | body=msg, 106 | properties=pika.BasicProperties(content_type='text/plain', 107 | delivery_mode=2)) 108 | 109 | # channel.basic_publish(exchange='', 110 | # routing_key='orders001', 111 | # body=msg, 112 | # properties=pika.BasicProperties(content_type='text/plain', 113 | # delivery_mode=2)) 114 | 115 | pending_messages.append(curr_pos) 116 | 117 | if curr_pos % 1000 == 0: 118 | if len(pending_messages) > 10000: 119 | #print("Reached in-flight limit, pausing publishing for 2 seconds") 120 | if channel.is_open: 121 | 
connection.add_timeout(2, publish_messages) 122 | break 123 | 124 | client_index += 1 125 | if client_index == client_count: 126 | client_index = 0 127 | 128 | else: 129 | print("Channel closed, ceasing publishing") 130 | break 131 | 132 | def on_close(connection, reason_code, reason_text): 133 | connection.ioloop.stop() 134 | print("Connection closed. Reason: " + reason_text) 135 | 136 | def reconnect(): 137 | print("Reconnect called") 138 | global curr_node 139 | curr_node += 1 140 | if curr_node > 2: 141 | print("Failed to connect. Will retry in 5 seconds") 142 | time.sleep(5) 143 | curr_node = 0 144 | 145 | connect() 146 | 147 | def connect(): 148 | global connection, curr_node, terminate 149 | print("Attempting to connect to " + nodes[curr_node]) 150 | parameters = pika.URLParameters('amqp://jack:jack@' + nodes[curr_node] + ':5672/%2F') 151 | connection = pika.SelectConnection(parameters=parameters, 152 | on_open_callback=on_open, 153 | on_open_error_callback=reconnect, 154 | on_close_callback=on_close) 155 | 156 | try: 157 | connection.ioloop.start() 158 | except KeyboardInterrupt: 159 | connection.close() 160 | connection.ioloop.stop() 161 | terminate = True 162 | except Exception as ex: 163 | template = "An exception of type {0} occurred. 
Arguments:{1!r}" 164 | message = template.format(type(ex).__name__, ex.args) 165 | print(message) 166 | 167 | print("Disconnected") 168 | 169 | def get_node_ip(node_name): 170 | bash_command = "bash ../cluster/get-node-ip.sh " + node_name 171 | process = subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) 172 | output, error = process.communicate() 173 | ip = output.decode('ascii').replace('\n', '') 174 | return ip 175 | 176 | for node_name in node_names: 177 | nodes.append(get_node_ip(node_name)) 178 | 179 | curr_node = get_node_index(connect_node) 180 | 181 | # keep running until the terminate signal has been received 182 | while terminate == False: 183 | try: 184 | connect() 185 | except Exception as ex: 186 | template = "An exception of type {0} occurred. Arguments:{1!r}" 187 | message = template.format(type(ex).__name__, ex.args) 188 | print(message) 189 | 190 | if terminate == False: 191 | reconnect() 192 | 193 | # sec_since_last_ack = (datetime.datetime.now() - last_ack_time).seconds 194 | # while sec_since_last_ack < 15: 195 | # sec_since_last_ack = (datetime.datetime.now() - last_ack_time).seconds 196 | # time.sleep(1) 197 | 198 | # # this is not true, but because of this bug https://github.com/pika/pika/issues/1137 199 | # # I am unable to know when all acks have been received 200 | # print(f"Pos acks: {pos_acks} Neg acks: {neg_acks}") 201 | # connection.close() 202 | # exit(0) -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/client/output-consumer.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import pika 3 | import sys 4 | import time 5 | import subprocess 6 | import datetime 7 | import threading 8 | from command_args import get_args, get_mandatory_arg, get_optional_arg 9 | 10 | def get_node_ip(node_name): 11 | bash_command = "bash ../cluster/get-node-ip.sh " + node_name 12 | process = 
subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) 13 | output, error = process.communicate() 14 | ip = output.decode('ascii').replace('\n', '') 15 | return ip 16 | 17 | def monitor(): 18 | global keys, last_msg_time 19 | 20 | while(True): 21 | # probably I am running a different demo without restarting this consumer 22 | if len(keys) > 0 and (datetime.datetime.now() - last_msg_time).seconds > 2: 23 | final_state = "" 24 | for key, value in keys.items(): 25 | final_state += key + "=" + str(value) + " " 26 | print(final_state) 27 | keys.clear() 28 | history.clear() 29 | print("----------------------------------") 30 | time.sleep(1) 31 | 32 | def callback(ch, method, properties, body): 33 | global keys, last_msg_time 34 | 35 | # # probably I am running a different demo without restarting this consumer 36 | # if (datetime.datetime.now() - last_msg_time).seconds > 2: 37 | # final_state = "" 38 | # for key, value in keys: 39 | # final_state += key + "=" + value + " " 40 | # print(final_state) 41 | # keys.clear() 42 | # history.clear() 43 | # print("----------------------------------") 44 | 45 | body_str = str(body, "utf-8") 46 | parts = body_str.split('=') 47 | key = parts[0] 48 | curr_value = int(parts[1]) 49 | 50 | duplicate = "" 51 | if body_str in history: 52 | duplicate = " DUPLICATE" 53 | history.add(body_str) 54 | 55 | if key in keys: 56 | last_value = keys[key] 57 | 58 | if last_value + 1 < curr_value: 59 | jump = curr_value - last_value 60 | print(f"{body} Jump forward {jump} {duplicate}") 61 | elif last_value > curr_value: 62 | jump = last_value - curr_value 63 | print(f"{body} Jump back {jump} {duplicate}") 64 | else: 65 | print(f"{body} {duplicate}") 66 | else: 67 | if curr_value == 1: 68 | print(f"{body} {duplicate}") 69 | else: 70 | print(f"{body} Jump forward {curr_value} {duplicate}") 71 | 72 | keys[key] = curr_value 73 | 74 | ch.basic_ack(delivery_tag = method.delivery_tag) 75 | last_msg_time = datetime.datetime.now() 76 | 77 | args = 
get_args(sys.argv) 78 | connect_node = get_optional_arg(args, "--node", "rabbitmq1") #sys.argv[1] 79 | ip = get_node_ip(connect_node) 80 | 81 | queue = get_mandatory_arg(args, "--queue") 82 | 83 | keys = dict() 84 | history = set() 85 | last_msg_time = datetime.datetime.now() 86 | 87 | monitor_thread = threading.Thread(target=monitor) 88 | monitor_thread.start() 89 | 90 | credentials = pika.PlainCredentials('jack', 'jack') 91 | parameters = pika.ConnectionParameters(ip, 92 | 5672, 93 | '/', 94 | credentials) 95 | connection = pika.BlockingConnection(parameters) 96 | channel = connection.channel() 97 | 98 | channel.basic_qos(prefetch_count=1) 99 | channel.basic_consume(callback, 100 | queue=queue) 101 | 102 | try: 103 | channel.start_consuming() 104 | except KeyboardInterrupt: 105 | connection.close() 106 | except Exception as ex: 107 | template = "An exception of type {0} occurred. Arguments:{1!r}" 108 | message = template.format(type(ex).__name__, ex.args) 109 | print(message) 110 | connection.close() -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/client/send-sequence.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import pika 3 | from pika import spec 4 | import sys 5 | import time 6 | import subprocess 7 | import datetime 8 | 9 | connect_node = sys.argv[1] 10 | node_count = int(sys.argv[2]) 11 | count = int(sys.argv[3]) 12 | queue = sys.argv[4] 13 | 14 | terminate = False 15 | exit_triggered = False 16 | last_ack_time = datetime.datetime.now() 17 | last_ack = 0 18 | 19 | node_names = [] 20 | 21 | curr_pos = 0 22 | pending_messages = list() 23 | pending_acks = list() 24 | pos_acks = 0 25 | neg_acks = 0 26 | 27 | for i in range(1, node_count+1): 28 | node_names.append(f"rabbitmq{i}") 29 | nodes = list() 30 | 31 | def get_node_index(node_name): 32 | index = 0 33 | for node in node_names: 34 | if node == node_name: 35 | return 
index 36 | 37 | index += 1 38 | 39 | return -1 40 | 41 | def on_open(connection): 42 | connection.channel(on_channel_open) 43 | print("Connection open") 44 | 45 | def on_channel_open(chan): 46 | global connection, channel 47 | chan.confirm_delivery(on_delivery_confirmation) 48 | channel = chan 49 | publish_messages() 50 | 51 | # this is ignoring the possibility of ack + return 52 | # do not use in production code 53 | def on_delivery_confirmation(frame): 54 | global last_ack_time, pending_messages, pos_acks, neg_acks, last_ack, count 55 | 56 | if isinstance(frame.method, spec.Basic.Ack) or isinstance(frame.method, spec.Basic.Nack): 57 | if frame.method.multiple == True: 58 | acks = 0 59 | messages_to_remove = [item for item in pending_messages if item <= frame.method.delivery_tag] 60 | for val in messages_to_remove: 61 | try: 62 | pending_messages.remove(val) 63 | except: 64 | print(f"Could not remove multiple flag message: {val}") 65 | acks += 1 66 | else: 67 | try: 68 | pending_messages.remove(frame.method.delivery_tag) 69 | except: 70 | print(f"Could not remove non-multiple flag message: {frame.method.delivery_tag}") 71 | acks = 1 72 | 73 | if isinstance(frame.method, spec.Basic.Ack): 74 | pos_acks += acks 75 | elif isinstance(frame.method, spec.Basic.Nack): 76 | neg_acks += acks 77 | elif isinstance(frame.method, spec.Basic.Return): 78 | print("Undeliverable message") 79 | 80 | 81 | curr_ack = int((pos_acks + neg_acks) / 10000) 82 | if curr_ack > last_ack: 83 | print(f"Pos acks: {pos_acks} Neg acks: {neg_acks}") 84 | last_ack = curr_ack 85 | 86 | if (pos_acks + neg_acks) >= count: 87 | print(f"Final Count => Pos acks: {pos_acks} Neg acks: {neg_acks}") 88 | connection.close() 89 | exit(0) 90 | 91 | def publish_messages(): 92 | global connection, channel, queue, count, pending_messages, curr_pos, state_index, val 93 | 94 | while curr_pos < count: 95 | if channel.is_open: 96 | curr_pos += 1 97 | body = f"{curr_pos}" 98 | channel.basic_publish(exchange='', 99 | 
routing_key=queue, 100 | body=body, 101 | properties=pika.BasicProperties(content_type='text/plain', 102 | delivery_mode=2)) 103 | 104 | pending_messages.append(curr_pos) 105 | 106 | if curr_pos % 1000 == 0: 107 | if len(pending_messages) > 10000: 108 | #print("Reached in-flight limit, pausing publishing for 2 seconds") 109 | if channel.is_open: 110 | connection.add_timeout(2, publish_messages) 111 | break 112 | 113 | else: 114 | print("Channel closed, ceasing publishing") 115 | break 116 | 117 | def on_close(connection, reason_code, reason_text): 118 | connection.ioloop.stop() 119 | print("Connection closed. Reason: " + reason_text) 120 | 121 | def reconnect(): 122 | print("Reconnect called") 123 | global curr_node 124 | curr_node += 1 125 | if curr_node > 2: 126 | print("Failed to connect. Will retry in 5 seconds") 127 | time.sleep(5) 128 | curr_node = 0 129 | 130 | connect() 131 | 132 | def connect(): 133 | global connection, curr_node, terminate 134 | print("Attempting to connect to " + nodes[curr_node]) 135 | parameters = pika.URLParameters('amqp://jack:jack@' + nodes[curr_node] + ':5672/%2F') 136 | connection = pika.SelectConnection(parameters=parameters, 137 | on_open_callback=on_open, 138 | on_open_error_callback=reconnect, 139 | on_close_callback=on_close) 140 | 141 | try: 142 | connection.ioloop.start() 143 | except KeyboardInterrupt: 144 | connection.close() 145 | connection.ioloop.stop() 146 | terminate = True 147 | except Exception as ex: 148 | template = "An exception of type {0} occurred. 
Arguments:{1!r}" 149 | message = template.format(type(ex).__name__, ex.args) 150 | print(message) 151 | 152 | print("Disconnected") 153 | 154 | def get_node_ip(node_name): 155 | bash_command = "bash ../cluster/get-node-ip.sh " + node_name 156 | process = subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) 157 | output, error = process.communicate() 158 | ip = output.decode('ascii').replace('\n', '') 159 | return ip 160 | 161 | for node_name in node_names: 162 | nodes.append(get_node_ip(node_name)) 163 | 164 | curr_node = get_node_index(connect_node) 165 | 166 | # keep running until the terminate signal has been received 167 | while terminate == False: 168 | try: 169 | connect() 170 | except Exception as ex: 171 | template = "An exception of type {0} occurred. Arguments:{1!r}" 172 | message = template.format(type(ex).__name__, ex.args) 173 | print(message) 174 | 175 | if terminate == False: 176 | reconnect() 177 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/client/send-state-updates-direct.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import pika 3 | from pika import spec 4 | import sys 5 | import time 6 | import subprocess 7 | import datetime 8 | import uuid 9 | import random 10 | from command_args import get_args, get_mandatory_arg, get_optional_arg 11 | 12 | args = get_args(sys.argv) 13 | 14 | connect_node = get_optional_arg(args, "--node", "rabbitmq1") 15 | node_count = int(get_optional_arg(args, "--cluster-size", "3")) 16 | queue = get_mandatory_arg(args, "--queue") 17 | count = int(get_mandatory_arg(args, "--msgs")) 18 | state_count = int(get_mandatory_arg(args, "--keys")) 19 | dup_rate = float(get_optional_arg(args, "--dup-rate", "0")) 20 | total = count * state_count 21 | 22 | if state_count > 10: 23 | print("Key count limit is 10") 24 | exit(1) 25 | 26 | terminate = False 27 | exit_triggered = False 28 
| last_ack_time = datetime.datetime.now() 29 | last_ack = 0 30 | 31 | node_names = [] 32 | 33 | curr_pos = 0 34 | pending_messages = list() 35 | pending_acks = list() 36 | pos_acks = 0 37 | neg_acks = 0 38 | state_index = 0 39 | states = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] 40 | val = 1 41 | 42 | for i in range(1, node_count+1): 43 | node_names.append(f"rabbitmq{i}") 44 | nodes = list() 45 | 46 | def get_node_index(node_name): 47 | index = 0 48 | for node in node_names: 49 | if node == node_name: 50 | return index 51 | 52 | index += 1 53 | 54 | return -1 55 | 56 | def on_open(connection): 57 | connection.channel(on_channel_open) 58 | print("Connection open") 59 | 60 | def on_channel_open(chan): 61 | global connection, channel 62 | chan.confirm_delivery(on_delivery_confirmation) 63 | channel = chan 64 | publish_messages() 65 | 66 | # this is ignoring the possibility of ack + return 67 | # do not use in production code 68 | def on_delivery_confirmation(frame): 69 | global last_ack_time, pending_messages, pos_acks, neg_acks, last_ack, count, total 70 | 71 | if isinstance(frame.method, spec.Basic.Ack) or isinstance(frame.method, spec.Basic.Nack): 72 | if frame.method.multiple == True: 73 | acks = 0 74 | messages_to_remove = [item for item in pending_messages if item <= frame.method.delivery_tag] 75 | for val in messages_to_remove: 76 | try: 77 | pending_messages.remove(val) 78 | except: 79 | print(f"Could not remove multiple flag message: {val}") 80 | acks += 1 81 | else: 82 | try: 83 | pending_messages.remove(frame.method.delivery_tag) 84 | except: 85 | print(f"Could not remove non-multiple flag message: {frame.method.delivery_tag}") 86 | acks = 1 87 | 88 | if isinstance(frame.method, spec.Basic.Ack): 89 | pos_acks += acks 90 | elif isinstance(frame.method, spec.Basic.Nack): 91 | neg_acks += acks 92 | elif isinstance(frame.method, spec.Basic.Return): 93 | print("Undeliverable message") 94 | 95 | 96 | curr_ack = int((pos_acks + neg_acks) / 10000) 97 | if 
curr_ack > last_ack: 98 | print(f"Pos acks: {pos_acks} Neg acks: {neg_acks}") 99 | last_ack = curr_ack 100 | 101 | if (pos_acks + neg_acks) >= total: 102 | print(f"Final Count => Pos acks: {pos_acks} Neg acks: {neg_acks}") 103 | connection.close() 104 | exit(0) 105 | 106 | def publish_messages(): 107 | global connection, channel, queue, count, pending_messages, curr_pos, states, state_count, state_index, val, dup_rate 108 | 109 | while curr_pos < total: 110 | if channel.is_open: 111 | curr_pos += 1 112 | body = f"{states[state_index]}={val}" 113 | corr_id = str(uuid.uuid4()) 114 | channel.basic_publish(exchange='', 115 | routing_key=queue, 116 | body=body, 117 | properties=pika.BasicProperties(content_type='text/plain', 118 | delivery_mode=2, 119 | correlation_id=corr_id)) 120 | 121 | # potentially send a duplicate if enabled 122 | if dup_rate > 0: 123 | if random.uniform(0, 1) < dup_rate: 124 | channel.basic_publish(exchange='', 125 | routing_key=queue, 126 | body=body, 127 | properties=pika.BasicProperties(content_type='text/plain', 128 | delivery_mode=2, 129 | correlation_id=corr_id)) 130 | 131 | pending_messages.append(curr_pos) 132 | 133 | state_index += 1 134 | if state_index == state_count: 135 | state_index = 0 136 | val += 1 137 | 138 | if curr_pos % 1000 == 0: 139 | if len(pending_messages) > 10000: 140 | #print("Reached in-flight limit, pausing publishing for 2 seconds") 141 | if channel.is_open: 142 | connection.add_timeout(2, publish_messages) 143 | break 144 | 145 | else: 146 | print("Channel closed, ceasing publishing") 147 | break 148 | 149 | def on_close(connection, reason_code, reason_text): 150 | connection.ioloop.stop() 151 | print("Connection closed. Reason: " + reason_text) 152 | 153 | def reconnect(): 154 | print("Reconnect called") 155 | global curr_node 156 | curr_node += 1 157 | if curr_node > 2: 158 | print("Failed to connect. 
Will retry in 5 seconds") 159 | time.sleep(5) 160 | curr_node = 0 161 | 162 | connect() 163 | 164 | def connect(): 165 | global connection, curr_node, terminate 166 | print("Attempting to connect to " + nodes[curr_node]) 167 | parameters = pika.URLParameters('amqp://jack:jack@' + nodes[curr_node] + ':5672/%2F') 168 | connection = pika.SelectConnection(parameters=parameters, 169 | on_open_callback=on_open, 170 | on_open_error_callback=reconnect, 171 | on_close_callback=on_close) 172 | 173 | try: 174 | connection.ioloop.start() 175 | except KeyboardInterrupt: 176 | connection.close() 177 | connection.ioloop.stop() 178 | terminate = True 179 | except Exception as ex: 180 | template = "An exception of type {0} occurred. Arguments:{1!r}" 181 | message = template.format(type(ex).__name__, ex.args) 182 | print(message) 183 | 184 | print("Disconnected") 185 | 186 | def get_node_ip(node_name): 187 | bash_command = "bash ../cluster/get-node-ip.sh " + node_name 188 | process = subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) 189 | output, error = process.communicate() 190 | ip = output.decode('ascii').replace('\n', '') 191 | return ip 192 | 193 | for node_name in node_names: 194 | nodes.append(get_node_ip(node_name)) 195 | 196 | curr_node = get_node_index(connect_node) 197 | 198 | # keep running until the terminate signal has been received 199 | while terminate == False: 200 | try: 201 | connect() 202 | except Exception as ex: 203 | template = "An exception of type {0} occurred. 
Arguments:{1!r}" 204 | message = template.format(type(ex).__name__, ex.args) 205 | print(message) 206 | 207 | if terminate == False: 208 | reconnect() 209 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/client/send-state-updates-hash-ex.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import pika 3 | from pika import spec 4 | import sys 5 | import time 6 | import subprocess 7 | import datetime 8 | import uuid 9 | import random 10 | from command_args import get_args, get_mandatory_arg, get_optional_arg 11 | 12 | args = get_args(sys.argv) 13 | 14 | connect_node = get_optional_arg(args, "--node", "rabbitmq1") 15 | node_count = int(get_optional_arg(args, "--cluster-size", "3")) 16 | exchange = get_mandatory_arg(args, "--ex") 17 | count = int(get_mandatory_arg(args, "--msgs")) 18 | state_count = int(get_mandatory_arg(args, "--keys")) 19 | dup_rate = float(get_optional_arg(args, "--dup-rate", "0")) 20 | total = count * state_count 21 | 22 | if state_count > 10: 23 | print("State count limit is 10") 24 | exit(1) 25 | 26 | terminate = False 27 | exit_triggered = False 28 | last_ack_time = datetime.datetime.now() 29 | last_ack = 0 30 | 31 | node_names = [] 32 | 33 | curr_pos = 0 34 | pending_messages = list() 35 | pending_acks = list() 36 | pos_acks = 0 37 | neg_acks = 0 38 | state_index = 0 39 | states = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] 40 | val = 1 41 | 42 | for i in range(1, node_count+1): 43 | node_names.append(f"rabbitmq{i}") 44 | nodes = list() 45 | 46 | def get_node_index(node_name): 47 | index = 0 48 | for node in node_names: 49 | if node == node_name: 50 | return index 51 | 52 | index +=1 53 | 54 | return -1 55 | 56 | def on_open(connection): 57 | connection.channel(on_channel_open) 58 | print("Connection open") 59 | 60 | def on_channel_open(chan): 61 | global connection, channel 62 | 
chan.confirm_delivery(on_delivery_confirmation) 63 | channel = chan 64 | publish_messages() 65 | 66 | # this is ignoring the possibility of ack + return 67 | # do not use in production code 68 | def on_delivery_confirmation(frame): 69 | global last_ack_time, pending_messages, pos_acks, neg_acks, last_ack, count, total 70 | 71 | if isinstance(frame.method, spec.Basic.Ack) or isinstance(frame.method, spec.Basic.Nack): 72 | if frame.method.multiple == True: 73 | acks = 0 74 | messages_to_remove = [item for item in pending_messages if item <= frame.method.delivery_tag] 75 | for val in messages_to_remove: 76 | try: 77 | pending_messages.remove(val) 78 | except: 79 | print(f"Could not remove multiple flag message: {val}") 80 | acks += 1 81 | else: 82 | try: 83 | pending_messages.remove(frame.method.delivery_tag) 84 | except: 85 | print(f"Could not remove non-multiple flag message: {frame.method.delivery_tag}") 86 | acks = 1 87 | 88 | if isinstance(frame.method, spec.Basic.Ack): 89 | pos_acks += acks 90 | elif isinstance(frame.method, spec.Basic.Nack): 91 | neg_acks += acks 92 | elif isinstance(frame.method, spec.Basic.Return): 93 | print("Undeliverable message") 94 | 95 | 96 | curr_ack = int((pos_acks + neg_acks) / 10000) 97 | if curr_ack > last_ack: 98 | print(f"Pos acks: {pos_acks} Neg acks: {neg_acks}") 99 | last_ack = curr_ack 100 | 101 | if (pos_acks + neg_acks) >= total: 102 | print(f"Final Count => Pos acks: {pos_acks} Neg acks: {neg_acks}") 103 | connection.close() 104 | exit(0) 105 | 106 | def publish_messages(): 107 | global connection, channel, exchange, count, pending_messages, curr_pos, states, state_count, state_index, val, dup_rate, total 108 | 109 | while curr_pos < total: 110 | if channel.is_open: 111 | curr_pos += 1 112 | body = f"{states[state_index]}={val}" 113 | corr_id = str(uuid.uuid4()) 114 | channel.basic_publish(exchange=exchange, 115 | routing_key=states[state_index], 116 | body=body, 117 | 
properties=pika.BasicProperties(content_type='text/plain', 118 | delivery_mode=2, 119 | correlation_id=corr_id)) 120 | 121 | # potentially send a duplicate if enabled 122 | if dup_rate > 0: 123 | if random.uniform(0, 1) < dup_rate: 124 | channel.basic_publish(exchange=exchange, 125 | routing_key=states[state_index], 126 | body=body, 127 | properties=pika.BasicProperties(content_type='text/plain', 128 | delivery_mode=2, 129 | correlation_id=corr_id)) 130 | 131 | pending_messages.append(curr_pos) 132 | 133 | state_index += 1 134 | if state_index == state_count: 135 | state_index = 0 136 | val += 1 137 | 138 | if curr_pos % 1000 == 0: 139 | if len(pending_messages) > 10000: 140 | #print("Reached in-flight limit, pausing publishing for 2 seconds") 141 | if channel.is_open: 142 | connection.add_timeout(2, publish_messages) 143 | break 144 | 145 | else: 146 | print("Channel closed, ceasing publishing") 147 | break 148 | 149 | def on_close(connection, reason_code, reason_text): 150 | connection.ioloop.stop() 151 | print("Connection closed. Reason: " + reason_text) 152 | 153 | def reconnect(): 154 | print("Reconnect called") 155 | global curr_node 156 | curr_node += 1 157 | if curr_node > 2: 158 | print("Failed to connect. Will retry in 5 seconds") 159 | time.sleep(5) 160 | curr_node = 0 161 | 162 | connect() 163 | 164 | def connect(): 165 | global connection, curr_node, terminate 166 | print("Attempting to connect to " + nodes[curr_node]) 167 | parameters = pika.URLParameters('amqp://jack:jack@' + nodes[curr_node] + ':5672/%2F') 168 | connection = pika.SelectConnection(parameters=parameters, 169 | on_open_callback=on_open, 170 | on_open_error_callback=reconnect, 171 | on_close_callback=on_close) 172 | 173 | try: 174 | connection.ioloop.start() 175 | except KeyboardInterrupt: 176 | connection.close() 177 | connection.ioloop.stop() 178 | terminate = True 179 | except Exception as ex: 180 | template = "An exception of type {0} occurred. 
Arguments:{1!r}" 181 | message = template.format(type(ex).__name__, ex.args) 182 | print(message) 183 | 184 | print("Disconnected") 185 | 186 | def get_node_ip(node_name): 187 | bash_command = "bash ../cluster/get-node-ip.sh " + node_name 188 | process = subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) 189 | output, error = process.communicate() 190 | ip = output.decode('ascii').replace('\n', '') 191 | return ip 192 | 193 | for node_name in node_names: 194 | nodes.append(get_node_ip(node_name)) 195 | 196 | curr_node = get_node_index(connect_node) 197 | 198 | # keep running until the terminate signal has been received 199 | while terminate == False: 200 | try: 201 | connect() 202 | except Exception as ex: 203 | template = "An exception of type {0} occurred. Arguments:{1!r}" 204 | message = template.format(type(ex).__name__, ex.args) 205 | print(message) 206 | 207 | if terminate == False: 208 | reconnect() 209 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/blockade-files/blockade-3nodes.yml: -------------------------------------------------------------------------------- 1 | containers: 2 | rabbitmq1: 3 | image: rabbitmq:3.7-management 4 | hostname: rabbitmq1 5 | container_name: rabbitmq1 6 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 7 | volumes: { 8 | #"volumes/01/data": "/var/lib/rabbitmq/mnesia", 9 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config" } 10 | expose: [1936,5672,15672] 11 | 12 | rabbitmq2: 13 | image: rabbitmq:3.7-management 14 | hostname: rabbitmq2 15 | container_name: rabbitmq2 16 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 17 | volumes: { 18 | #"volumes/02/data": "/var/lib/rabbitmq/mnesia", 19 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config", 20 | "./cluster-entrypoint.sh": "/usr/local/bin/cluster-entrypoint.sh" } 21 | command: sh -c "/usr/local/bin/cluster-entrypoint.sh" 22 | expose: [1936,5672,15672] 23 | links: { rabbitmq1: 
rabbitmq1 } 24 | start_delay: 10 25 | 26 | rabbitmq3: 27 | image: rabbitmq:3.7-management 28 | hostname: rabbitmq3 29 | container_name: rabbitmq3 30 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 31 | volumes: { 32 | # "volumes/03/data": "/var/lib/rabbitmq/mnesia", 33 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config", 34 | "./cluster-entrypoint.sh": "/usr/local/bin/cluster-entrypoint.sh" 35 | } 36 | command: sh -c "/usr/local/bin/cluster-entrypoint.sh" 37 | expose: [1936,5672,15672] 38 | links: { rabbitmq1: rabbitmq1 } 39 | start_delay: 20 40 | 41 | network: 42 | driver: udn 43 | flaky: 2% 44 | slow: 100ms 50ms 25% distribution normal -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/blockade-files/blockade-6nodes.yml: -------------------------------------------------------------------------------- 1 | containers: 2 | rabbitmq1: 3 | image: rabbitmq:3.7-management 4 | hostname: rabbitmq1 5 | container_name: rabbitmq1 6 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 7 | volumes: { 8 | #"volumes/01/data": "/var/lib/rabbitmq/mnesia", 9 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config" } 10 | expose: [1936,5672,15672] 11 | 12 | rabbitmq2: 13 | image: rabbitmq:3.7-management 14 | hostname: rabbitmq2 15 | container_name: rabbitmq2 16 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 17 | volumes: { 18 | #"volumes/02/data": "/var/lib/rabbitmq/mnesia", 19 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config", 20 | "./cluster-entrypoint.sh": "/usr/local/bin/cluster-entrypoint.sh" } 21 | command: sh -c "/usr/local/bin/cluster-entrypoint.sh" 22 | expose: [1936,5672,15672] 23 | start_delay: 10 24 | 25 | rabbitmq3: 26 | image: rabbitmq:3.7-management 27 | hostname: rabbitmq3 28 | container_name: rabbitmq3 29 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 30 | volumes: { 31 | # "volumes/03/data": "/var/lib/rabbitmq/mnesia", 32 | "./rabbitmq.config": 
"/etc/rabbitmq/rabbitmq.config", 33 | "./cluster-entrypoint.sh": "/usr/local/bin/cluster-entrypoint.sh" 34 | } 35 | command: sh -c "/usr/local/bin/cluster-entrypoint.sh" 36 | expose: [1936,5672,15672] 37 | start_delay: 10 38 | 39 | rabbitmq4: 40 | image: rabbitmq:3.7-management 41 | hostname: rabbitmq4 42 | container_name: rabbitmq4 43 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 44 | volumes: { 45 | # "volumes/03/data": "/var/lib/rabbitmq/mnesia", 46 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config", 47 | "./cluster-entrypoint.sh": "/usr/local/bin/cluster-entrypoint.sh" 48 | } 49 | command: sh -c "/usr/local/bin/cluster-entrypoint.sh" 50 | expose: [1936,5672,15672] 51 | start_delay: 10 52 | 53 | rabbitmq5: 54 | image: rabbitmq:3.7-management 55 | hostname: rabbitmq5 56 | container_name: rabbitmq5 57 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 58 | volumes: { 59 | # "volumes/03/data": "/var/lib/rabbitmq/mnesia", 60 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config", 61 | "./cluster-entrypoint.sh": "/usr/local/bin/cluster-entrypoint.sh" 62 | } 63 | command: sh -c "/usr/local/bin/cluster-entrypoint.sh" 64 | expose: [1936,5672,15672] 65 | start_delay: 10 66 | 67 | rabbitmq6: 68 | image: rabbitmq:3.7-management 69 | hostname: rabbitmq6 70 | container_name: rabbitmq6 71 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 72 | volumes: { 73 | # "volumes/03/data": "/var/lib/rabbitmq/mnesia", 74 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config", 75 | "./cluster-entrypoint.sh": "/usr/local/bin/cluster-entrypoint.sh" 76 | } 77 | command: sh -c "/usr/local/bin/cluster-entrypoint.sh" 78 | expose: [1936,5672,15672] 79 | start_delay: 10 80 | 81 | network: 82 | driver: udn 83 | slow: 10ms -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/blockade.yml: -------------------------------------------------------------------------------- 1 | containers: 2 | rabbitmq1: 3 | image: 
rabbitmq:3.7-management 4 | hostname: rabbitmq1 5 | container_name: rabbitmq1 6 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 7 | volumes: { 8 | #"volumes/01/data": "/var/lib/rabbitmq/mnesia", 9 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config" } 10 | expose: [1936,5672,15672] 11 | 12 | rabbitmq2: 13 | image: rabbitmq:3.7-management 14 | hostname: rabbitmq2 15 | container_name: rabbitmq2 16 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 17 | volumes: { 18 | #"volumes/02/data": "/var/lib/rabbitmq/mnesia", 19 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config", 20 | "./cluster-entrypoint.sh": "/usr/local/bin/cluster-entrypoint.sh" } 21 | command: sh -c "/usr/local/bin/cluster-entrypoint.sh" 22 | expose: [1936,5672,15672] 23 | links: { rabbitmq1: rabbitmq1 } 24 | start_delay: 10 25 | 26 | rabbitmq3: 27 | image: rabbitmq:3.7-management 28 | hostname: rabbitmq3 29 | container_name: rabbitmq3 30 | environment: { "RABBITMQ_ERLANG_COOKIE": 12345 } 31 | volumes: { 32 | # "volumes/03/data": "/var/lib/rabbitmq/mnesia", 33 | "./rabbitmq.config": "/etc/rabbitmq/rabbitmq.config", 34 | "./cluster-entrypoint.sh": "/usr/local/bin/cluster-entrypoint.sh" 35 | } 36 | command: sh -c "/usr/local/bin/cluster-entrypoint.sh" 37 | expose: [1936,5672,15672] 38 | links: { rabbitmq1: rabbitmq1 } 39 | start_delay: 20 40 | 41 | network: 42 | driver: udn 43 | flaky: 2% 44 | slow: 100ms 50ms 25% distribution normal -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/cluster-entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | 5 | if [ ! -f is-member-of-cluster.txt ]; then 6 | 7 | touch is-member-of-cluster.txt 8 | 9 | # Start RMQ from entry point. 
10 | # This will ensure that environment variables passed 11 | # will be honored 12 | /usr/local/bin/docker-entrypoint.sh rabbitmq-server -detached 13 | 14 | # Do the cluster dance 15 | rabbitmqctl stop_app 16 | # Wait a while for the app to really stop 17 | sleep 2s 18 | 19 | rabbitmqctl join_cluster rabbit@rabbitmq1 20 | 21 | # Stop the entire RMQ server. This is done so that we 22 | # can start it again without the -detached flag, 23 | # so that it runs in the foreground 24 | rabbitmqctl stop 25 | 26 | # Wait a while for the app to really stop 27 | sleep 2s 28 | 29 | # Start it 30 | rabbitmq-server 31 | else 32 | rabbitmq-server 33 | 34 | fi 35 | # wipe all data when server stops 36 | #rm -rf /var/lib/rabbitmq/mnesia -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/declare-hashing-infra.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import pika 3 | from pika import spec 4 | import sys 5 | import subprocess 6 | import requests 7 | import json 8 | 9 | def get_node_ip(node_name): 10 | bash_command = "bash ../cluster/get-node-ip.sh " + node_name 11 | process = subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) 12 | output, error = process.communicate() 13 | ip = output.decode('ascii').replace('\n', '') 14 | return ip 15 | 16 | def put_ha_policy(mgmt_node_ip): 17 | r = requests.put('http://' + mgmt_node_ip + ':15672/api/policies/%2F/ha-queues', 18 | data = "{\"pattern\":\"\", \"definition\": {\"ha-mode\":\"exactly\", \"ha-params\": " + rep_factor + " }, \"priority\":0, \"apply-to\": \"queues\"}", 19 | auth=("jack","jack")) 20 | 21 | print(f"Create policy response: {r}") 22 | 23 | exchange_name = sys.argv[1] 24 | queue_prefix = sys.argv[2] 25 | queue_count = int(sys.argv[3]) 26 | rep_factor = sys.argv[4] 27 | purge = sys.argv[5] 28 | 29 | node_ip = get_node_ip("rabbitmq1") 30 | put_ha_policy(node_ip) 31 |
32 | credentials = pika.PlainCredentials('jack', 'jack') 33 | parameters = pika.ConnectionParameters(node_ip, 34 | 5672, 35 | '/', 36 | credentials) 37 | connection = pika.BlockingConnection(parameters) 38 | channel = connection.channel() 39 | 40 | channel.exchange_declare(exchange=exchange_name, exchange_type='x-consistent-hash', durable=True) 41 | print(f"Declared exchange {exchange_name}") 42 | 43 | for i in range(1, queue_count+1): 44 | suffix = f"{i:03}" 45 | queue_name = f"{queue_prefix}{suffix}" 46 | channel.queue_declare(queue=queue_name, durable=True, arguments={"x-queue-mode": "lazy"}) 47 | channel.queue_bind(queue=queue_name, exchange=exchange_name, routing_key="10") 48 | 49 | if purge == "true": 50 | channel.queue_purge(queue_name) 51 | print(f"Declared, bound and purged queue {queue_name}") 52 | else: 53 | print(f"Declared and bound queue {queue_name}") 54 | 55 | channel.close() 56 | connection.close() 57 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/declare-queue.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import pika 3 | from pika import spec 4 | import sys 5 | import subprocess 6 | import requests 7 | import json 8 | 9 | def get_node_ip(node_name): 10 | bash_command = "bash ../cluster/get-node-ip.sh " + node_name 11 | process = subprocess.Popen(bash_command.split(), stdout=subprocess.PIPE) 12 | output, error = process.communicate() 13 | ip = output.decode('ascii').replace('\n', '') 14 | return ip 15 | 16 | def put_ha_policy(mgmt_node_ip): 17 | r = requests.put('http://' + mgmt_node_ip + ':15672/api/policies/%2F/ha-queues', 18 | data = "{\"pattern\":\"\", \"definition\": {\"ha-mode\":\"exactly\", \"ha-params\": " + rep_factor + " }, \"priority\":0, \"apply-to\": \"queues\"}", 19 | auth=("jack","jack")) 20 | 21 | print(f"Create policy response: {r}") 22 | 23 | queue_name = sys.argv[1] 24 | 
rep_factor = sys.argv[2] 25 | purge = sys.argv[3] 26 | 27 | node_ip = get_node_ip("rabbitmq1") 28 | put_ha_policy(node_ip) 29 | 30 | credentials = pika.PlainCredentials('jack', 'jack') 31 | parameters = pika.ConnectionParameters(node_ip, 32 | 5672, 33 | '/', 34 | credentials) 35 | connection = pika.BlockingConnection(parameters) 36 | channel = connection.channel() 37 | 38 | channel.queue_declare(queue=queue_name, durable=True, arguments={"x-queue-mode": "lazy"}) 39 | print(f"Declared queue {queue_name}") 40 | if purge == "true": 41 | channel.queue_purge(queue_name) 42 | print(f"Purged queue {queue_name}") 43 | 44 | channel.close() 45 | connection.close() 46 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/deploy-cluster.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | cd ../cluster 4 | 5 | if blockade status > /dev/null 2>&1; then 6 | echo Destroying blockade cluster 7 | blockade destroy 8 | fi 9 | 10 | cp ./blockade-files/blockade-$1nodes.yml blockade.yml 11 | 12 | echo Creating blockade cluster 13 | if ! blockade up > /dev/null 2>&1; then 14 | echo Blockade error, aborting test 15 | exit 1 16 | fi 17 | 18 | blockade status 19 | 20 | echo "waiting for all nodes to join cluster" 21 | sleep 10 22 | 23 | echo "enabling Consistent Hash Exchange on all nodes..." 
24 | 25 | bash enable-c-hash-ex.sh 26 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/enable-c-hash-ex.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | blockade status | { while read line; \ 4 | do \ 5 | node=$(echo $(echo $line | awk '{ print $1; }')); \ 6 | if [[ $line == rabbit* ]] ; then \ 7 | docker exec $node rabbitmq-plugins enable rabbitmq_consistent_hash_exchange; \ 8 | fi; \ 9 | done; \ 10 | 11 | echo The Consistent Hash Exchange has been enabled on all nodes; } -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/get-node-ip.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo $(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $1) -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/kill-and-reset-node.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | CONTAINER_ID=$(blockade status | grep $1 | awk '{ print $2 }') 4 | docker exec $CONTAINER_ID bash -c 'rm -f is-member-of-cluster.txt' 5 | blockade kill $1 -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/kill-node.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | CONTAINER_ID=$(blockade status | grep $1 | awk '{ print $2 }') 4 | docker exec $CONTAINER_ID bash -c 'echo Killing $(ps aux | grep rabbitmq-server | grep -v grep | awk '"'"'{ print $2 }'"'"')' 5 | docker exec $CONTAINER_ID bash -c 'kill -9 $(ps aux | grep rabbitmq-server | grep -v grep | awk '"'"'{ print $2 }'"'"')' 
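Note: kill-node.sh, kill-and-reset-node.sh and restart-node.sh all resolve a blockade node name to its Docker container ID with the same `blockade status | grep $1 | awk '{ print $2 }'` pipeline. A minimal, self-contained sketch of that lookup (the status text below is illustrative sample data, not real `blockade status` output):

```shell
#!/bin/bash

# container_id_for NODE STATUS_TEXT
# Mirrors the pipeline the kill/restart scripts use:
#   blockade status | grep $1 | awk '{ print $2 }'
container_id_for() {
  local node="$1" status="$2"
  echo "$status" | grep "$node" | awk '{ print $2 }'
}

# Illustrative sample output (hypothetical container IDs)
STATUS="NODE        CONTAINER ID  STATUS  IP
rabbitmq1   8f3ab12cd901  UP      172.17.0.2
rabbitmq2   1c9de45fa223  UP      172.17.0.3"

container_id_for rabbitmq2 "$STATUS"   # prints 1c9de45fa223
```

One caveat with the plain `grep`: it matches any row containing the argument, so a prefix such as `rabbitmq` would match every node; the scripts rely on being passed a full node name.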
-------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/rabbitmq.config: -------------------------------------------------------------------------------- 1 | [ 2 | { rabbit, [ 3 | { loopback_users, [ ] }, 4 | { tcp_listeners, [ 5672 ] }, 5 | { ssl_listeners, [ ] }, 6 | { default_pass, <<"jack">> }, 7 | { default_user, <<"jack">> }, 8 | { hipe_compile, false }, 9 | { cluster_partition_handling, ignore }, 10 | { queue_master_locator, <<"min-masters">> }, 11 | { vm_memory_high_watermark, 0.6 } 12 | ] }, 13 | { rabbitmq_management, [ { listener, [ 14 | { port, 15672 }, 15 | { ssl, false } 16 | ] } ] } 17 | ]. -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/restart-node.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | 5 | if [ "$1" = "rabbitmq1" ]; then 6 | blockade restart rabbitmq1 7 | echo "rabbitmq1 restarted" 8 | else 9 | # other nodes do not have rabbitmq-server as pid 1 and so stopping the container causes an unclean shutdown 10 | # therefore we do a controlled stop first 11 | R2_ID=$(blockade status | grep $1 | awk '{ print $2 }') 12 | docker exec -it $R2_ID rabbitmqctl stop_app 13 | 14 | # restart the container 15 | blockade restart $1 16 | echo "$1 restarted" 17 | fi 18 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/start-node.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | 5 | blockade start $1 6 | echo "$1 started" 7 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/cluster/stop-remove-all-running-containers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2
| 3 | # sometimes blockade can get into a messed up state 4 | # so we just stop all running containers 5 | 6 | docker stop $(docker ps -aq) 7 | docker rm $(docker ps -aq) 8 | docker network prune -f 9 | -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/data-locality-notes.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Vanlightly/RabbitMq-PoC-Code/408af79eec81e768840710b47d1eb0a6633819aa/ConsistentHashing/RabbitMqSummit/python/data-locality-notes.txt -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/message-ordering-notes.txt: -------------------------------------------------------------------------------- 1 | # Message Ordering 2 | 3 | python declare-queue.py output-seq 2 true 4 | 5 | 6 | ---------------------------------- 7 | ## Competing Consumers 8 | ---------------------------------- 9 | Cluster 10 | Terminal (cluster): python declare-queue.py states-all 2 true 11 | 12 | Client 13 | Terminal 1: python send-state-updates-direct.py --queue states-all --msgs 20 --keys 1 14 | Terminal 2-5: python consumer.py --in-queue states-all --out-queue output-seq --min-ms 0 --max-ms 0 --prefetch 1 15 | Terminal 6: python output-consumer.py --queue output-seq 16 | 17 | -------------------------------------- 18 | Partitioned 19 | -------------------------------------- 20 | 21 | Cluster 22 | Terminal: python declare-hashing-infra.py states states 4 2 true 23 | 24 | Client 25 | Terminal 1: python send-state-updates-hash-ex.py --ex states --msgs 20 --keys 5 26 | Terminal 2: python consumer.py --in-queue states001 --out-queue output-seq --min-ms 0 --max-ms 0 --prefetch 1 27 | Terminal 3: python consumer.py --in-queue states002 --out-queue output-seq --min-ms 0 --max-ms 0 --prefetch 1 28 | Terminal 4: python consumer.py --in-queue states003 --out-queue output-seq
--min-ms 0 --max-ms 0 --prefetch 1 29 | Terminal 5: python consumer.py --in-queue states004 --out-queue output-seq --min-ms 0 --max-ms 0 --prefetch 1 30 | Terminal 6: python output-consumer.py --queue output-seq 31 | 32 | Note: show the impact on final ordering with a single consumer that is slower than the others 33 | 34 | -------------------------------------- 35 | Hash Exchange 36 | -------------------------------------- 37 | Cluster 38 | Terminal: python declare-hashing-infra.py states states 20 2 true 39 | 40 | Client 41 | Terminal 1: python send-state-updates-hash-ex.py rabbitmq1 6 100 states 42 | Terminal 2: python consumer.py rabbitmq2 states001 10 0 10 43 | Terminal 3: python consumer.py rabbitmq3 states002 10 0 10 44 | Terminal 4: python consumer.py rabbitmq4 states003 10 0 10 45 | Terminal 5: python consumer.py rabbitmq5 states004 10 0 10 46 | Terminal 6: python output-consumer.py rabbitmq3 47 | 48 | Note: show that slowing down a consumer has no effect on final ordering -------------------------------------------------------------------------------- /ConsistentHashing/RabbitMqSummit/python/requirements.txt: -------------------------------------------------------------------------------- 1 | args==0.1.0 2 | blockade==0.4.0 3 | certifi==2018.8.24 4 | chardet==3.0.4 5 | clint==0.4.1 6 | docker==2.1.0 7 | docker-pycreds==0.3.0 8 | Flask==0.10.1 9 | gevent==1.1.1 10 | greenlet==0.4.15 11 | idna==2.7 12 | itsdangerous==0.24 13 | Jinja2==2.10 14 | MarkupSafe==1.0 15 | pika==0.12.0 16 | PyYAML==3.11 17 | requests==2.19.1 18 | six==1.9.0 19 | urllib3==1.23 20 | websocket-client==0.53.0 21 | Werkzeug==0.14.1 22 | 23 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.ConsoleApp/Consumer.cs: -------------------------------------------------------------------------------- 1 | using RabbitMQ.Client; 2 | using RabbitMQ.Client.Events; 3 | using System; 4 | using
System.Collections.Generic; 5 | using System.Text; 6 | using System.Threading; 7 | using System.Threading.Tasks; 8 | 9 | namespace RabbitMQTestExamples.ConsoleApp 10 | { 11 | public class Consumer 12 | { 13 | private IMessageProcessor _messageProcessor; 14 | private Task _consumerTask; 15 | 16 | public Consumer(IMessageProcessor messageProcessor) 17 | { 18 | _messageProcessor = messageProcessor; 19 | } 20 | 21 | public void Consume(CancellationToken token, string queueName) 22 | { 23 | _consumerTask = Task.Run(() => 24 | { 25 | var factory = new ConnectionFactory() { HostName = "localhost" }; 26 | using (var connection = factory.CreateConnection()) 27 | { 28 | using (var channel = connection.CreateModel()) 29 | { 30 | channel.QueueDeclare(queue: queueName, 31 | durable: false, 32 | exclusive: false, 33 | autoDelete: false, 34 | arguments: null); 35 | 36 | var consumer = new EventingBasicConsumer(channel); 37 | consumer.Received += (model, ea) => 38 | { 39 | var body = ea.Body; 40 | var message = Encoding.UTF8.GetString(body); 41 | _messageProcessor.ProcessMessage(message); channel.BasicAck(ea.DeliveryTag, false); // autoAck is false below, so ack explicitly once processed 42 | }; 43 | channel.BasicConsume(queue: queueName, 44 | autoAck: false, 45 | consumer: consumer); 46 | 47 | while (!token.IsCancellationRequested) 48 | Thread.Sleep(1000); 49 | } 50 | } 51 | }); 52 | } 53 | 54 | public void WaitForCompletion() 55 | { 56 | _consumerTask.Wait(); 57 | } 58 | 59 | } 60 | } 61 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.ConsoleApp/IMessageProcessor.cs: -------------------------------------------------------------------------------- 1 | using System; 2 | using System.Collections.Generic; 3 | using System.Text; 4 | 5 | namespace RabbitMQTestExamples.ConsoleApp 6 | { 7 | public interface IMessageProcessor 8 | { 9 | void ProcessMessage(string message); 10 | } 11 | } 12 | --------------------------------------------------------------------------------
/IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.ConsoleApp/Program.cs: -------------------------------------------------------------------------------- 1 | using RabbitMQ.Client; 2 | using RabbitMQ.Client.Events; 3 | using System; 4 | using System.Text; 5 | using System.Threading; 6 | 7 | namespace RabbitMQTestExamples.ConsoleApp 8 | { 9 | public class RealProcessor : IMessageProcessor 10 | { 11 | public void ProcessMessage(string message) 12 | { 13 | Console.WriteLine(message); 14 | } 15 | } 16 | 17 | class Program 18 | { 19 | static void Main(string[] args) 20 | { 21 | try 22 | { 23 | var cts = new CancellationTokenSource(); 24 | var processor = new RealProcessor(); 25 | var consumer = new Consumer(processor); 26 | consumer.Consume(cts.Token, "queueX"); 27 | 28 | Console.WriteLine("Press any key to shutdown"); 29 | Console.ReadKey(); 30 | cts.Cancel(); 31 | consumer.WaitForCompletion(); 32 | Console.WriteLine("Shutdown"); 33 | } 34 | catch (Exception ex) 35 | { 36 | Console.WriteLine($"Fatal error: {ex}"); 37 | } 38 | } 39 | 40 | 41 | } 42 | } 43 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.ConsoleApp/RabbitMQTestExamples.ConsoleApp.csproj: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Exe 5 | netcoreapp2.0 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.IntegrationTests/FakeProcessor.cs: -------------------------------------------------------------------------------- 1 | using RabbitMQTestExamples.ConsoleApp; 2 | using System; 3 | using System.Collections.Generic; 4 | using System.Text; 5 | 6 | namespace RabbitMQTestExamples.IntegrationTests 7 | { 8 | public class FakeProcessor : IMessageProcessor 9 | { 10 | public List<string> Messages { get; set; } 11 | 12 | public
FakeProcessor() 13 | { 14 | Messages = new List<string>(); 15 | } 16 | 17 | public void ProcessMessage(string message) 18 | { 19 | Messages.Add(message); 20 | } 21 | } 22 | } 23 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.IntegrationTests/Helpers/ConnectionKiller.cs: -------------------------------------------------------------------------------- 1 | using Newtonsoft.Json.Linq; 2 | using RabbitMQ.Client; 3 | using System; 4 | using System.Collections.Generic; 5 | using System.IO; 6 | using System.Linq; 7 | using System.Net; 8 | using System.Net.Http; 9 | using System.Net.Http.Headers; 10 | using System.Text; 11 | using System.Text.RegularExpressions; 12 | using System.Threading; 13 | using System.Threading.Tasks; 14 | 15 | namespace RabbitMQTestExamples.IntegrationTests.Helpers 16 | { 17 | public class ConnectionKiller 18 | { 19 | private static HttpClient HttpClient; 20 | 21 | public static void Initialize(string nodeUrl, string username, string password) 22 | { 23 | if (HttpClient == null) 24 | { 25 | var authValue = new AuthenticationHeaderValue("Basic", Convert.ToBase64String(Encoding.UTF8.GetBytes($"{username}:{password}"))); 26 | 27 | HttpClient = new HttpClient(); 28 | HttpClient.BaseAddress = new Uri($"http://{nodeUrl}"); 29 | HttpClient.DefaultRequestHeaders.Authorization = authValue; 30 | } 31 | } 32 | 33 | public static async Task<List<string>> GetConnectionNamesAsync(int timeoutMs) 34 | { 35 | string responseBody = "[]"; 36 | int counter = 0; 37 | while (responseBody.Equals("[]") && counter < timeoutMs) 38 | { 39 | var response = await HttpClient.GetAsync("api/connections"); 40 | // keep trying until there are connections 41 | if (response.IsSuccessStatusCode) 42 | { 43 | responseBody = await response.Content.ReadAsStringAsync(); 44 | } 45 | 46 | if (responseBody.Equals("[]")) 47 | await Task.Delay(100); 48 | 49 | counter += 100; 50 | } 51 | 52 | if (responseBody.Equals("[]"))
53 | return new List<string>(); 54 | 55 | var json = JArray.Parse(responseBody); 56 | var conn = json.First(); 57 | 58 | var connectionNames = new List<string>(); 59 | while (conn != null) 60 | { 61 | var name = conn["name"]; 62 | connectionNames.Add(name.Value<string>()); 63 | conn = conn.Next; 64 | } 65 | 66 | return connectionNames; 67 | } 68 | 69 | public static async Task ForceCloseConnectionsAsync(List<string> connectionNames) 70 | { 71 | foreach (var name in connectionNames) 72 | await DeleteAsync($"api/connections/{name}"); 73 | } 74 | 75 | private static async Task DeleteAsync(string path) 76 | { 77 | var response = await HttpClient.DeleteAsync(path); 78 | if (response.StatusCode != HttpStatusCode.NoContent) 79 | { 80 | throw new Exception(response.StatusCode.ToString()); 81 | } 82 | } 83 | } 84 | } 85 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.IntegrationTests/Helpers/QueueCreator.cs: -------------------------------------------------------------------------------- 1 | using RabbitMQ.Client; 2 | using System; 3 | using System.Collections.Generic; 4 | using System.Text; 5 | 6 | namespace RabbitMQTestExamples.IntegrationTests.Helpers 7 | { 8 | public class QueueCreator 9 | { 10 | public static void CreateQueueAndBinding(string queueName, string exchangeName, string virtualHost) 11 | { 12 | var connectionFactory = new ConnectionFactory(); 13 | connectionFactory.HostName = "localhost"; 14 | connectionFactory.UserName = "guest"; 15 | connectionFactory.Password = "guest"; 16 | connectionFactory.VirtualHost = virtualHost; 17 | var connection = connectionFactory.CreateConnection(); 18 | var channel = connection.CreateModel(); 19 | channel.ExchangeDeclare(exchangeName, ExchangeType.Fanout); 20 | channel.QueueDeclare(queueName, true, false, false, null); 21 | channel.QueueBind(queueName, exchangeName, ""); 22 | connection.Close(); 23 | } 24 | 25 | public static void CreateQueueAndBinding(string
queueName, string exchangeName, string virtualHost, string dlx, int messageTtl) 26 | { 27 | var connectionFactory = new ConnectionFactory(); 28 | connectionFactory.HostName = "localhost"; 29 | connectionFactory.VirtualHost = virtualHost; 30 | var connection = connectionFactory.CreateConnection(); 31 | var channel = connection.CreateModel(); 32 | channel.ExchangeDeclare(exchangeName, ExchangeType.Fanout, true); 33 | 34 | var queueProps = new Dictionary<string, object>(); 35 | queueProps.Add("x-dead-letter-exchange", dlx); 36 | queueProps.Add("x-message-ttl", messageTtl); 37 | channel.QueueDeclare(queueName, true, false, false, queueProps); 38 | channel.QueueBind(queueName, exchangeName, ""); 39 | connection.Close(); 40 | } 41 | 42 | public static void CreateExchange(string exchangeName, string virtualHost) 43 | { 44 | var connectionFactory = new ConnectionFactory(); 45 | connectionFactory.HostName = "localhost"; 46 | connectionFactory.VirtualHost = virtualHost; 47 | var connection = connectionFactory.CreateConnection(); 48 | var channel = connection.CreateModel(); 49 | channel.ExchangeDeclare(exchangeName, ExchangeType.Fanout); 50 | connection.Close(); 51 | } 52 | } 53 | } 54 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.IntegrationTests/Helpers/QueueDestroyer.cs: -------------------------------------------------------------------------------- 1 | using RabbitMQ.Client; 2 | using System; 3 | using System.Collections.Generic; 4 | using System.Text; 5 | 6 | namespace RabbitMQTestExamples.IntegrationTests.Helpers 7 | { 8 | public class QueueDestroyer 9 | { 10 | public static void DeleteQueue(string queueName, string virtualHost) 11 | { 12 | var connectionFactory = new ConnectionFactory(); 13 | connectionFactory.HostName = "localhost"; 14 | connectionFactory.UserName = "guest"; 15 | connectionFactory.Password = "guest"; 16 | connectionFactory.VirtualHost = virtualHost; 17 | var connection
= connectionFactory.CreateConnection(); 18 | var channel = connection.CreateModel(); 19 | channel.QueueDelete(queueName); 20 | connection.Close(); 21 | } 22 | 23 | public static void DeleteExchange(string exchangeName, string virtualHost) 24 | { 25 | var connectionFactory = new ConnectionFactory(); 26 | connectionFactory.HostName = "localhost"; 27 | connectionFactory.UserName = "guest"; 28 | connectionFactory.Password = "guest"; 29 | connectionFactory.VirtualHost = virtualHost; 30 | var connection = connectionFactory.CreateConnection(); 31 | var channel = connection.CreateModel(); 32 | channel.ExchangeDelete(exchangeName); 33 | connection.Close(); 34 | } 35 | } 36 | } 37 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.IntegrationTests/RabbitMQTestExamples.IntegrationTests.csproj: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | netcoreapp2.0 5 | 6 | false 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.IntegrationTests/TestMessageReceipt.cs: -------------------------------------------------------------------------------- 1 | using RabbitMQTestExamples.ConsoleApp; 2 | using RabbitMQTestExamples.IntegrationTests.Helpers; 3 | using System; 4 | using System.Threading; 5 | using System.Threading.Tasks; 6 | using Xunit; 7 | 8 | namespace RabbitMQTestExamples.IntegrationTests 9 | { 10 | public class TestMessageReceipt 11 | { 12 | [Fact] 13 | public async Task ConnectionKillerExample() 14 | { 15 | ConnectionKiller.Initialize("localhost:15672", "guest", "guest"); 16 | 17 | // ARRANGE 18 | QueueDestroyer.DeleteQueue("queueX", "/"); 19 | var cts = new CancellationTokenSource(); 20 | var fake = new FakeProcessor(); 21 | var myMicroservice = new Consumer(fake); 22 | 
23 | // ACT 24 | myMicroservice.Consume(cts.Token, "queueX"); 25 | 26 | // put a breakpoint here and go and look at the connections in the management ui. 27 | 28 | Thread.Sleep(1000); 29 | var connections = await ConnectionKiller.GetConnectionNamesAsync(10000); 30 | await ConnectionKiller.ForceCloseConnectionsAsync(connections); 31 | 32 | // Now go back to the management console and you will see the connection state change to closed 33 | 34 | // To make a test out of this, create a consumer that has auto recovery enabled and make a test that 35 | // ensures it continues to consume after multiple connection failures 36 | } 37 | 38 | [Fact] 39 | public void If_SendMessageToQueue_ThenConsumerReceives() 40 | { 41 | // ARRANGE 42 | QueueDestroyer.DeleteQueue("queueX", "/"); 43 | var cts = new CancellationTokenSource(); 44 | var fake = new FakeProcessor(); 45 | var myMicroservice = new Consumer(fake); 46 | 47 | // ACT 48 | myMicroservice.Consume(cts.Token, "queueX"); 49 | 50 | var producer = new TestPublisher(); 51 | producer.Publish("queueX", "hello"); 52 | 53 | Thread.Sleep(1000); 54 | cts.Cancel(); 55 | 56 | // ASSERT 57 | Assert.Equal(1, fake.Messages.Count); 58 | Assert.Equal("hello", fake.Messages[0]); 59 | } 60 | } 61 | } 62 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.IntegrationTests/TestPublisher.cs: -------------------------------------------------------------------------------- 1 | using RabbitMQ.Client; 2 | using System; 3 | using System.Collections.Generic; 4 | using System.Text; 5 | 6 | namespace RabbitMQTestExamples.IntegrationTests 7 | { 8 | public class TestPublisher 9 | { 10 | public void Publish(string queueName, string message) 11 | { 12 | var factory = new ConnectionFactory() { HostName = "localhost", UserName="guest", Password="guest" }; 13 | using (var connection = factory.CreateConnection()) 14 | using (var channel = connection.CreateModel()) 15 |
{ 16 | var body = Encoding.UTF8.GetBytes(message); 17 | 18 | channel.BasicPublish(exchange: "", 19 | routingKey: queueName, 20 | basicProperties: null, 21 | body: body); 22 | } 23 | } 24 | } 25 | } 26 | -------------------------------------------------------------------------------- /IntegrationTesting/RabbitMQTestExamples/RabbitMQTestExamples.sln: -------------------------------------------------------------------------------- 1 |  2 | Microsoft Visual Studio Solution File, Format Version 12.00 3 | # Visual Studio 15 4 | VisualStudioVersion = 15.0.27703.2000 5 | MinimumVisualStudioVersion = 10.0.40219.1 6 | Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "RabbitMQTestExamples.ConsoleApp", "RabbitMQTestExamples.ConsoleApp\RabbitMQTestExamples.ConsoleApp.csproj", "{BD661EAE-2E97-4758-AEB3-572A8B543BA0}" 7 | EndProject 8 | Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "RabbitMQTestExamples.IntegrationTests", "RabbitMQTestExamples.IntegrationTests\RabbitMQTestExamples.IntegrationTests.csproj", "{2D47722B-88C7-4AD3-AED7-18141C5DA497}" 9 | EndProject 10 | Global 11 | GlobalSection(SolutionConfigurationPlatforms) = preSolution 12 | Debug|Any CPU = Debug|Any CPU 13 | Release|Any CPU = Release|Any CPU 14 | EndGlobalSection 15 | GlobalSection(ProjectConfigurationPlatforms) = postSolution 16 | {BD661EAE-2E97-4758-AEB3-572A8B543BA0}.Debug|Any CPU.ActiveCfg = Debug|Any CPU 17 | {BD661EAE-2E97-4758-AEB3-572A8B543BA0}.Debug|Any CPU.Build.0 = Debug|Any CPU 18 | {BD661EAE-2E97-4758-AEB3-572A8B543BA0}.Release|Any CPU.ActiveCfg = Release|Any CPU 19 | {BD661EAE-2E97-4758-AEB3-572A8B543BA0}.Release|Any CPU.Build.0 = Release|Any CPU 20 | {2D47722B-88C7-4AD3-AED7-18141C5DA497}.Debug|Any CPU.ActiveCfg = Debug|Any CPU 21 | {2D47722B-88C7-4AD3-AED7-18141C5DA497}.Debug|Any CPU.Build.0 = Debug|Any CPU 22 | {2D47722B-88C7-4AD3-AED7-18141C5DA497}.Release|Any CPU.ActiveCfg = Release|Any CPU 23 | {2D47722B-88C7-4AD3-AED7-18141C5DA497}.Release|Any CPU.Build.0 = Release|Any CPU 24 | 
EndGlobalSection 25 | GlobalSection(SolutionProperties) = preSolution 26 | HideSolutionNode = FALSE 27 | EndGlobalSection 28 | GlobalSection(ExtensibilityGlobals) = postSolution 29 | SolutionGuid = {863159C6-2794-4BA1-85E4-671C5D04A039} 30 | EndGlobalSection 31 | EndGlobal 32 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Jack Vanlightly 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking.sln: -------------------------------------------------------------------------------- 1 |  2 | Microsoft Visual Studio Solution File, Format Version 12.00 3 | # Visual Studio 14 4 | VisualStudioVersion = 14.0.25420.1 5 | MinimumVisualStudioVersion = 10.0.40219.1 6 | Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "RabbitMqMessageTracking", "RabbitMqMessageTracking\RabbitMqMessageTracking.csproj", "{9C8A514B-D5BC-4B2C-AB02-1A63EB835E9A}" 7 | EndProject 8 | Global 9 | GlobalSection(SolutionConfigurationPlatforms) = preSolution 10 | Debug|Any CPU = Debug|Any CPU 11 | Release|Any CPU = Release|Any CPU 12 | EndGlobalSection 13 | GlobalSection(ProjectConfigurationPlatforms) = postSolution 14 | {9C8A514B-D5BC-4B2C-AB02-1A63EB835E9A}.Debug|Any CPU.ActiveCfg = Debug|Any CPU 15 | {9C8A514B-D5BC-4B2C-AB02-1A63EB835E9A}.Debug|Any CPU.Build.0 = Debug|Any CPU 16 | {9C8A514B-D5BC-4B2C-AB02-1A63EB835E9A}.Release|Any CPU.ActiveCfg = Release|Any CPU 17 | {9C8A514B-D5BC-4B2C-AB02-1A63EB835E9A}.Release|Any CPU.Build.0 = Release|Any CPU 18 | EndGlobalSection 19 | GlobalSection(SolutionProperties) = preSolution 20 | HideSolutionNode = FALSE 21 | EndGlobalSection 22 | EndGlobal 23 | -------------------------------------------------------------------------------- /Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/App.config: -------------------------------------------------------------------------------- 1 |  2 | 3 | 4 | 5 | 6 | -------------------------------------------------------------------------------- /Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/BulkMessagePublisher.cs: -------------------------------------------------------------------------------- 1 | using Newtonsoft.Json; 2 | using RabbitMQ.Client; 3 | using RabbitMQ.Client.Events; 4 | using RabbitMQ.Client.Exceptions; 5 | using System; 
6 | using System.Collections.Generic;
7 | using System.Linq;
8 | using System.Text;
9 | using System.Threading.Tasks;
10 | 
11 | namespace RabbitMqMessageTracking
12 | {
13 |     public class BulkMessagePublisher
14 |     {
15 |         /// <summary>
16 |         /// Sends the messages and tracks the send status of each message. Any exceptions are controlled and added to the returned IMessageTracker
17 |         /// </summary>
18 |         /// <typeparam name="T"></typeparam>
19 |         /// <param name="exchange">The exchange to send to</param>
20 |         /// <param name="routingKey">The routing key, empty string is permitted</param>
21 |         /// <param name="messages">A list of objects that will be converted to JSON and sent as individual messages</param>
22 |         /// <param name="messageBatchSize">Publishing will publish this number of messages at a time and then pause and wait for confirmation of delivery. Once an acknowledgement
23 |         /// of each message has been received, or a timeout is reached, the next batch is sent</param>
24 |         /// <remarks>Adds extra guarantees of correct message send status. Confirms can be received out of order. This means that once all
25 |         /// messages have been sent the channel can be closed prematurely due to incorrect ordering of confirms. The safety period keeps the channel open for an extra period, just in case we
26 |         /// receive more confirms. This safety period is not required when the messageBatchSize is 1</remarks>
27 |         /// <returns>A message tracker that provides you with the delivery status (to the exchange and queues - not the consumer) information, including errors that may have occurred</returns>
28 |         public IMessageTracker<T> SendMessages<T>(string exchange,
29 |             string routingKey,
30 |             List<T> messages,
31 |             int messageBatchSize)
32 |         {
33 |             var messageTracker = new MessageTracker<T>(messages);
34 | 
35 |             try
36 |             {
37 |                 SendBatch(exchange,
38 |                     routingKey,
39 |                     messageTracker.GetMessageStates(),
40 |                     messageTracker,
41 |                     messageBatchSize);
42 |             }
43 |             catch (Exception ex)
44 |             {
45 |                 messageTracker.RegisterUnexpectedException(ex);
46 |             }
47 | 
48 |             return messageTracker;
49 |         }
50 | 
51 |         /// <summary>
52 |         /// Sends the messages and tracks the send status of each message. Any exceptions are controlled and added to the returned IMessageTracker.
53 |         /// Additionally, this overload provides retries.
54 |         /// </summary>
55 |         /// <typeparam name="T"></typeparam>
56 |         /// <param name="exchange">The exchange to send to</param>
57 |         /// <param name="routingKey">The routing key, empty string is permitted</param>
58 |         /// <param name="messages">A list of objects that will be converted to JSON and sent as individual messages</param>
59 |         /// <param name="retryLimit">The number of retries to perform. If you set it to 3 for example, then up to 4 attempts are made in total</param>
60 |         /// <param name="retryPeriodMs">Milliseconds between each attempt</param>
61 |         /// <param name="messageBatchSize">Publishing will publish this number of messages at a time and then pause and wait for confirmation of delivery. Once an acknowledgement
62 |         /// of each message has been received, or a timeout is reached, the next batch is sent</param>
63 |         /// <remarks>Adds extra guarantees of correct message send status. Confirms can be received out of order. This means that once all
64 |         /// messages have been sent the channel can be closed prematurely due to incorrect ordering of confirms. The safety period keeps the channel open for an extra period, just in case we
65 |         /// receive more confirms. This safety period is not required when the messageBatchSize is 1</remarks>
66 |         /// <returns>A message tracker that provides you with the delivery status (to the exchange and queues - not the consumer) information, including errors that may have occurred</returns>
67 |         public async Task<IMessageTracker<T>> SendBatchWithRetryAsync<T>(string exchange,
68 |             string routingKey,
69 |             List<T> messages,
70 |             byte retryLimit,
71 |             short retryPeriodMs,
72 |             int messageBatchSize)
73 |         {
74 |             var messageTracker = new MessageTracker<T>(messages);
75 | 
76 |             try
77 |             {
78 |                 messageTracker = await SendBatchWithRetryAsync(exchange,
79 |                     routingKey,
80 |                     messageTracker.GetMessageStates(),
81 |                     messageTracker,
82 |                     retryLimit,
83 |                     retryPeriodMs,
84 |                     1,
85 |                     messageBatchSize).ConfigureAwait(false);
86 |             }
87 |             catch (Exception ex)
88 |             {
89 |                 messageTracker.RegisterUnexpectedException(ex);
90 |             }
91 | 
92 |             return messageTracker;
93 |         }
94 | 
95 |         private async Task<MessageTracker<T>> SendBatchWithRetryAsync<T>(string exchange,
96 |             string routingKey,
97 |             List<MessageState<T>> outgoingMessages,
98 |             MessageTracker<T> messageTracker,
99 |             byte retryLimit,
100 |             short retryPeriodMs,
101 |             byte attempt,
102 |             int messageBatchSize)
103 |         {
104 |             Console.WriteLine("Making attempt #" + attempt);
105 | 
106 |             try
107 |             {
108 |                 SendBatch(exchange,
109 |                     routingKey,
110 |                     outgoingMessages,
111 |                     messageTracker,
112 |                     messageBatchSize);
113 |             }
114 |             catch (Exception ex)
115 |             {
116 |                 messageTracker.RegisterUnexpectedException(ex);
117 |             }
118 | 
119 |             if (messageTracker.ShouldRetry() && attempt <= retryLimit) // attempt starts at 1, so this allows retryLimit retries (retryLimit + 1 attempts in total)
120 |             {
121 |                 attempt++;
122 | 
123 |                 Console.WriteLine("Will make attempt #" + attempt + " in " + retryPeriodMs + "ms");
124 |                 await Task.Delay(retryPeriodMs).ConfigureAwait(false);
125 | 
126 |                 // delivery tags are reset on a new channel so we need to get a cloned tracker with:
127 |                 // - an empty delivery tag dictionary
128 |                 // - acknowledgement flag set to false on all message states that we will retry
129 |                 // we also get the payloads that can be retried and then do another batch send with just these
130 |                 var newMessageTracker = messageTracker.GetCloneWithResetAcknowledgements();
131 |                 var retryablePayloads = messageTracker.GetRetryableMessages();
132 | 
133 |                 return await SendBatchWithRetryAsync(exchange,
134 |                     routingKey,
135 |                     retryablePayloads,
136 |                     newMessageTracker,
137 |                     retryLimit,
138 |                     retryPeriodMs,
139 |                     attempt,
140 |                     messageBatchSize).ConfigureAwait(false);
141 |             }
142 |             else
143 |             {
144 |                 return messageTracker;
145 |             }
146 |         }
147 | 
148 |         private void SendBatch<T>(string exchange,
149 |             string routingKey,
150 |             List<MessageState<T>> messageStates,
151 |             MessageTracker<T> messageTracker,
152 |             int messageBatchSize)
153 |         {
154 |             messageTracker.AttemptsMade++;
155 | 
156 |             var factory = new ConnectionFactory() { HostName = "localhost" };
157 |             factory.AutomaticRecoveryEnabled = false;
158 | 
159 |             using (var connection = factory.CreateConnection())
160 |             {
161 |                 using (var channel = connection.CreateModel())
162 |                 {
163 |                     channel.ConfirmSelect();
164 |                     channel.BasicAcks += (o, a) => AckCallback(a, messageTracker);
165 |                     channel.BasicNacks += (o, a) => NackCallback(a, messageTracker);
166 |                     channel.BasicReturn += (o, a) => ReturnedCallback(a, messageTracker);
167 |                     channel.ModelShutdown += (o, a) => ModelShutdown(a, messageTracker);
168 | 
169 |                     int counter = 0;
170 |                     foreach (var messageState in messageStates)
171 |                     {
172 |                         counter++;
173 |                         // create the RabbitMq message from the MessagePayload (the Order class)
174 |                         var messageJson = JsonConvert.SerializeObject(messageState.MessagePayload);
175 |                         var body = Encoding.UTF8.GetBytes(messageJson);
176 |                         var properties = channel.CreateBasicProperties();
177 |                         properties.Persistent = true;
178 |                         properties.MessageId = messageState.MessageId;
179 |                         properties.Headers = new Dictionary<string, object>();
180 | 
181 |                         if (messageState.SendCount > 0)
182 |                             properties.Headers.Add("republished", true);
183 | 
184 |                         // get the next sequence number (delivery tag) and register it with this MessageState object
185 |                         var deliveryTag = channel.NextPublishSeqNo;
186 |                         messageTracker.SetDeliveryTag(deliveryTag, messageState);
187 |                         messageState.Status = SendStatus.PendingResponse;
188 |                         messageState.SendCount++;
189 | 
190 |                         // send the message
191 |                         try
192 |                         {
193 |                             channel.BasicPublish(exchange: exchange,
194 |                                 routingKey: routingKey,
195 |                                 basicProperties: properties,
196 |                                 body: body,
197 |                                 mandatory: true);
198 | 
199 |                             if (counter % messageBatchSize == 0)
200 |                                 channel.WaitForConfirms(TimeSpan.FromMinutes(1));
201 |                         }
202 |                         catch (OperationInterruptedException ex)
203 |                         {
204 |                             if (ex.ShutdownReason.ReplyCode == 404)
205 |                                 messageTracker.SetStatus(messageState.MessageId, SendStatus.NoExchangeFound, ex.Message);
206 |                             else
207 |                                 messageTracker.SetStatus(messageState.MessageId, SendStatus.Failed, ex.Message);
208 |                         }
209 |                         catch (Exception ex)
210 |                         {
211 |                             messageTracker.SetStatus(messageState.MessageId, SendStatus.Failed, ex.Message);
212 |                         }
213 | 
214 |                         if (channel.IsClosed || messageTracker.PublishingInterrupted)
215 |                             return;
216 |                     }
217 | 
218 |                     channel.WaitForConfirms(TimeSpan.FromMinutes(1));
219 |                 }
220 |             } // already disposed exception here
221 |         }
222 | 
223 |         private void AckCallback<T>(BasicAckEventArgs ea, MessageTracker<T> messageTracker)
224 |         {
225 |             if (ea.Multiple)
226 |                 messageTracker.SetMultipleStatus(ea.DeliveryTag, SendStatus.Success);
227 |             else
228 |                 messageTracker.SetStatus(ea.DeliveryTag, SendStatus.Success);
229 |         }
230 | 
231 |         private void NackCallback<T>(BasicNackEventArgs ea, MessageTracker<T> messageTracker)
232 |         {
233 |             if (ea.Multiple)
234 |                 messageTracker.SetMultipleStatus(ea.DeliveryTag, SendStatus.Failed);
235 |             else
236 |                 messageTracker.SetStatus(ea.DeliveryTag, SendStatus.Failed);
237 |         }
238 | 
239 |         private void ReturnedCallback<T>(BasicReturnEventArgs ea, MessageTracker<T> messageTracker)
240 |         {
241 |             messageTracker.SetStatus(ea.BasicProperties.MessageId,
242 |                 SendStatus.Unroutable,
243 |                 string.Format("Reply Code: {0} Reply Text: {1}", ea.ReplyCode,
                ea.ReplyText));
244 |         }
245 | 
246 |         private void ModelShutdown<T>(ShutdownEventArgs ea, MessageTracker<T> messageTracker)
247 |         {
248 |             if (ea.ReplyCode != 200)
249 |                 messageTracker.RegisterChannelClosed("Reply Code: " + ea.ReplyCode + " Reply Text: " + ea.ReplyText);
250 |         }
251 |     }
252 | }
253 | 
--------------------------------------------------------------------------------
/Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/IMessageState.cs:
--------------------------------------------------------------------------------
1 | using System;
2 | using System.Collections.Generic;
3 | using System.Linq;
4 | using System.Text;
5 | using System.Threading.Tasks;
6 | 
7 | namespace RabbitMqMessageTracking
8 | {
9 |     public interface IMessageState<T>
10 |     {
11 |         T MessagePayload { get; }
12 |         SendStatus Status { get; }
13 |         string Description { get; }
14 |         string MessageId { get; }
15 |         int SendCount { get; }
16 |     }
17 | }
18 | 
--------------------------------------------------------------------------------
/Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/IMessageTracker.cs:
--------------------------------------------------------------------------------
1 | using System;
2 | using System.Collections.Generic;
3 | using System.Linq;
4 | using System.Text;
5 | using System.Threading.Tasks;
6 | 
7 | namespace RabbitMqMessageTracking
8 | {
9 |     public interface IMessageTracker<T>
10 |     {
11 |         List<MessageState<T>> GetMessageStates();
12 |         bool PublishingInterrupted { get; }
13 |         string InterruptionReason { get; }
14 |         Exception UnexpectedException { get; }
15 |         int AttemptsMade { get; }
16 |     }
17 | }
18 | 
--------------------------------------------------------------------------------
/Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/MessageState.cs:
--------------------------------------------------------------------------------
1 | using System;
2 | using System.Collections.Generic;
3 | using System.Linq;
4 | using System.Text;
5 | using System.Threading.Tasks;
6 | 
7 | namespace RabbitMqMessageTracking
8 | {
9 |     public class MessageState<T> : IMessageState<T>
10 |     {
11 |         public MessageState(T payload)
12 |         {
13 |             MessagePayload = payload;
14 |             MessageId = Guid.NewGuid().ToString();
15 |         }
16 | 
17 |         public T MessagePayload { get; set; }
18 |         public SendStatus Status { get; set; }
19 |         public bool Acknowledged { get; set; }
20 |         public ulong SequenceNumber { get; set; }
21 |         public string Description { get; set; }
22 |         public string MessageId { get; set; }
23 |         public int SendCount { get; set; }
24 |     }
25 | }
26 | 
--------------------------------------------------------------------------------
/Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/MessageTracker.cs:
--------------------------------------------------------------------------------
1 | using System;
2 | using System.Collections.Concurrent;
3 | using System.Collections.Generic;
4 | using System.Linq;
5 | using System.Text;
6 | using System.Threading;
7 | using System.Threading.Tasks;
8 | 
9 | namespace RabbitMqMessageTracking
10 | {
11 |     public class MessageTracker<T> : IMessageTracker<T>
12 |     {
13 |         // To keep all messages to be sent
14 |         private List<MessageState<T>> _statesMaster;
15 | 
16 |         // For high performance access based on delivery tag (sequence number)
17 |         private ConcurrentDictionary<ulong, MessageState<T>> _statesByDeliveryTag;
18 | 
19 |         // For high performance access based on message id
20 |         private ConcurrentDictionary<string, MessageState<T>> _statesByMessageId;
21 | 
22 | 
23 |         private int _attempt;
24 |         private int _messageCount;
25 |         private bool _channelClosed;
26 |         private string _channelClosedReason;
27 |         private Exception _unexpectedException;
28 | 
29 |         public MessageTracker(List<T> payloads)
30 |         {
31 |             _statesByDeliveryTag = new ConcurrentDictionary<ulong, MessageState<T>>();
32 |             _statesByMessageId = new ConcurrentDictionary<string, MessageState<T>>();
33 |             _statesMaster = new List<MessageState<T>>();
34 | 
35 |             foreach (var payload in payloads)
36 |             {
37 |                 var outgoingMessage = new MessageState<T>(payload);
38 |                 _statesByMessageId.TryAdd(outgoingMessage.MessageId, outgoingMessage);
39 |                 _statesMaster.Add(outgoingMessage);
40 |             }
41 |         }
42 | 
43 |         private MessageTracker(ConcurrentDictionary<string, MessageState<T>> resultsByMessageId,
44 |             List<MessageState<T>> results,
45 |             int attemptsMade)
46 |         {
47 |             _statesByDeliveryTag = new ConcurrentDictionary<ulong, MessageState<T>>();
48 |             _statesByMessageId = resultsByMessageId;
49 |             _statesMaster = results;
50 |             AttemptsMade = attemptsMade;
51 |         }
52 | 
53 |         public MessageTracker<T> GetCloneWithResetAcknowledgements()
54 |         {
55 |             // no need to keep messages that will not be retried in
56 |             // the message id dictionary
57 |             var statesByMessageId = new ConcurrentDictionary<string, MessageState<T>>();
58 |             foreach (var key in _statesByMessageId.Keys)
59 |             {
60 |                 var result = _statesByMessageId[key];
61 |                 if (CanBeRetried(result.Status))
62 |                 {
63 |                     result.Acknowledged = false;
64 |                     statesByMessageId.TryAdd(key, result);
65 |                 }
66 |             }
67 | 
68 |             // reset sequence numbers
69 |             foreach (var message in _statesMaster)
70 |                 message.SequenceNumber = 0;
71 | 
72 |             return new MessageTracker<T>(statesByMessageId, _statesMaster, AttemptsMade);
73 |         }
74 | 
75 |         public List<MessageState<T>> GetRetryableMessages()
76 |         {
77 |             return _statesMaster.Where(x => !CannotBeRetried(x.Status)).ToList();
78 |         }
79 | 
80 |         private bool CanBeRetried(SendStatus status)
81 |         {
82 |             return !CannotBeRetried(status);
83 |         }
84 | 
85 |         private bool CannotBeRetried(SendStatus status)
86 |         {
87 |             return status == SendStatus.Success || status == SendStatus.NoExchangeFound || status == SendStatus.Unroutable;
88 |         }
89 | 
90 |         public void RegisterNewAttempt()
91 |         {
92 |             _attempt++;
93 |         }
94 | 
95 |         public void SetDeliveryTag(ulong deliveryTag, MessageState<T> outgoingMessage)
96 |         {
97 |             outgoingMessage.SequenceNumber = deliveryTag;
98 |             _statesByDeliveryTag.TryAdd(deliveryTag, outgoingMessage);
99 |         }
100 | 
101 |         public void SetStatus(ulong deliveryTag, SendStatus status)
102 |         {
103 |             SetStatus(deliveryTag, status, "");
104 |         }
105 | 
106 |         public void SetStatus(ulong deliveryTag, SendStatus status, string description)
107 |         {
108 |             var messageState = _statesByDeliveryTag[deliveryTag];
109 |             SetSendStatus(messageState, status, description);
110 |         }
111 | 
112 |         public void SetStatus(string messageId, SendStatus status)
113 |         {
114 |             SetStatus(messageId, status, "");
115 |         }
116 | 
117 |         public void SetStatus(string messageId, SendStatus status, string description)
118 |         {
119 |             var messageState = _statesByMessageId[messageId];
120 |             SetSendStatus(messageState, status, description);
121 |         }
122 | 
123 |         public void SetMultipleStatus(ulong deliveryTag, SendStatus status)
124 |         {
125 |             var pendingResponse = _statesMaster.Where(x => x.SequenceNumber > 0
126 |                 && x.SequenceNumber <= deliveryTag
127 |                 && x.Status == SendStatus.PendingResponse);
128 | 
129 |             foreach (var pending in pendingResponse)
130 |                 SetStatus(pending.SequenceNumber, status);
131 |         }
132 | 
133 |         public void RegisterChannelClosed(string reason)
134 |         {
135 |             _channelClosed = true;
136 |             _channelClosedReason = reason;
137 |         }
138 | 
139 |         public void RegisterUnexpectedException(Exception exception)
140 |         {
141 |             _unexpectedException = exception;
142 |             _channelClosed = true;
143 |             _channelClosedReason = "Unexpected exception";
144 |         }
145 | 
146 |         public bool ShouldRetry()
147 |         {
148 |             return !_statesMaster.All(x => x.Status == SendStatus.Success || x.Status == SendStatus.NoExchangeFound || x.Status == SendStatus.Unroutable);
149 |         }
150 | 
151 |         public List<MessageState<T>> GetMessageStates()
152 |         {
153 |             return _statesMaster;
154 |         }
155 | 
156 |         public bool PublishingInterrupted
157 |         {
158 |             get { return _channelClosed || _unexpectedException != null; }
159 |         }
160 | 
161 |         public string InterruptionReason
162 |         {
163 |             get { return _channelClosedReason; }
164 |         }
165 | 
166 |         public Exception UnexpectedException
167 |         {
168 |             get { return _unexpectedException; }
169 |         }
170 | 
171 |         public int AttemptsMade { get; set; }
172 | 
173 |         private void SetSendStatus(MessageState<T> messageState, SendStatus
status, string description)
174 |         {
175 |             if (status == SendStatus.NoExchangeFound)
176 |             {
177 |                 foreach (var state in _statesMaster)
178 |                 {
179 |                     state.Status = status;
180 |                     state.Acknowledged = true;
181 |                 }
182 |             }
183 |             // unroutable messages get a BasicReturn followed by a BasicAck, so we want to ignore that ack
184 |             else if (messageState.Status != SendStatus.Unroutable)
185 |             {
186 |                 messageState.Status = status;
187 |                 messageState.Description = description;
188 |                 messageState.Acknowledged = true;
189 |             }
190 |         }
191 |     }
192 | }
193 | 
--------------------------------------------------------------------------------
/Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/Program.cs:
--------------------------------------------------------------------------------
1 | using RabbitMQ.Client;
2 | using System;
3 | using System.Collections.Generic;
4 | using System.Diagnostics;
5 | using System.Linq;
6 | using System.Text;
7 | using System.Threading.Tasks;
8 | 
9 | namespace RabbitMqMessageTracking
10 | {
11 |     public class Order
12 |     {
13 |         public int OrderId { get; set; }
14 |         public int ClientId { get; set; }
15 |         public string ProductCode { get; set; }
16 |         public string OfferCode { get; set; }
17 |         public int Quantity { get; set; }
18 |         public decimal UnitPrice { get; set; }
19 |     }
20 | 
21 |     class Program
22 |     {
23 |         static void Main(string[] args)
24 |         {
25 |             MainAsync().Wait();
26 |         }
27 | 
28 |         static async Task MainAsync()
29 |         {
30 |             try
31 |             {
32 |                 Console.WriteLine("This code is explained in my blog series on RabbitMq Publishing starting at: http://jack-vanlightly.com/blog/2017/3/11/sending-messages-in-bulk-and-tracking-delivery-status-rabbitmq-publishing-part-2");
33 |                 Console.WriteLine("Enter 2 for the code of Part 2");
34 |                 Console.WriteLine("Enter 3 for the code of Part 3");
35 |                 int part = int.Parse(Console.ReadLine());
36 | 
37 |                 if (part == 2)
38 |                     await Part2().ConfigureAwait(false);
39 |                 else if (part == 3)
40 |                     await Part3().ConfigureAwait(false);
41 |             }
42 |             catch (Exception ex)
43 |             {
44 |                 Console.WriteLine(ex.ToString());
45 | 
46 |             }
47 |         }
48 | 
49 | 
50 |         #region .: Part 2 :.
51 | 
52 |         private static async Task Part2()
53 |         {
54 |             SetupPart2();
55 | 
56 |             while (true)
57 |             {
58 |                 int messagesBefore = GetMessageCountPart2();
59 |                 Console.WriteLine("");
60 |                 Console.WriteLine("The topic exchange: \"order\" with queue: \"order.new\" with binding key \"new\" has been created on your local RabbitMq");
61 |                 Console.WriteLine("Enter a number of messages to publish. If any failures occur, 2 retries will be attempted with 1s between attempts");
62 |                 int number = int.Parse(Console.ReadLine());
63 | 
64 |                 var orders = new List<Order>();
65 |                 var rand = new Random();
66 |                 for (int i = 0; i < number; i++)
67 |                 {
68 |                     var order = new Order()
69 |                     {
70 |                         OrderId = i,
71 |                         ClientId = rand.Next(1000000),
72 |                         OfferCode = "badgers",
73 |                         ProductCode = "HGDHGDF",
74 |                         Quantity = 10,
75 |                         UnitPrice = 9.99M
76 |                     };
77 | 
78 |                     orders.Add(order);
79 |                 }
80 | 
81 |                 Console.WriteLine("Enter an exchange: ");
82 |                 var exchange = Console.ReadLine();
83 |                 Console.WriteLine("Enter a routing key: ");
84 |                 var routingKey = Console.ReadLine();
85 |                 Console.WriteLine("Enter a message batch size: ");
86 |                 var messageBatchSize = int.Parse(Console.ReadLine());
87 | 
88 |                 var sw = new Stopwatch();
89 |                 sw.Start();
90 |                 var bulkEventPublisher = new BulkMessagePublisher();
91 |                 var messageTracker = await bulkEventPublisher.SendBatchWithRetryAsync(exchange, routingKey, orders, 2, 1000, messageBatchSize);
92 | 
93 |                 sw.Stop();
94 |                 Console.WriteLine("Milliseconds elapsed: " + (int)sw.Elapsed.TotalMilliseconds);
95 | 
96 |                 if (messageTracker.PublishingInterrupted)
97 |                 {
98 |                     var maxSendCount = messageTracker.GetMessageStates().Max(x => x.SendCount);
99 |                     Console.WriteLine("Publishing was interrupted, with " + (messageTracker.AttemptsMade - 1) + " retries made");
100 |                     Console.WriteLine("Interruption reason: " + messageTracker.InterruptionReason);
101 |                     Console.WriteLine("Number of republished messages: " + messageTracker.GetMessageStates()
102 |                         .Count(x => x.SendCount > 1));
103 |                 }
104 | 
105 |                 int messagesAfter = GetMessageCountPart2();
106 |                 int newMessagesInQueue = messagesAfter - messagesBefore;
107 |                 int confirmedSuccessCount = messageTracker.GetMessageStates().Count(x => x.Status == SendStatus.Success);
108 |                 int unackedCount = messageTracker.GetMessageStates().Count(x => x.Status == SendStatus.PendingResponse);
109 | 
110 |                 if (newMessagesInQueue > confirmedSuccessCount + unackedCount)
111 |                     Console.WriteLine((newMessagesInQueue - (confirmedSuccessCount + unackedCount)) + " duplicate messages created!!!!!!");
112 |                 else
113 |                     Console.WriteLine("No duplicate messages created");
114 | 
115 |                 Console.WriteLine("");
116 |                 Console.WriteLine("Final message status counts:");
117 |                 var groupedByStatus = messageTracker.GetMessageStates().GroupBy(x => x.Status);
118 |                 foreach (var group in groupedByStatus)
119 |                 {
120 |                     if (!string.IsNullOrEmpty(group.First().Description))
121 |                         Console.WriteLine(group.Key + " " + group.Count() + " : " + group.First().Description);
122 |                     else
123 |                         Console.WriteLine(group.Key + " " + group.Count());
124 |                 }
125 |             }
126 |         }
127 | 
128 |         private static void SetupPart2()
129 |         {
130 |             DeletePart2();
131 |             DeletePart3();
132 | 
133 |             var factory = new ConnectionFactory() { HostName = "localhost" };
134 |             using (var connection = factory.CreateConnection())
135 |             {
136 |                 using (var channel = connection.CreateModel())
137 |                 {
138 |                     channel.ExchangeDeclare("order", "topic", true);
139 |                     channel.QueueDeclare("order.new", true, false, false, new Dictionary<string, object>());
140 |                     channel.QueueBind("order.new", "order", "new");
141 |                 }
142 |             }
143 |         }
144 | 
145 |         private static void DeletePart2()
146 |         {
147 |             var factory = new ConnectionFactory() { HostName = "localhost" };
148 |             using (var connection = factory.CreateConnection())
149 |             {
150 |                 using (var channel = connection.CreateModel())
151 |                 {
152 |                     channel.ExchangeDelete("order", false);
153 |                     channel.QueueDelete("order.new", false);
154 |                 }
155 |             }
156 |         }
157 | 
158 |         private static int GetMessageCountPart2()
159 |         {
160 |             var factory = new ConnectionFactory() { HostName = "localhost" };
161 |             using (var connection = factory.CreateConnection())
162 |             {
163 |                 using (var channel = connection.CreateModel())
164 |                 {
165 |                     var declareOk = channel.QueueDeclare("order.new", true, false, false, new Dictionary<string, object>());
166 |                     return (int)declareOk.MessageCount;
167 |                 }
168 |             }
169 |         }
170 | 
171 |         #endregion .: Part 2 :.
172 | 
173 | 
174 |         #region .: Part 3 :.
175 | 
176 |         private static async Task Part3()
177 |         {
178 |             SetupPart3();
179 | 
180 |             while (true)
181 |             {
182 |                 int messagesBefore = GetOrderQueueMessageCountPart3();
183 |                 int unroutableBefore = GetUnroutableOrderQueueMessageCountPart3();
184 |                 Console.WriteLine("");
185 |                 Console.WriteLine("The topic exchange: \"order\" with queue: \"order.new\" with binding key \"new\" has been created on your local RabbitMq");
186 |                 Console.WriteLine("Enter a number of messages to publish. If any failures occur, 2 retries will be attempted with 1s between attempts");
187 |                 int number = int.Parse(Console.ReadLine());
188 | 
189 |                 var orders = new List<Order>();
190 |                 var rand = new Random();
191 |                 for (int i = 0; i < number; i++)
192 |                 {
193 |                     var order = new Order()
194 |                     {
195 |                         OrderId = i,
196 |                         ClientId = rand.Next(1000000),
197 |                         OfferCode = "badgers",
198 |                         ProductCode = "HGDHGDF",
199 |                         Quantity = 10,
200 |                         UnitPrice = 9.99M
201 |                     };
202 | 
203 |                     orders.Add(order);
204 |                 }
205 | 
206 |                 Console.WriteLine("Enter an exchange: ");
207 |                 var exchange = Console.ReadLine();
208 |                 Console.WriteLine("Enter a routing key: ");
209 |                 var routingKey = Console.ReadLine();
210 |                 Console.WriteLine("Enter a message batch size: ");
211 |                 var messageBatchSize = int.Parse(Console.ReadLine());
212 | 
213 |                 var sw = new Stopwatch();
214 |                 sw.Start();
215 |                 var bulkEventPublisher = new BulkMessagePublisher();
216 |                 var messageTracker = await bulkEventPublisher.SendBatchWithRetryAsync(exchange, routingKey, orders, 2, 1000, messageBatchSize);
217 | 
218 |                 sw.Stop();
219 |                 Console.WriteLine("Milliseconds elapsed: " + (int)sw.Elapsed.TotalMilliseconds);
220 | 
221 |                 if (messageTracker.PublishingInterrupted)
222 |                 {
223 |                     var maxSendCount = messageTracker.GetMessageStates().Max(x => x.SendCount);
224 |                     Console.WriteLine("Publishing was interrupted, with " + (messageTracker.AttemptsMade - 1) + " retries made");
225 |                     Console.WriteLine("Interruption reason: " + messageTracker.InterruptionReason);
226 |                     Console.WriteLine("Number of republished messages: " + messageTracker.GetMessageStates()
227 |                         .Count(x => x.SendCount > 1));
228 |                 }
229 | 
230 |                 int messagesAfter = GetOrderQueueMessageCountPart3();
231 |                 int unroutableAfter = GetUnroutableOrderQueueMessageCountPart3();
232 | 
233 |                 int addedMessages = messagesAfter > messagesBefore ? messagesAfter - messagesBefore : 0;
234 |                 int unroutableMessages = unroutableAfter - unroutableBefore;
235 |                 Console.WriteLine(addedMessages + " added to order.new, " + unroutableMessages + " added to order.unroutable");
236 | 
237 |                 Console.WriteLine("");
238 |                 Console.WriteLine("Final message status counts:");
239 |                 var groupedByStatus = messageTracker.GetMessageStates().GroupBy(x => x.Status);
240 |                 foreach (var group in groupedByStatus)
241 |                 {
242 |                     if (!string.IsNullOrEmpty(group.First().Description))
243 |                         Console.WriteLine(group.Key + " " + group.Count() + " : " + group.First().Description);
244 |                     else
245 |                         Console.WriteLine(group.Key + " " + group.Count());
246 |                 }
247 |             }
248 |         }
249 | 
250 |         private static void DeletePart3()
251 |         {
252 |             var factory = new ConnectionFactory() { HostName = "localhost" };
253 |             using (var connection = factory.CreateConnection())
254 |             {
255 |                 using (var channel = connection.CreateModel())
256 |                 {
257 |                     channel.ExchangeDelete("order", false);
258 |                     channel.QueueDelete("order.new", false);
259 |                     channel.ExchangeDelete("order.unroutable", false);
260 |                     channel.QueueDelete("order.unroutable", false);
261 |                 }
262 |             }
263 |         }
264 | 
265 |         private static void SetupPart3()
266 |         {
267 |             DeletePart2();
268 | 
269 |             var factory = new ConnectionFactory() { HostName = "localhost" };
270 |             using (var connection = factory.CreateConnection())
271 |             {
272 |                 using (var channel = connection.CreateModel())
273 |                 {
274 |                     channel.ExchangeDeclare("order.unroutable", "headers", true);
275 |                     channel.QueueDeclare("order.unroutable", true, false, false, null);
276 |                     channel.QueueBind("order.unroutable", "order.unroutable", "");
277 | 
278 |                     var props = new Dictionary<string, object>();
279 |                     props.Add("alternate-exchange", "order.unroutable");
280 |                     channel.ExchangeDeclare("order", "topic", true, false, props);
281 |                     channel.QueueDeclare("order.new", true, false, false, null);
282 |                     channel.QueueBind("order.new", "order", "new");
283 | 
284 |                 }
285 |             }
286 |         }
287 | 
288 |         private static int GetOrderQueueMessageCountPart3()
289 |         {
290 |             var factory = new ConnectionFactory() { HostName = "localhost" };
291 |             using (var connection = factory.CreateConnection())
292 |             {
293 |                 using (var channel = connection.CreateModel())
294 |                 {
295 |                     var declareOk = channel.QueueDeclare("order.new", true, false, false, null);
296 |                     return (int)declareOk.MessageCount;
297 |                 }
298 |             }
299 |         }
300 | 
301 |         private static int GetUnroutableOrderQueueMessageCountPart3()
302 |         {
303 |             var factory = new ConnectionFactory() { HostName = "localhost" };
304 |             using (var connection = factory.CreateConnection())
305 |             {
306 |                 using (var channel = connection.CreateModel())
307 |                 {
308 |                     var declareOk = channel.QueueDeclare("order.unroutable", true, false, false, null);
309 |                     return (int)declareOk.MessageCount;
310 |                 }
311 |             }
312 |         }
313 | 
314 |         #endregion .: Part 3 :.
315 |     }
316 | }
317 | 
--------------------------------------------------------------------------------
/Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/Properties/AssemblyInfo.cs:
--------------------------------------------------------------------------------
1 | using System.Reflection;
2 | using System.Runtime.CompilerServices;
3 | using System.Runtime.InteropServices;
4 | 
5 | // General Information about an assembly is controlled through the following
6 | // set of attributes. Change these attribute values to modify the information
7 | // associated with an assembly.
8 | [assembly: AssemblyTitle("RabbitMqMessageTracking")] 9 | [assembly: AssemblyDescription("")] 10 | [assembly: AssemblyConfiguration("")] 11 | [assembly: AssemblyCompany("Hewlett-Packard Company")] 12 | [assembly: AssemblyProduct("RabbitMqMessageTracking")] 13 | [assembly: AssemblyCopyright("Copyright © Hewlett-Packard Company 2017")] 14 | [assembly: AssemblyTrademark("")] 15 | [assembly: AssemblyCulture("")] 16 | 17 | // Setting ComVisible to false makes the types in this assembly not visible 18 | // to COM components. If you need to access a type in this assembly from 19 | // COM, set the ComVisible attribute to true on that type. 20 | [assembly: ComVisible(false)] 21 | 22 | // The following GUID is for the ID of the typelib if this project is exposed to COM 23 | [assembly: Guid("9c8a514b-d5bc-4b2c-ab02-1a63eb835e9a")] 24 | 25 | // Version information for an assembly consists of the following four values: 26 | // 27 | // Major Version 28 | // Minor Version 29 | // Build Number 30 | // Revision 31 | // 32 | // You can specify all the values or you can default the Build and Revision Numbers 33 | // by using the '*' as shown below: 34 | // [assembly: AssemblyVersion("1.0.*")] 35 | [assembly: AssemblyVersion("1.0.0.0")] 36 | [assembly: AssemblyFileVersion("1.0.0.0")] 37 | -------------------------------------------------------------------------------- /Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/RabbitMqMessageTracking.csproj: -------------------------------------------------------------------------------- 1 | <?xml version="1.0" encoding="utf-8"?> 2 | <Project ToolsVersion="14.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> 3 | <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" /> 4 | <PropertyGroup> 5 | <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration> 6 | <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform> 7 | <ProjectGuid>{9C8A514B-D5BC-4B2C-AB02-1A63EB835E9A}</ProjectGuid> 8 | <OutputType>Exe</OutputType> 9 | <AppDesignerFolder>Properties</AppDesignerFolder> 10 | <RootNamespace>RabbitMqMessageTracking</RootNamespace> 11 | <AssemblyName>RabbitMqMessageTracking</AssemblyName> 12 | <TargetFrameworkVersion>v4.6.1</TargetFrameworkVersion> 13 | <FileAlignment>512</FileAlignment> 14 | <AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects> 15 | </PropertyGroup> 16 | <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' "> 17 | <PlatformTarget>AnyCPU</PlatformTarget> 18 | <DebugSymbols>true</DebugSymbols> 19 | <DebugType>full</DebugType> 20 | <Optimize>false</Optimize> 21 | <OutputPath>bin\Debug\</OutputPath> 22 | <DefineConstants>DEBUG;TRACE</DefineConstants> 23 | <ErrorReport>prompt</ErrorReport> 24 | <WarningLevel>4</WarningLevel> 25 | </PropertyGroup> 26 | <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' "> 27 | <PlatformTarget>AnyCPU</PlatformTarget> 28 | <DebugType>pdbonly</DebugType> 29 | <Optimize>true</Optimize> 30 | <OutputPath>bin\Release\</OutputPath> 31 | <DefineConstants>TRACE</DefineConstants> 32 | <ErrorReport>prompt</ErrorReport> 33 | <WarningLevel>4</WarningLevel> 34 | </PropertyGroup> 35 | <ItemGroup> 36 | <Reference Include="Newtonsoft.Json"> 37 | 
<HintPath>..\packages\Newtonsoft.Json.9.0.1\lib\net45\Newtonsoft.Json.dll</HintPath> 38 | <Private>True</Private> 39 | </Reference> 40 | <Reference Include="RabbitMQ.Client"> 41 | <HintPath>..\packages\RabbitMQ.Client.4.1.3\lib\net451\RabbitMQ.Client.dll</HintPath> 42 | <Private>True</Private> 43 | </Reference> 44 | <Reference Include="System" /> 45 | <Reference Include="System.Core" /> 46 | <Reference Include="System.Xml.Linq" /> 47 | <Reference Include="System.Data.DataSetExtensions" /> 48 | <Reference Include="Microsoft.CSharp" /> 49 | <Reference Include="System.Data" /> 50 | <Reference Include="System.Net.Http" /> 51 | <Reference Include="System.Xml" /> 52 | </ItemGroup> 53 | <ItemGroup> 54 | <Compile Include="BulkMessagePublisher.cs" /> 55 | <Compile Include="IMessageState.cs" /> 56 | <Compile Include="IMessageTracker.cs" /> 57 | <Compile Include="MessageState.cs" /> 58 | <Compile Include="MessageTracker.cs" /> 59 | <Compile Include="Program.cs" /> 60 | <Compile Include="Properties\AssemblyInfo.cs" /> 61 | <Compile Include="SendStatus.cs" /> 62 | <Compile Include="SingleMessagePublisher.cs" /> 63 | </ItemGroup> 64 | <ItemGroup> 65 | <None Include="App.config" /> 66 | <None Include="packages.config" /> 67 | </ItemGroup> 68 | <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> 69 | <!-- To modify your build process, add your task inside one of the targets below and uncomment it. Other similar extension points exist, see Microsoft.Common.targets. <Target Name="BeforeBuild"> </Target> <Target Name="AfterBuild"> </Target> --> 76 | </Project> -------------------------------------------------------------------------------- /Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/SendStatus.cs: -------------------------------------------------------------------------------- 1 | using System; 2 | using System.Collections.Generic; 3 | using System.Linq; 4 | using System.Text; 5 | using System.Threading.Tasks; 6 | 7 | namespace RabbitMqMessageTracking 8 | { 9 | public enum SendStatus 10 | { 11 | PendingSend, // have not sent the message yet 12 | PendingResponse, // sent the message, waiting for an ack 13 | Success, // ack received 14 | Failed, // nack received 15 | Unroutable, // message returned 16 | NoExchangeFound // 404 reply code 17 | } 18 | } 19 | -------------------------------------------------------------------------------- /Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/SingleMessagePublisher.cs: -------------------------------------------------------------------------------- 1 | using System; 2 | using System.Collections.Generic; 3 | using System.Linq; 4 | using System.Text; 5 | using System.Threading.Tasks; 6 | 7 | namespace RabbitMqMessageTracking 8 | { 9 | public class SingleMessagePublisher : BulkMessagePublisher 10 | { 11 | public IMessageState<T> Send<T>(string exchange, 12 | string routingKey, 13 | T message) 14 | { 15 | var messageTracker = SendMessages(exchange, routingKey, new List<T>() { message }, 1); 16 | 17 | return messageTracker.GetMessageStates().First(); 18 | } 19 | 20 | public async Task<IMessageState<T>> SendAsyncWithRetry<T>(string exchange, 21 | string routingKey, 22 | T message, 23 | byte retryLimit, 24 | short retryPeriodMs) 25 | { 26 | var messageTracker = await 
SendBatchWithRetryAsync(exchange, routingKey, new List<T>() { message }, retryLimit, retryPeriodMs, 1); 27 | 28 | return messageTracker.GetMessageStates().First(); 29 | } 30 | } 31 | } 32 | -------------------------------------------------------------------------------- /Publishing/Net461/RabbitMqMessageTracking/RabbitMqMessageTracking/packages.config: -------------------------------------------------------------------------------- 1 | <?xml version="1.0" encoding="utf-8"?> 2 | <packages> 3 | <package id="Newtonsoft.Json" version="9.0.1" targetFramework="net461" /> 4 | <package id="RabbitMQ.Client" version="4.1.3" targetFramework="net461" /> 5 | </packages> -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # RabbitMq-PoC-Code 2 | Just proof of concept code for working with the C# RabbitMq client. 3 | 4 | Currently I have some example code showing how to track message delivery status (delivery to the exchange and queues - not the consumer) while doing bulk publishing. See my related blog post: http://jack-vanlightly.com/blog/2017/3/11/sending-messages-in-bulk-and-tracking-delivery-status-rabbitmq-publishing-part-2 5 | --------------------------------------------------------------------------------
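The delivery-status tracking the README describes (and that the `SendStatus` enum above models) can be sketched without a broker: each message starts as PendingSend, becomes PendingResponse once published, and settles as Success on a broker ack, Failed on a nack, or Unroutable when the broker returns it via `basic.return`. The Python sketch below is illustrative only — the class and method names are not the repo's API, and real correlation of returned messages uses message properties rather than confirm sequence numbers.

```python
from enum import Enum

class SendStatus(Enum):
    PENDING_SEND = 1      # not published yet
    PENDING_RESPONSE = 2  # published, awaiting confirm
    SUCCESS = 3           # ack received
    FAILED = 4            # nack received
    UNROUTABLE = 5        # message returned by the broker

class MessageTracker:
    """Tracks one SendStatus per message, keyed by publisher-confirm seq no."""
    def __init__(self):
        self.states = {}

    def register_publish(self, seq_no):
        self.states[seq_no] = SendStatus.PENDING_RESPONSE

    def on_return(self, seq_no):
        # broker bounced the message back: no queue matched the routing key
        self.states[seq_no] = SendStatus.UNROUTABLE

    def on_ack(self, seq_no, multiple=False):
        self._settle(seq_no, multiple, SendStatus.SUCCESS)

    def on_nack(self, seq_no, multiple=False):
        self._settle(seq_no, multiple, SendStatus.FAILED)

    def _settle(self, seq_no, multiple, verdict):
        # 'multiple' confirms cover every outstanding seq no up to seq_no
        targets = [s for s in self.states if s <= seq_no] if multiple else [seq_no]
        for s in targets:
            # a returned message is still ack'd afterwards; keep the
            # Unroutable verdict rather than overwriting it with Success
            if self.states[s] is SendStatus.PENDING_RESPONSE:
                self.states[s] = verdict

tracker = MessageTracker()
for seq in (1, 2, 3):
    tracker.register_publish(seq)
tracker.on_return(2)   # message 2 had no matching queue
tracker.on_ack(2)      # broker still confirms it; Unroutable verdict kept
tracker.on_ack(1)
tracker.on_nack(3)
```

The key subtlety mirrored from the C# tracker is the last-write rule: a return arrives before the corresponding ack, so the Unroutable verdict must win over the later Success.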