├── .gitignore ├── LICENSE ├── README ├── log4j-appender-INTRO.txt ├── log4j-example.properties ├── pom.xml └── src ├── main └── java │ └── com │ └── datastax │ └── logging │ └── appender │ └── CassandraAppender.java └── test ├── java └── com │ └── datastax │ └── logging │ └── BasicTest.java └── resources ├── cassandra.yaml ├── log4j.properties └── log4j.xml /.gitignore: -------------------------------------------------------------------------------- 1 | .project 2 | .settings 3 | .classpath 4 | target 5 | tmp 6 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 
30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 
62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 
123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. 
In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | APPENDIX: How to apply the Apache License to your work. 180 | 181 | To apply the Apache License to your work, attach the following 182 | boilerplate notice, with the fields enclosed by brackets "[]" 183 | replaced with your own identifying information. (Don't include 184 | the brackets!) The text should be enclosed in the appropriate 185 | comment syntax for the file format. 
We also recommend that a
186 | file or class name and description of purpose be included on the
187 | same "printed page" as the copyright notice for easier
188 | identification within third-party archives.
189 |
190 | Copyright [yyyy] [name of copyright owner]
191 |
192 | Licensed under the Apache License, Version 2.0 (the "License");
193 | you may not use this file except in compliance with the License.
194 | You may obtain a copy of the License at
195 |
196 | http://www.apache.org/licenses/LICENSE-2.0
197 |
198 | Unless required by applicable law or agreed to in writing, software
199 | distributed under the License is distributed on an "AS IS" BASIS,
200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
201 | See the License for the specific language governing permissions and
202 | limitations under the License.
203 |
--------------------------------------------------------------------------------
/README:
--------------------------------------------------------------------------------
1 | Cassandra appender for Log4j.
2 |
3 | Motivation and usage
4 | ----------------------
5 | Log4j is a widely used logging tool. Writing log entries to Cassandra,
6 | in addition to keeping a local FileAppender, facilitates
7 | offline log exploration and analysis.
8 |
9 | Wrapping the CassandraAppender in an asynchronous appender is encouraged
10 | to avoid slowing down the application.
11 |
12 | Adding the appender to log4j.properties
13 | ----------------------
14 |
15 | Add this appender to log4j.properties. See log4j-example.properties.
16 |
17 | # Cassandra Appender
18 | log4j.appender.CASS=com.datastax.logging.appender.CassandraAppender
19 |
20 | # Optional settings.
The values shown below are the defaults.
21 | #log4j.appender.CASS.hosts = localhost
22 | #log4j.appender.CASS.port = 9042
23 | #log4j.appender.CASS.keyspaceName = Logging
24 | #log4j.appender.CASS.columnFamily = log_entries
25 | #log4j.appender.CASS.replicationFactor = 1
26 | #log4j.appender.CASS.consistencyLevelWrite = ONE
27 | #log4j.appender.CASS.maxBufferedRows = 1
28 |
29 |
30 | How to build:
31 | ----------------------
32 |
33 | mvn install
34 |
35 | If you are a Maven user, the Maven dependency plugin can resolve the dependencies.
36 | The following is the output of mvn dependency:list:
37 |
38 | Dependencies: mvn dependency:list
39 | ----------------------
40 |
41 | commons-codec:commons-codec:jar:1.3:compile
42 | commons-lang:commons-lang:jar:2.4:compile
43 | commons-logging:commons-logging:jar:1.1.1:compile
44 | javax.servlet:servlet-api:jar:2.5:compile
45 | junit:junit:jar:4.8.1:test
46 | log4j:log4j:jar:1.2.16:compile
47 | org.apache.cassandra:cassandra-thrift:jar:1.0.7:compile
48 | org.apache.httpcomponents:httpclient:jar:4.0.1:compile
49 | org.apache.httpcomponents:httpcore:jar:4.0.1:compile
50 | org.apache.thrift:libthrift:jar:0.6.1:compile
51 | org.slf4j:slf4j-api:jar:1.6.1:compile
52 | org.slf4j:slf4j-log4j12:jar:1.6.1:compile
53 |
54 | mvn dependency:tree:
55 | ---------------------
56 |
57 | +- org.apache.cassandra:cassandra-thrift:jar:1.0.7:compile
58 | |  +- commons-lang:commons-lang:jar:2.4:compile
59 | |  +- org.slf4j:slf4j-api:jar:1.6.1:compile
60 | |  \- org.apache.thrift:libthrift:jar:0.6.1:compile
61 | |     +- javax.servlet:servlet-api:jar:2.5:compile
62 | |     \- org.apache.httpcomponents:httpclient:jar:4.0.1:compile
63 | |        +- org.apache.httpcomponents:httpcore:jar:4.0.1:compile
64 | |        +- commons-logging:commons-logging:jar:1.1.1:compile
65 | |        \- commons-codec:commons-codec:jar:1.3:compile
66 | +- log4j:log4j:jar:1.2.16:compile
67 | +- org.slf4j:slf4j-log4j12:jar:1.6.1:compile
68 |
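In log4j 1.2, org.apache.log4j.AsyncAppender can only be attached to another appender through the XML configuration format (log4j.properties has no appender-ref syntax), so wrapping the CassandraAppender asynchronously could look like the sketch below. This is illustrative only: the appender names and buffer size are not part of this project, and the repository's src/test/resources/log4j.xml is the authoritative example.

```xml
<!-- log4j.xml sketch: wrap the Cassandra appender in an AsyncAppender so
     that writes to Cassandra happen off the application's calling thread. -->
<appender name="CASS" class="com.datastax.logging.appender.CassandraAppender">
  <param name="hosts" value="localhost"/>
  <param name="port" value="9042"/>
</appender>

<appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
  <!-- number of events buffered before the calling thread blocks -->
  <param name="BufferSize" value="1024"/>
  <appender-ref ref="CASS"/>
</appender>

<root>
  <level value="INFO"/>
  <appender-ref ref="ASYNC"/>
</root>
```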
--------------------------------------------------------------------------------
/log4j-appender-INTRO.txt:
--------------------------------------------------------------------------------
1 | log4j
2 | -----
3 | Apache log4j is a Java-based logging framework, arguably one of the most
4 | popular. To provide run-time application feedback to users, developers include
5 | log messages of varying levels (from DEBUG to FATAL) in the code. For example:
6 |
7 | Logger logger = Logger.getLogger(SchemaTest.class);
8 |
9 | logger.debug("This is debug message...");
10 |
11 | logger.warn("This is warning message...");
12 |
13 | logger.info("This is info message...");
14 |
15 | try {
16 |     ...
17 | }
18 | catch(IOException ioEx) {
19 |     logger.error("IOException caught while communicating with server.", ioEx);
20 | }
21 |
22 | The messages that appear in the associated log can then be controlled via
23 | an external properties file, and hence can be changed to show more or less
24 | detail without requiring code changes. Additionally, logging can be controlled
25 | down to the granularity of Java packages and classes. So, for example, DEBUG
26 | messages from a particular class can be included in the log while messages from
27 | other classes remain at a higher level (helpful for reducing clutter and
28 | identifying messages). The 'log' is most commonly a file and/or stdout, whose
29 | format, behavior (such as file rolling), etc. are also configurable at run time.
30 | Log4j configuration is found in the file log4j.properties, which is included in
31 | the application's classpath.
32 |
33 | Below are sample log messages from starting up a Cassandra node:
34 |
35 | INFO [main] 2012-02-10 09:15:33,112 DatabaseDescriptor.java (line 495) Found table data in data directories. Consider using the CLI to define your schema.
35 | INFO [main] 2012-02-10 09:15:33,135 CommitLog.java (line 166) No commitlog files found; skipping replay 36 | INFO [main] 2012-02-10 09:15:33,150 StorageService.java (line 400) Cassandra version: 1.0.7 37 | INFO [main] 2012-02-10 09:15:33,150 StorageService.java (line 401) Thrift API version: 19.20.0 38 | INFO [main] 2012-02-10 09:15:33,150 StorageService.java (line 414) Loading persisted ring state 39 | INFO [main] 2012-02-10 09:15:33,156 StorageService.java (line 494) Starting up server gossip 40 | INFO [main] 2012-02-10 09:15:33,196 ColumnFamilyStore.java (line 704) Enqueuing flush of Memtable-LocationInfo@1311711893(126/157 serialized/live bytes, 3 ops) 41 | INFO [FlushWriter:1] 2012-02-10 09:15:33,198 Memtable.java (line 246) Writing Memtable-LocationInfo@1311711893(126/157 serialized/live bytes, 3 ops) 42 | INFO [FlushWriter:1] 2012-02-10 09:15:33,347 Memtable.java (line 283) Completed flushing /tmp/cassandra/data/system/LocationInfo-hc-1-Data.db (234 bytes) 43 | INFO [main] 2012-02-10 09:15:33,389 MessagingService.java (line 269) Starting Messaging Service on port 7000 44 | INFO [main] 2012-02-10 09:15:33,400 StorageService.java (line 603) Saved token not found. Using 0 from configuration 45 | INFO [main] 2012-02-10 09:15:33,401 ColumnFamilyStore.java (line 704) Enqueuing flush of Memtable-LocationInfo@169988667(38/47 serialized/live bytes, 2 ops) 46 | INFO [FlushWriter:1] 2012-02-10 09:15:33,401 Memtable.java (line 246) Writing Memtable-LocationInfo@169988667(38/47 serialized/live bytes, 2 ops) 47 | INFO [FlushWriter:1] 2012-02-10 09:15:33,434 Memtable.java (line 283) Completed flushing /tmp/cassandra/data/system/LocationInfo-hc-2-Data.db (148 bytes) 48 | INFO [main] 2012-02-10 09:15:33,436 StorageService.java (line 988) Node /127.0.0.1 state jump to normal 49 | INFO [main] 2012-02-10 09:15:33,437 StorageService.java (line 618) Bootstrap/Replace/Move completed! Now serving reads. 
50 | INFO [main] 2012-02-10 09:15:33,438 Mx4jTool.java (line 72) Will not load MX4J, mx4j-tools.jar is not in the classpath
51 | INFO [main] 2012-02-10 09:15:33,505 DseDaemon.java (line 215) Binding thrift service to /0.0.0.0:9160
52 |
53 | Appenders
54 | ---------
55 | Log4j Appenders provide the means by which messages are written to a particular
56 | destination type (e.g. file). Log messages are derived from log events,
57 | formatted as specified in the log4j.properties file (assuming the respective
58 | Appender honors formatting). In the example above, each message shows the level,
59 | the thread name, the message timestamp, the source code file (if available) and
60 | line number, and the log message.
61 |
62 | CassandraAppender
63 | -----------------
64 | The CassandraAppender provides the capability to store log4j messages in a
65 | Cassandra column family. Once the cassandra-log4j-appender and cassandra-thrift
66 | jar files have been included in the application classpath, enabling this
67 | Appender is as simple as adding it to the log4j.properties file:
68 |
69 | # Cassandra Appender
70 | log4j.appender.CASS=com.datastax.logging.appender.CassandraAppender
71 | log4j.appender.CASS.hosts = 127.0.0.1
72 | log4j.appender.CASS.port = 9160
73 | #log4j.appender.CASS.keyspaceName = "Logging"
74 | #log4j.appender.CASS.columnFamily = "log_entries"
75 | #log4j.appender.CASS.replicationFactor = 1
76 | #log4j.appender.CASS.consistencyLevelWrite = "ONE"
77 | #log4j.appender.CASS.maxBufferedRows = 256
78 |
79 | log4j.logger.com.foo.bar= INFO, CASS
80 |
81 | The first non-comment line specifies the CassandraAppender class, assigning it
82 | the alias CASS, through which it is referenced in the last line. This last line
83 | indicates that all log messages of level INFO and higher generated from
84 | the classes and sub-packages of the package com.foo.bar are to be sent to the
85 | CassandraAppender.
The lines in between specify configuration parameters to be 86 | consumed by the CassandraAppender (those preceded with a hash sign are commented 87 | out, included for reference and to demonstrate the default values). So by 88 | default, the CassandraAppender records log messages in the Column Family 89 | "log_entries" in the "Logging" keyspace. The definition of this 90 | Column Family is as follows: 91 | 92 | cqlsh:Logging> describe columnfamily log_entries; 93 | 94 | CREATE COLUMNFAMILY log_entries ( 95 | KEY uuid PRIMARY KEY, 96 | app_start_time bigint, 97 | app_name text, 98 | host_name text, 99 | host_ip text, 100 | class_name text, 101 | file_name text, 102 | level text, 103 | line_number text, 104 | log_timestamp bigint, 105 | logger_class_name text, 106 | message text, 107 | method_name text, 108 | ndc text, 109 | thread_name text, 110 | throwable_str_rep text 111 | ) WITH 112 | comment='' AND 113 | comparator=text AND 114 | row_cache_provider='ConcurrentLinkedHashCacheProvider' AND 115 | key_cache_size=200000.000000 AND 116 | row_cache_size=0.000000 AND 117 | read_repair_chance=1.000000 AND 118 | gc_grace_seconds=864000 AND 119 | default_validation=text AND 120 | min_compaction_threshold=4 AND 121 | max_compaction_threshold=32 AND 122 | row_cache_save_period_in_seconds=0 AND 123 | key_cache_save_period_in_seconds=14400 AND 124 | replication_on_write=True; 125 | 126 | If they do not exist, the CassandraAppender will create the Keyspace and/or the 127 | Column Family upon initialization. 128 | 129 | Example 130 | ------- 131 | Consider the following log snippet: 132 | 133 | 09:20:55,470 WARN SchemaTest:68 - This is warn message #163 134 | 09:20:55,470 INFO SchemaTest:71 - This is info message #489 135 | 09:20:55,471 ERROR SchemaTest:59 - Test exception. 136 | java.io.IOException: Danger Will Robinson, Danger! 
137 | at com.datastax.logging.SchemaTest.testSavedEntries(SchemaTest.java:58) 138 | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 139 | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 140 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 141 | at java.lang.reflect.Method.invoke(Method.java:597) 142 | at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) 143 | at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) 144 | at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) 145 | at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) 146 | at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) 147 | at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) 148 | at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) 149 | at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) 150 | at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) 151 | at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) 152 | at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) 153 | at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) 154 | at org.junit.runners.ParentRunner.run(ParentRunner.java:236) 155 | at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:35) 156 | at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:115) 157 | at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:97) 158 | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 159 | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 160 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 161 | at 
java.lang.reflect.Method.invoke(Method.java:597) 162 | at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103) 163 | at $Proxy0.invoke(Unknown Source) 164 | at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150) 165 | at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91) 166 | at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69) 167 | 168 | Note that the ERROR entry above includes the stack trace associated with an 169 | Exception. The associated rows in the log_entries Column Family appear as 170 | follows (as queried via cqlsh): 171 | 172 | KEY,eea1256e-db24-4cef-800b-843b3b2fb72c | app_start_time,1328894454774 | level,WARN | log_timestamp,1328894455391 | logger_class_name,org.apache.log4j.Category | message,This is warn message #163 | thread_name,main | 173 | 174 | KEY,f7283a71-32a2-43cf-888a-0c1d3328548d | app_start_time,1328894454774 | level,INFO | log_timestamp,1328894455064 | logger_class_name,org.apache.log4j.Category | message,This is info message #489 | thread_name,main | 175 | 176 | KEY,37ba6b9c-9fd5-4dba-8fbc-51c1696bd235 | app_start_time,1328894454774 | level,ERROR | log_timestamp,1328894455392 | logger_class_name,org.apache.log4j.Category | message,Test exception. | thread_name,main | throwable_str_rep,java.io.IOException: Danger Will Robinson, Danger! 
at com.datastax.logging.SchemaTest.testSavedEntries(SchemaTest.java:58) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:35) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:115) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:97) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103) at 
$Proxy0.invoke(Unknown Source) at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150) at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69) | 177 | 178 | Not all columns have values since the set of values in logging events varies 179 | depending on the manner in which the event was generated (i.e. the particular 180 | logging method that was used in the code) and the configuration. 181 | 182 | Storing logging information in Cassandra provides the capability to do in-depth 183 | analysis via the DataStax Enterprise platform, e.g. via Hadoop and Solr. 184 | -------------------------------------------------------------------------------- /log4j-example.properties: -------------------------------------------------------------------------------- 1 | # Licensed to the Apache Software Foundation (ASF) under one 2 | # or more contributor license agreements. See the NOTICE file 3 | # distributed with this work for additional information 4 | # regarding copyright ownership. The ASF licenses this file 5 | # to you under the Apache License, Version 2.0 (the 6 | # "License"); you may not use this file except in compliance 7 | # with the License. You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | # for production, you should probably set the root to INFO 18 | # and the pattern to %c instead of %l. (%l is slower.) 
19 | 20 | # output messages into a rolling log file as well as stdout 21 | log4j.rootLogger=INFO,stdout 22 | 23 | # stdout 24 | log4j.appender.stdout=org.apache.log4j.ConsoleAppender 25 | log4j.appender.stdout.layout=org.apache.log4j.PatternLayout 26 | log4j.appender.stdout.layout.ConversionPattern=%-5p [%t]: %m%n 27 | 28 | # Cassandra Appender & options 29 | log4j.appender.CASS=com.datastax.logging.appender.CassandraAppender 30 | 31 | #the following settings are the default values 32 | #log4j.appender.CASS.hosts = localhost 33 | #log4j.appender.CASS.port = 9042 34 | #log4j.appender.CASS.keyspaceName = Logging 35 | #log4j.appender.CASS.columnFamily = log_entries 36 | #log4j.appender.CASS.replicationFactor = 1 37 | #log4j.appender.CASS.consistencyLevelWrite = ONE 38 | 39 | # Enable plaintext authentication (PasswordAuthenticator in cassandra.yaml) 40 | #log4j.appender.CASS.username = "cassandra" 41 | #log4j.appender.CASS.password = "cassandra" 42 | 43 | # Enable SSL with default keystore and truststore. 
44 | #log4j.appender.CASS.sslOptions = { }
45 |
46 | # Enable SSL with custom keystore and truststore
47 | #log4j.appender.CASS.sslOptions = { \
48 | #  "ssl.keystore":"/usr/local/dse/resources/dse/conf/.keystore", \
49 | #  "ssl.keystore.password":"cassandra", \
50 | #  "ssl.truststore":"/usr/local/dse/resources/dse/conf/.keystore", \
51 | #  "ssl.truststore.password":"cassandra" \
52 | #}
53 |
54 | # Custom authProvider
55 | #log4j.appender.CASS.authProviderOptions = { \
56 | #  "auth.class":"org.example.myAuthProvider", \
57 | #  "auth.options":"myOptionStringFormattedSomehow" \
58 | #}
59 |
--------------------------------------------------------------------------------
/pom.xml:
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<!-- The XML element tags here are reconstructed: the dump this file came from
     stripped all tags, leaving only element values. Every value is from the
     original; the mapping of the three compiler booleans to
     optimize/showDeprecation/showWarnings is a best guess. -->
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.datastax.logging</groupId>
  <artifactId>cassandra-log4j-appender</artifactId>
  <packaging>jar</packaging>
  <version>3.1.0</version>
  <name>cassandra-log4j-appender</name>
  <url>https://github.com/datastax/cassandra-log4j-appender</url>

  <licenses>
    <license>
      <name>The Apache Software License, Version 2.0</name>
      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
    </license>
  </licenses>

  <scm>
    <connection>scm:git:git@github.com:datastax/cassandra-log4j-appender.git</connection>
    <developerConnection>scm:git:git@github.com:datastax/cassandra-log4j-appender.git</developerConnection>
    <url>https://github.com/datastax/cassandra-log4j-appender</url>
  </scm>

  <distributionManagement>
    <repository>
      <id>datastax-public-release</id>
      <url>http://mvn.riptano.com/content/repositories/public-releases</url>
    </repository>
    <snapshotRepository>
      <id>datastax-public-snapshot</id>
      <url>http://mvn.riptano.com/content/repositories/public-snapshots</url>
    </snapshotRepository>
  </distributionManagement>

  <dependencies>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.6.1</version>
    </dependency>

    <dependency>
      <groupId>com.datastax.cassandra</groupId>
      <artifactId>cassandra-driver-core</artifactId>
      <version>2.1.0</version>
    </dependency>

    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.8.1</version>
      <scope>test</scope>
    </dependency>

    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
      <version>1.9.13</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
          <optimize>true</optimize>
          <showDeprecation>true</showDeprecation>
          <showWarnings>true</showWarnings>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
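To consume the published artifact from another Maven project, the coordinates declared in the pom (groupId com.datastax.logging, artifactId cassandra-log4j-appender, version 3.1.0) translate into a dependency declaration such as the following sketch; artifact availability depends on the DataStax public repository configured under distributionManagement being reachable.

```xml
<!-- Consumer-side dependency on the appender artifact built by this project. -->
<dependency>
  <groupId>com.datastax.logging</groupId>
  <artifactId>cassandra-log4j-appender</artifactId>
  <version>3.1.0</version>
</dependency>
```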
-------------------------------------------------------------------------------- /src/main/java/com/datastax/logging/appender/CassandraAppender.java: -------------------------------------------------------------------------------- 1 | package com.datastax.logging.appender; 2 | 3 | import java.io.FileInputStream; 4 | import java.io.IOException; 5 | import java.net.InetAddress; 6 | import java.security.KeyStore; 7 | import java.security.SecureRandom; 8 | import java.util.Map; 9 | import java.util.TreeMap; 10 | import java.util.UUID; 11 | 12 | import com.datastax.driver.core.*; 13 | import org.apache.log4j.AppenderSkeleton; 14 | import org.apache.log4j.Level; 15 | import org.apache.log4j.LogManager; 16 | import org.apache.log4j.helpers.LogLog; 17 | import org.apache.log4j.spi.LocationInfo; 18 | import org.apache.log4j.spi.LoggingEvent; 19 | 20 | import org.codehaus.jackson.map.ObjectMapper; 21 | 22 | import com.datastax.driver.core.policies.RoundRobinPolicy; 23 | 24 | import com.google.common.base.Joiner; 25 | 26 | import javax.net.ssl.KeyManagerFactory; 27 | import javax.net.ssl.SSLContext; 28 | import javax.net.ssl.TrustManagerFactory; 29 | 30 | /** 31 | * Log4j appender that stores log entries in Cassandra. 32 | * 33 | */ 34 | public class CassandraAppender extends AppenderSkeleton 35 | { 36 | // Cassandra configuration 37 | private String hosts = "localhost"; 38 | private int port = 9042; // for the binary protocol; 9160 is the default for Thrift 39 | private String username = ""; 40 | private String password = ""; 41 | private static final String ip = getIP(); 42 | private static final String hostname = getHostName(); 43 | 44 | // Encryption.
sslOptions and authProviderOptions are JSON maps requiring Jackson 45 | private static final ObjectMapper jsonMapper = new ObjectMapper(); 46 | private Map<String, String> sslOptions = null; 47 | private Map<String, String> authProviderOptions = null; 48 | 49 | // Keyspace/ColumnFamily information 50 | private String keyspaceName = "Logging"; 51 | private String columnFamily = "log_entries"; 52 | private String appName = "default"; 53 | private String replication = "{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"; 54 | private ConsistencyLevel consistencyLevelWrite = ConsistencyLevel.ONE; 55 | 56 | // CF column names 57 | public static final String ID = "key"; 58 | public static final String HOST_IP = "host_ip"; 59 | public static final String HOST_NAME = "host_name"; 60 | public static final String APP_NAME = "app_name"; 61 | public static final String LOGGER_NAME = "logger_name"; 62 | public static final String LEVEL = "level"; 63 | public static final String CLASS_NAME = "class_name"; 64 | public static final String FILE_NAME = "file_name"; 65 | public static final String LINE_NUMBER = "line_number"; 66 | public static final String METHOD_NAME = "method_name"; 67 | public static final String MESSAGE = "message"; 68 | public static final String NDC = "ndc"; 69 | public static final String APP_START_TIME = "app_start_time"; 70 | public static final String THREAD_NAME = "thread_name"; 71 | public static final String THROWABLE_STR = "throwable_str_rep"; 72 | public static final String TIMESTAMP = "log_timestamp"; 73 | 74 | // session state 75 | private PreparedStatement statement; 76 | private volatile boolean initialized = false; 77 | private volatile boolean initializationFailed = false; 78 | private Cluster cluster; 79 | private Session session; 80 | 81 | public CassandraAppender() 82 | { 83 | LogLog.debug("Creating CassandraAppender"); 84 | } 85 | 86 | /** 87 | * {@inheritDoc} 88 | */ 89 | @Override 90 | protected void append(LoggingEvent event) 91 | { 92 | // We have to defer
initialization of the client because TTransportFactory 93 | // references some Hadoop classes which can't safely be used until the logging 94 | // infrastructure is fully set up. If we attempt to initialize the client 95 | // earlier, it causes NPEs from the constructor of org.apache.hadoop.conf.Configuration. 96 | if (!initialized) 97 | initClient(); 98 | if (!initializationFailed) 99 | createAndExecuteQuery(event); 100 | } 101 | 102 | // Connect to Cassandra, then set up the schema and prepared statement 103 | private synchronized void initClient() 104 | { 105 | // We should be able to go without an Atomic variable here. There are two potential problems: 106 | // 1. Multiple threads read initialized=false and call initClient. However, the method is 107 | // synchronized so only one will get the lock first, and the others will drop out here. 108 | // 2. One thread reads initialized=true before initClient finishes. This also should not 109 | // happen as the lock should include a memory barrier. 110 | if (initialized || initializationFailed) 111 | return; 112 | 113 | // Just while we initialise the client, we must temporarily 114 | // disable all logging or else we get into an infinite loop 115 | Level globalThreshold = LogManager.getLoggerRepository().getThreshold(); 116 | LogManager.getLoggerRepository().setThreshold(Level.OFF); 117 | 118 | try 119 | { 120 | Cluster.Builder builder = Cluster.builder() 121 | .addContactPoints(hosts.split(",\\s*")) 122 | .withPort(port) 123 | .withLoadBalancingPolicy(new RoundRobinPolicy()); 124 | 125 | // Kerberos provides authentication anyway, so a username and password are superfluous. SSL 126 | // is compatible with either.
127 | boolean passwordAuthentication = !password.equals("") || !username.equals(""); 128 | if (authProviderOptions != null && passwordAuthentication) 129 | throw new IllegalArgumentException("Authentication via both Cassandra usernames and Kerberos " + 130 | "requested."); 131 | 132 | // Encryption 133 | if (authProviderOptions != null) 134 | builder = builder.withAuthProvider(getAuthProvider()); 135 | if (sslOptions != null) 136 | builder = builder.withSSL(getSslOptions()); 137 | if (passwordAuthentication) 138 | builder = builder.withCredentials(username, password); 139 | 140 | cluster = builder.build(); 141 | session = cluster.connect(); 142 | setupSchema(); 143 | setupStatement(); 144 | } 145 | catch (Exception e) 146 | { 147 | LogLog.error("Error ", e); 148 | errorHandler.error("Error setting up cassandra logging schema: " + e); 149 | 150 | //If the user misconfigures the port or something, don't keep failing. 151 | initializationFailed = true; 152 | } 153 | finally 154 | { 155 | //Always reenable logging 156 | LogManager.getLoggerRepository().setThreshold(globalThreshold); 157 | initialized = true; 158 | } 159 | } 160 | 161 | 162 | /** 163 | * Create Keyspace and CF if they do not exist. 
164 | */ 165 | private void setupSchema() throws IOException 166 | { 167 | //Create keyspace if necessary 168 | String ksQuery = String.format("CREATE KEYSPACE IF NOT EXISTS \"%s\" WITH REPLICATION = %s;", 169 | keyspaceName, replication); 170 | session.execute(ksQuery); 171 | 172 | //Create table if necessary 173 | String cfQuery = String.format("CREATE TABLE IF NOT EXISTS \"%s\".\"%s\" (%s UUID PRIMARY KEY, " + 174 | "%s text, %s bigint, %s text, %s text, %s text, %s text, %s text," + 175 | "%s text, %s text, %s bigint, %s text, %s text, %s text, %s text," + 176 | "%s text);", 177 | keyspaceName, columnFamily, ID, APP_NAME, APP_START_TIME, CLASS_NAME, 178 | FILE_NAME, HOST_IP, HOST_NAME, LEVEL, LINE_NUMBER, METHOD_NAME, 179 | TIMESTAMP, LOGGER_NAME, MESSAGE, NDC, THREAD_NAME, THROWABLE_STR); 180 | session.execute(cfQuery); 181 | } 182 | 183 | /** 184 | * Setup and preprocess our insert query, so that we can just bind values and send them over the binary protocol 185 | */ 186 | private void setupStatement() 187 | { 188 | //Preprocess our append statement 189 | String insertQuery = String.format("INSERT INTO \"%s\".\"%s\" " + 190 | "(%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s) " + 191 | "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?); ", 192 | keyspaceName, columnFamily, ID, APP_NAME, HOST_IP, HOST_NAME, LOGGER_NAME, 193 | LEVEL, CLASS_NAME, FILE_NAME, LINE_NUMBER, METHOD_NAME, MESSAGE, NDC, 194 | APP_START_TIME, THREAD_NAME, THROWABLE_STR, TIMESTAMP); 195 | 196 | statement = session.prepare(insertQuery); 197 | statement.setConsistencyLevel(ConsistencyLevel.valueOf(consistencyLevelWrite.toString())); 198 | } 199 | 200 | /** 201 | * Send one logging event to Cassandra. 
We just bind the new values into the preprocessed query 202 | * built by setupStatement 203 | */ 204 | private void createAndExecuteQuery(LoggingEvent event) 205 | { 206 | BoundStatement bound = new BoundStatement(statement); 207 | 208 | // A primary key combination of timestamp/hostname/threadname should be unique as long as the thread names 209 | // are set, but would not be backwards compatible. Do we care? 210 | bound.setUUID(0, UUID.randomUUID()); 211 | 212 | bound.setString(1, appName); 213 | bound.setString(2, ip); 214 | bound.setString(3, hostname); 215 | bound.setString(4, event.getLoggerName()); 216 | bound.setString(5, event.getLevel().toString()); 217 | 218 | LocationInfo locInfo = event.getLocationInformation(); 219 | if (locInfo != null) { 220 | bound.setString(6, locInfo.getClassName()); 221 | bound.setString(7, locInfo.getFileName()); 222 | bound.setString(8, locInfo.getLineNumber()); 223 | bound.setString(9, locInfo.getMethodName()); 224 | } 225 | 226 | bound.setString(10, event.getRenderedMessage()); 227 | bound.setString(11, event.getNDC()); 228 | bound.setLong(12, LoggingEvent.getStartTime()); 229 | bound.setString(13, event.getThreadName()); 230 | 231 | String[] throwableStrs = event.getThrowableStrRep(); 232 | bound.setString(14, throwableStrs == null ? null : Joiner.on(", ").join(throwableStrs)); 233 | 234 | bound.setLong(15, event.getTimeStamp()); 235 | session.execute(bound); 236 | } 237 | 238 | /** 239 | * {@inheritDoc} 240 | */ 241 | public void close() 242 | { 243 | session.closeAsync(); 244 | cluster.closeAsync(); 245 | } 246 | 247 | /** 248 | * {@inheritDoc} 249 | * 250 | * @see org.apache.log4j.Appender#requiresLayout() 251 | */ 252 | public boolean requiresLayout() 253 | { 254 | return false; 255 | } 256 | 257 | /** 258 | * Called once all the options have been set. Client initialization is 259 | * deferred until the first append, so there is nothing to do here.
260 | */ 261 | public void activateOptions() 262 | { 263 | // reset(); 264 | } 265 | 266 | // 267 | //Boilerplate from here on out 268 | // 269 | 270 | public String getKeyspaceName() 271 | { 272 | return keyspaceName; 273 | } 274 | 275 | public void setKeyspaceName(String keyspaceName) 276 | { 277 | this.keyspaceName = keyspaceName; 278 | } 279 | 280 | public String getHosts() 281 | { 282 | return hosts; 283 | } 284 | 285 | public void setHosts(String hosts) 286 | { 287 | this.hosts = hosts; 288 | } 289 | 290 | public int getPort() 291 | { 292 | return port; 293 | } 294 | 295 | public void setPort(int port) 296 | { 297 | this.port = port; 298 | } 299 | 300 | public String getUsername() 301 | { 302 | return username; 303 | } 304 | 305 | public void setUsername(String username) 306 | { 307 | this.username = unescape(username); 308 | } 309 | 310 | public String getPassword() 311 | { 312 | return password; 313 | } 314 | 315 | public void setPassword(String password) 316 | { 317 | this.password = unescape(password); 318 | } 319 | 320 | public String getColumnFamily() 321 | { 322 | return columnFamily; 323 | } 324 | 325 | public void setColumnFamily(String columnFamily) 326 | { 327 | this.columnFamily = columnFamily; 328 | } 329 | 330 | public String getReplication() 331 | { 332 | return replication; 333 | } 334 | 335 | public void setReplication(String strategy) 336 | { 337 | replication = unescape(strategy); 338 | } 339 | 340 | private Map parseJsonMap(String options, String type) throws Exception 341 | { 342 | if (options == null) 343 | throw new IllegalArgumentException(type + "Options can't be null."); 344 | 345 | return jsonMapper.readValue(unescape(options), new TreeMap().getClass()); 346 | } 347 | 348 | public void setAuthProviderOptions(String newOptions) throws Exception 349 | { 350 | authProviderOptions = parseJsonMap(newOptions, "authProvider"); 351 | } 352 | 353 | public void setSslOptions(String newOptions) throws Exception 354 | { 355 | sslOptions = 
parseJsonMap(newOptions, "Ssl"); 356 | } 357 | 358 | public String getConsistencyLevelWrite() 359 | { 360 | return consistencyLevelWrite.toString(); 361 | } 362 | 363 | public void setConsistencyLevelWrite(String consistencyLevelWrite) 364 | { 365 | try { 366 | this.consistencyLevelWrite = ConsistencyLevel.valueOf(unescape(consistencyLevelWrite)); 367 | } 368 | catch (IllegalArgumentException e) { 369 | throw new IllegalArgumentException("Consistency level " + consistencyLevelWrite 370 | + " wasn't found. Available levels: " + Joiner.on(", ").join(ConsistencyLevel.values())); 371 | } 372 | } 373 | 374 | 375 | public String getAppName() 376 | { 377 | return appName; 378 | } 379 | 380 | public void setAppName(String appName) 381 | { 382 | this.appName = appName; 383 | } 384 | 385 | private static String getHostName() 386 | { 387 | String hostname = "unknown"; 388 | 389 | try { 390 | InetAddress addr = InetAddress.getLocalHost(); 391 | hostname = addr.getHostName(); 392 | } catch (Throwable t) { 393 | 394 | } 395 | return hostname; 396 | } 397 | 398 | private static String getIP() 399 | { 400 | String ip = "unknown"; 401 | 402 | try { 403 | InetAddress addr = InetAddress.getLocalHost(); 404 | ip = addr.getHostAddress(); 405 | } catch (Throwable t) { 406 | 407 | } 408 | return ip; 409 | } 410 | 411 | /** 412 | * Strips leading and trailing '"' characters 413 | * 414 | * @param b 415 | * - string to unescape 416 | * @return String - unescaped string 417 | */ 418 | private static String unescape(String b) 419 | { 420 | if (b.charAt(0) == '\"' && b.charAt(b.length() - 1) == '\"') 421 | b = b.substring(1, b.length() - 1); 422 | return b; 423 | } 424 | 425 | // Create an SSLContext (a container for a keystore and a truststore and their associated options) 426 | // Assumes sslOptions map is not null 427 | private SSLOptions getSslOptions() throws Exception 428 | { 429 | // init trust store 430 | TrustManagerFactory tmf = null; 431 | String truststorePath =
sslOptions.get("ssl.truststore"); 432 | String truststorePassword = sslOptions.get("ssl.truststore.password"); 433 | if (truststorePath != null && truststorePassword != null) 434 | { 435 | FileInputStream tsf = new FileInputStream(truststorePath); 436 | KeyStore ts = KeyStore.getInstance("JKS"); 437 | ts.load(tsf, truststorePassword.toCharArray()); 438 | tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm()); 439 | tmf.init(ts); 440 | } 441 | 442 | // init key store 443 | KeyManagerFactory kmf = null; 444 | String keystorePath = sslOptions.get("ssl.keystore"); 445 | String keystorePassword = sslOptions.get("ssl.keystore.password"); 446 | if (keystorePath != null && keystorePassword != null) 447 | { 448 | FileInputStream ksf = new FileInputStream(keystorePath); 449 | KeyStore ks = KeyStore.getInstance("JKS"); 450 | ks.load(ksf, keystorePassword.toCharArray()); 451 | kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm()); 452 | kmf.init(ks, keystorePassword.toCharArray()); 453 | 454 | } 455 | 456 | // init cipher suites 457 | String[] ciphers = SSLOptions.DEFAULT_SSL_CIPHER_SUITES; 458 | if (sslOptions.containsKey("ssl.ciphersuites")) 459 | ciphers = sslOptions.get("ssl.ciphersuites").split(",\\s*"); 460 | 461 | SSLContext ctx = SSLContext.getInstance("SSL"); 462 | ctx.init(kmf == null ? null : kmf.getKeyManagers(), 463 | tmf == null ? null : tmf.getTrustManagers(), 464 | new SecureRandom()); 465 | 466 | return new SSLOptions(ctx, ciphers); 467 | } 468 | 469 | // Load a custom AuthProvider class dynamically.
470 | public AuthProvider getAuthProvider() throws Exception 471 | { 472 | ClassLoader cl = ClassLoader.getSystemClassLoader(); 473 | 474 | if(!authProviderOptions.containsKey("auth.class")) 475 | throw new IllegalArgumentException("authProvider map does not include auth.class."); 476 | Class<?> dap = cl.loadClass(authProviderOptions.get("auth.class")); 477 | 478 | // Perhaps this should be a factory, but it seems easy enough to just have a single string parameter 479 | // which can be encoded however, e.g. another JSON map 480 | if(authProviderOptions.containsKey("auth.options")) 481 | return (AuthProvider)dap.getConstructor(String.class).newInstance(authProviderOptions.get("auth.options")); 482 | else 483 | return (AuthProvider)dap.newInstance(); 484 | } 485 | } 486 | 487 | -------------------------------------------------------------------------------- /src/test/java/com/datastax/logging/BasicTest.java: -------------------------------------------------------------------------------- 1 | package com.datastax.logging; 2 | 3 | import org.apache.log4j.Category; 4 | import org.apache.log4j.Level; 5 | import org.apache.log4j.Logger; 6 | import org.apache.log4j.Priority; 7 | import org.apache.log4j.spi.LoggingEvent; 8 | import org.junit.Before; 9 | import org.junit.Test; 10 | 11 | import com.datastax.logging.appender.CassandraAppender; 12 | 13 | /** 14 | * Basic test for setting appender properties. 15 | */ 16 | public class BasicTest 17 | { 18 | @Before 19 | public void setUp() throws Exception { 20 | // Programmatically set up our appender.
21 | Logger rootLogger = Logger.getRootLogger(); 22 | Logger pkgLogger = rootLogger.getLoggerRepository().getLogger("com.datastax.logging"); 23 | pkgLogger.setLevel(Level.INFO); 24 | CassandraAppender cassApp = new CassandraAppender(); 25 | cassApp.setPort(9042); 26 | cassApp.setAppName("unittest"); 27 | cassApp.activateOptions(); 28 | cassApp.setConsistencyLevelWrite("QUORUM"); 29 | pkgLogger.addAppender(cassApp); 30 | } 31 | 32 | @Test 33 | public void testSettingCorrectConsistencyLevels() 34 | { 35 | CassandraAppender cassApp = new CassandraAppender(); 36 | cassApp.setConsistencyLevelWrite("QUORUM"); 37 | } 38 | 39 | @Test(expected = IllegalArgumentException.class) 40 | public void testSettingWrongConsistencyLevel() 41 | { 42 | new CassandraAppender().setConsistencyLevelWrite("QIORUM"); 43 | } 44 | 45 | @Test 46 | public void testThrowableSuccess() throws Exception 47 | { 48 | CassandraAppender appender = new CassandraAppender(); 49 | LoggingEvent event = new LoggingEvent(BasicTest.class.getName(), 50 | Category.getInstance(BasicTest.class), 51 | Priority.WARN, 52 | "test 12", 53 | new Exception("boom")); 54 | appender.doAppend(event); 55 | } 56 | 57 | @Test 58 | public void testNoThrowableSuccess() throws Exception 59 | { 60 | CassandraAppender appender = new CassandraAppender(); 61 | LoggingEvent event = new LoggingEvent(BasicTest.class.getName(), 62 | Category.getInstance(BasicTest.class), 63 | Priority.WARN, 64 | "test 12", 65 | null); 66 | appender.doAppend(event); 67 | } 68 | } 69 | -------------------------------------------------------------------------------- /src/test/resources/cassandra.yaml: -------------------------------------------------------------------------------- 1 | # Cassandra storage config YAML 2 | 3 | #NOTE !!!!!!!! NOTE 4 | # See http://wiki.apache.org/cassandra/StorageConfiguration for 5 | # full explanations of configuration directives 6 | #NOTE !!!!!!!! NOTE 7 | 8 | # The name of the cluster. 
This is mainly used to prevent machines in 9 | # one logical cluster from joining another. 10 | cluster_name: 'Test Cluster' 11 | 12 | # You should always specify InitialToken when setting up a production 13 | # cluster for the first time, and often when adding capacity later. 14 | # The principle is that each node should be given an equal slice of 15 | # the token ring; see http://wiki.apache.org/cassandra/Operations 16 | # for more details. 17 | # 18 | # If blank, Cassandra will request a token bisecting the range of 19 | # the heaviest-loaded existing node. If there is no load information 20 | # available, such as is the case with a new cluster, it will pick 21 | # a random token, which will lead to hot spots. 22 | initial_token: 23 | 24 | # Set to true to make new [non-seed] nodes automatically migrate data 25 | # to themselves from the pre-existing nodes in the cluster. Defaults 26 | # to false because you can only bootstrap N machines at a time from 27 | # an existing cluster of N, so if you are bringing up a cluster of 28 | # 10 machines with 3 seeds you would have to do it in stages. Leaving 29 | # this off for the initial start simplifies that. 30 | auto_bootstrap: false 31 | 32 | # See http://wiki.apache.org/cassandra/HintedHandoff 33 | hinted_handoff_enabled: true 34 | 35 | # authentication backend, implementing IAuthenticator; used to identify users 36 | authenticator: org.apache.cassandra.auth.AllowAllAuthenticator 37 | 38 | # authorization backend, implementing IAuthority; used to limit access/provide permissions 39 | authority: org.apache.cassandra.auth.AllowAllAuthority 40 | 41 | # any IPartitioner may be used, including your own as long as it is on 42 | # the classpath. Out of the box, Cassandra provides 43 | # org.apache.cassandra.dht.RandomPartitioner 44 | # org.apache.cassandra.dht.ByteOrderedPartitioner, 45 | # org.apache.cassandra.dht.OrderPreservingPartitioner, and 46 | # org.apache.cassandra.dht.CollatingOrderPreservingPartitioner. 
47 | # (CollatingOPP collates according to EN,US rules, not naive byte 48 | # ordering. Use this as an example if you need locale-aware collation.) 49 | partitioner: org.apache.cassandra.dht.RandomPartitioner 50 | 51 | # directories where Cassandra should store data on disk. 52 | data_file_directories: 53 | - tmp/var/lib/cassandra/data 54 | 55 | # commit log 56 | commitlog_directory: tmp/var/lib/cassandra/commitlog 57 | 58 | # saved caches 59 | saved_caches_directory: tmp/var/lib/cassandra/saved_caches 60 | 61 | 62 | # commitlog_sync may be either "periodic" or "batch." 63 | # When in batch mode, Cassandra won't ack writes until the commit log 64 | # has been fsynced to disk. It will wait up to 65 | # CommitLogSyncBatchWindowInMS milliseconds for other writes, before 66 | # performing the sync. 67 | commitlog_sync: periodic 68 | 69 | # the other option is "timed," where writes may be acked immediately 70 | # and the CommitLog is simply synced every commitlog_sync_period_in_ms 71 | # milliseconds. 72 | commitlog_sync_period_in_ms: 10000 73 | 74 | # Addresses of hosts that are deemed contact points. 75 | # Cassandra nodes use this list of hosts to find each other and learn 76 | # the topology of the ring. You must change this if you are running 77 | # multiple nodes! 78 | seed_provider: 79 | - class_name: org.apache.cassandra.locator.SimpleSeedProvider 80 | parameters: 81 | - seeds: "127.0.0.1" 82 | # Access mode. mmapped i/o is substantially faster, but only practical on 83 | # a 64bit machine (which notably does not include EC2 "small" instances) 84 | # or relatively small datasets. "auto", the safe choice, will enable 85 | # mmapping on a 64bit JVM. Other values are "mmap", "mmap_index_only" 86 | # (which may allow you to get part of the benefits of mmap on a 32bit 87 | # machine by mmapping only index files) and "standard". 88 | # (The buffer size settings that follow only apply to standard, 89 | # non-mmapped i/o.)
90 | disk_access_mode: auto 91 | 92 | # Unlike most systems, in Cassandra writes are faster than reads, so 93 | # you can afford more of those in parallel. A good rule of thumb is 2 94 | # concurrent reads per processor core. Increase ConcurrentWrites to 95 | # the number of clients writing at once if you enable CommitLogSync + 96 | # CommitLogSyncDelay. 97 | concurrent_reads: 2 98 | concurrent_writes: 4 99 | 100 | # This sets the amount of memtable flush writer threads. These will 101 | # be blocked by disk io, and each one will hold a memtable in memory 102 | # while blocked. If you have a large heap and many data directories, 103 | # you can increase this value for better flush performance. 104 | # By default this will be set to the amount of data directories defined. 105 | #memtable_flush_writers: 1 106 | 107 | # Buffer size to use when performing contiguous column slices. 108 | # Increase this to the size of the column slices you typically perform 109 | sliced_buffer_size_in_kb: 64 110 | 111 | # TCP port, for commands and data 112 | storage_port: 7000 113 | 114 | # Address to bind to and tell other Cassandra nodes to connect to. You 115 | # _must_ change this if you want multiple nodes to be able to 116 | # communicate! 117 | # 118 | # Leaving it blank leaves it up to InetAddress.getLocalHost(). This 119 | # will always do the Right Thing *if* the node is properly configured 120 | # (hostname, name resolution, etc), and the Right Thing is to use the 121 | # address associated with the hostname (it might not be). 122 | # 123 | # Setting this to 0.0.0.0 is always wrong. 124 | listen_address: 127.0.0.1 125 | 126 | # The address to bind the Thrift RPC service to -- clients connect 127 | # here. Unlike ListenAddress above, you *can* specify 0.0.0.0 here if 128 | # you want Thrift to listen on all interfaces. 129 | # 130 | # Leaving this blank has the same effect it does for ListenAddress, 131 | # (i.e. it will be based on the configured hostname of the node).
132 | rpc_address: localhost 133 | # port for Thrift to listen for clients on 134 | rpc_port: 9170 135 | 136 | # enable or disable keepalive on rpc connections 137 | rpc_keepalive: true 138 | 139 | # uncomment to set socket buffer sizes on rpc connections 140 | # rpc_send_buff_size_in_bytes: 141 | # rpc_recv_buff_size_in_bytes: 142 | 143 | # Frame size for thrift (maximum field length). 144 | # 0 disables TFramedTransport in favor of TSocket. This option 145 | # is deprecated; we strongly recommend using Framed mode. 146 | thrift_framed_transport_size_in_mb: 15 147 | 148 | # The max length of a thrift message, including all fields and 149 | # internal thrift overhead. 150 | thrift_max_message_length_in_mb: 16 151 | 152 | # Whether or not to take a snapshot before each compaction. Be 153 | # careful using this option, since Cassandra won't clean up the 154 | # snapshots for you. Mostly useful if you're paranoid when there 155 | # is a data format change. 156 | snapshot_before_compaction: false 157 | 158 | # change this to increase the compaction thread's priority. In java, 1 is the 159 | # lowest priority and that is our default. 160 | # compaction_thread_priority: 1 161 | 162 | # Add column indexes to a row after its contents reach this size. 163 | # Increase if your column values are large, or if you have a very large 164 | # number of columns. The competing causes are, Cassandra has to 165 | # deserialize this much of the row to read a single column, so you want 166 | # it to be small - at least if you do many partial-row reads - but all 167 | # the index data is read for each access, so you don't want to generate 168 | # that wastefully either. 169 | column_index_size_in_kb: 64 170 | 171 | # Size limit for rows being compacted in memory. Larger rows will spill 172 | # over to disk and use a slower two-pass compaction process. A message 173 | # will be logged specifying the row key. 
174 | in_memory_compaction_limit_in_mb: 16 175 | 176 | # Time to wait for a reply from other nodes before failing the command 177 | rpc_timeout_in_ms: 10000 178 | 179 | # phi value that must be reached for a host to be marked down. 180 | # most users should never need to adjust this. 181 | # phi_convict_threshold: 8 182 | 183 | # endpoint_snitch -- Set this to a class that implements 184 | # IEndpointSnitch, which will let Cassandra know enough 185 | # about your network topology to route requests efficiently. 186 | # Out of the box, Cassandra provides 187 | # - org.apache.cassandra.locator.SimpleSnitch: 188 | # Treats Strategy order as proximity. This improves cache locality 189 | # when disabling read repair, which can further improve throughput. 190 | # - org.apache.cassandra.locator.RackInferringSnitch: 191 | # Proximity is determined by rack and data center, which are 192 | # assumed to correspond to the 3rd and 2nd octet of each node's 193 | # IP address, respectively 194 | # org.apache.cassandra.locator.PropertyFileSnitch: 195 | # - Proximity is determined by rack and data center, which are 196 | # explicitly configured in cassandra-rack.properties. 197 | endpoint_snitch: org.apache.cassandra.locator.SimpleSnitch 198 | 199 | # dynamic_snitch -- This boolean controls whether the above snitch is 200 | # wrapped with a dynamic snitch, which will monitor read latencies 201 | # and avoid reading from hosts that have slowed (due to compaction, 202 | # for instance) 203 | dynamic_snitch: true 204 | # controls how often to perform the more expensive part of host score 205 | # calculation 206 | dynamic_snitch_update_interval_in_ms: 100 207 | # controls how often to reset all host scores, allowing a bad host to 208 | # possibly recover 209 | dynamic_snitch_reset_interval_in_ms: 600000 210 | # if set greater than zero and read_repair_chance is < 1.0, this will allow 211 | # 'pinning' of replicas to hosts in order to increase cache capacity. 
212 | # The badness threshold will control how much worse the pinned host has to be 213 | # before the dynamic snitch will prefer other replicas over it. This is 214 | # expressed as a double which represents a percentage. 215 | dynamic_snitch_badness_threshold: 0.0 216 | 217 | # request_scheduler -- Set this to a class that implements 218 | # RequestScheduler, which will schedule incoming client requests 219 | # according to the specific policy. This is useful for multi-tenancy 220 | # with a single Cassandra cluster. 221 | # NOTE: This is specifically for requests from the client and does 222 | # not affect inter node communication. 223 | # org.apache.cassandra.scheduler.NoScheduler - No scheduling takes place 224 | # org.apache.cassandra.scheduler.RoundRobinScheduler - Round robin of 225 | # client requests to a node with a separate queue for each 226 | # request_scheduler_id. The scheduler is further customized by 227 | # request_scheduler_options as described below. 228 | request_scheduler: org.apache.cassandra.scheduler.NoScheduler 229 | 230 | encryption_options: 231 | internode_encryption: none 232 | keystore: conf/.keystore 233 | keystore_password: cassandra 234 | truststore: conf/.truststore 235 | truststore_password: cassandra 236 | # Scheduler Options vary based on the type of scheduler 237 | # NoScheduler - Has no options 238 | # RoundRobin 239 | # - throttle_limit -- The throttle_limit is the number of in-flight 240 | # requests per client. Requests beyond 241 | # that limit are queued up until 242 | # running requests can complete. 243 | # The value of 80 here is twice the number of 244 | # concurrent_reads + concurrent_writes. 245 | # - default_weight -- default_weight is optional and allows for 246 | # overriding the default which is 1. 247 | # - weights -- Weights are optional and will default to 1 or the 248 | # overridden default_weight. 
The weight translates into how 249 | # many requests are handled during each turn of the 250 | # RoundRobin, based on the scheduler id. 251 | # 252 | # request_scheduler_options: 253 | # throttle_limit: 80 254 | # default_weight: 5 255 | # weights: 256 | # Keyspace1: 1 257 | # Keyspace2: 5 258 | 259 | # request_scheduler_id -- An identifier based on which to perform 260 | # the request scheduling. Currently the only valid option is keyspace. 261 | # request_scheduler_id: keyspace 262 | 263 | # The Index Interval determines how large the sampling of row keys 264 | # is for a given SSTable. The larger the sampling, the more effective 265 | # the index is at the cost of space. 266 | index_interval: 128 267 | 268 | # A ColumnFamily is the Cassandra concept closest to a relational table. 269 | # 270 | # Keyspaces are separate groups of ColumnFamilies. Except in very 271 | # unusual circumstances you will have one Keyspace per application. 272 | # 273 | # Keyspace required parameters: 274 | # - name: name of the keyspace; "system" and "definitions" are 275 | # reserved for Cassandra Internals. 276 | # - replica_placement_strategy: the class that determines how replicas 277 | # are distributed among nodes. Contains both the class as well as 278 | # configuration information. Must extend AbstractReplicationStrategy. 279 | # Out of the box, Cassandra provides 280 | # * org.apache.cassandra.locator.SimpleStrategy 281 | # * org.apache.cassandra.locator.NetworkTopologyStrategy 282 | # * org.apache.cassandra.locator.OldNetworkTopologyStrategy 283 | # 284 | # SimpleStrategy merely places the first 285 | # replica at the node whose token is closest to the key (as determined 286 | # by the Partitioner), and additional replicas on subsequent nodes 287 | # along the ring in increasing Token order. 288 | # 289 | # With NetworkTopologyStrategy, 290 | # for each datacenter, you can specify how many replicas you want 291 | # on a per-keyspace basis.
Replicas are placed on different racks 292 | # within each DC, if possible. This strategy also requires a rack-aware 293 | # snitch, such as RackInferringSnitch or PropertyFileSnitch. 294 | # An example: 295 | # - name: Keyspace1 296 | # replica_placement_strategy: org.apache.cassandra.locator.NetworkTopologyStrategy 297 | # strategy_options: 298 | # DC1 : 3 299 | # DC2 : 2 300 | # DC3 : 1 301 | # 302 | # OldNetworkTopologyStrategy [formerly RackAwareStrategy] 303 | # places one replica in each of two datacenters, and the third on a 304 | # different rack in the first. Additional datacenters are not 305 | # guaranteed to get a replica. Additional replicas after three are placed 306 | # in ring order after the third without regard to rack or datacenter. 307 | # 308 | # - replication_factor: Number of replicas of each row 309 | # - column_families: column families associated with this keyspace 310 | # 311 | # ColumnFamily required parameters: 312 | # - name: name of the ColumnFamily. Must not contain the character "-". 313 | # - compare_with: tells Cassandra how to sort the columns for slicing 314 | # operations. The default is BytesType, which is a straightforward 315 | # lexical comparison of the bytes in each column. Other options are 316 | # AsciiType, UTF8Type, LexicalUUIDType, TimeUUIDType, LongType, 317 | # and IntegerType (a generic variable-length integer type). 318 | # You can also specify the fully-qualified class name of a class of 319 | # your choice extending org.apache.cassandra.db.marshal.AbstractType. 320 | # 321 | # ColumnFamily optional parameters: 322 | # - keys_cached: specifies the number of keys per sstable whose 323 | # locations we keep in memory in "mostly LRU" order. (JUST the key 324 | # locations, NOT any column values.) Specify a fraction (value less 325 | # than 1) or an absolute number of keys to cache. Defaults to 200000 326 | # keys. 327 | # - rows_cached: specifies the number of rows whose entire contents we 328 | # cache in memory.
Do not use this on ColumnFamilies with large rows, 329 | # or ColumnFamilies with high write:read ratios. Specify a fraction 330 | # (value less than 1) or an absolute number of rows to cache. 331 | # Defaults to 0. (i.e. row caching is off by default) 332 | # - comment: used to attach additional human-readable information about 333 | # the column family to its definition. 334 | # - read_repair_chance: specifies the probability with which read 335 | # repairs should be invoked on non-quorum reads. must be between 0 336 | # and 1. defaults to 1.0 (always read repair). 337 | # - gc_grace_seconds: specifies the time to wait before garbage 338 | # collecting tombstones (deletion markers). defaults to 864000 (10 339 | # days). See http://wiki.apache.org/cassandra/DistributedDeletes 340 | # - default_validation_class: specifies a validator class to use for 341 | # validating all the column values in the CF. 342 | # - min_compaction_threshold: the minimum number of SSTables needed 343 | # to start a minor compaction. increasing this will cause minor 344 | # compactions to start less frequently and be more intensive. setting 345 | # this to 0 disables minor compactions. defaults to 4. 346 | # - max_compaction_threshold: the maximum number of SSTables allowed 347 | # before a minor compaction is forced. decreasing this will cause 348 | # minor compactions to start more frequently and be less intensive. 349 | # setting this to 0 disables minor compactions. defaults to 32. 350 | # - row_cache_save_period_in_seconds: number of seconds between saving 351 | # row caches. The row caches can be saved periodically and if one 352 | # exists on startup it will be loaded. 353 | # - key_cache_save_period_in_seconds: number of seconds between saving 354 | # key caches. The key caches can be saved periodically and if one 355 | # exists on startup it will be loaded. 356 | # - memtable_flush_after_mins: The maximum time to leave a dirty table 357 | # unflushed. 
This should be large enough that it won't cause a flush 358 | # storm of all memtables during periods of inactivity. 359 | # - memtable_throughput_in_mb: The maximum size of the memtable before 360 | # it is flushed. If undefined, 1/8 * heapsize will be used. 361 | # - memtable_operations_in_millions: Number of operations in millions 362 | # before the memtable is flushed. If undefined, throughput / 64 * 0.3 363 | # will be used. 364 | # 365 | # NOTE: this keyspace definition is for demonstration purposes only. 366 | # Cassandra will not load these definitions during startup. See 367 | # http://wiki.apache.org/cassandra/FAQ#no_keyspaces for an explanation. 368 | 369 | -------------------------------------------------------------------------------- /src/test/resources/log4j.properties: -------------------------------------------------------------------------------- 1 | # Licensed to the Apache Software Foundation (ASF) under one 2 | # or more contributor license agreements. See the NOTICE file 3 | # distributed with this work for additional information 4 | # regarding copyright ownership. The ASF licenses this file 5 | # to you under the Apache License, Version 2.0 (the 6 | # "License"); you may not use this file except in compliance 7 | # with the License. You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | # for production, you should probably set the root to INFO 18 | # and the pattern to %c instead of %l. (%l is slower.) 
19 | 20 | # output messages to stdout 21 | log4j.rootLogger=INFO,stdout 22 | 23 | # stdout 24 | log4j.appender.stdout=org.apache.log4j.ConsoleAppender 25 | log4j.appender.stdout.layout=org.apache.log4j.PatternLayout 26 | log4j.appender.stdout.layout.ConversionPattern=%-5p [%t]: %m%n 27 | 28 | 29 | #log4j.category.org.apache=INFO, stdout 30 | log4j.logger.com.datastax.logging=INFO 31 | -------------------------------------------------------------------------------- /src/test/resources/log4j.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | --------------------------------------------------------------------------------
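The test configuration above only wires up a console appender. As a minimal sketch of how the CassandraAppender shipped in this repository (src/main/java/com/datastax/logging/appender/CassandraAppender.java) could be attached to the root logger — note that any appender-specific connection properties are deliberately omitted here, since the supported options are defined in log4j-example.properties in the repository root:

```properties
# Send root-logger output to both the console and Cassandra.
log4j.rootLogger=INFO, stdout, CASSANDRA

# Console appender, as in the test configuration above.
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%-5p [%t]: %m%n

# CassandraAppender from this repository; see log4j-example.properties
# for the connection and keyspace settings it accepts.
log4j.appender.CASSANDRA=com.datastax.logging.appender.CassandraAppender
```

The appender name "CASSANDRA" is arbitrary; log4j only requires that the name after `log4j.rootLogger=` match the name used in the `log4j.appender.*` keys.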