├── NewClustering.wiki
├── Advocacy.wiki
├── NewInternals.wiki
├── NewClients.wiki
├── Tutorials.wiki
├── NewGlossary.wiki
├── NewLinks.wiki
├── TestSuites.wiki
├── NewProposals.wiki
├── NewProtocols.wiki
├── ReleaseNotes128.wiki
├── NewDevelopmentProcess.wiki
├── NewClusterMaint.wiki
├── DevelopmentRepos.wiki
├── ReleaseNotes.wiki
├── SASLHowto.wiki
├── NewCommunication.wiki
├── ReleaseNotes1413.wiki
├── ConventionGuide.wiki
├── PlatformWindows.wiki
├── MonitorLinks.wiki
├── ReleaseNotes1424.wiki
├── ReleaseNotes149.wiki
├── ReleaseNotes1420.wiki
├── ReleaseNotes1419.wiki
├── NewStart.wiki
├── ReleaseNotes1412.wiki
├── ReleaseNotes1421.wiki
├── Tools.wiki
├── BuildFarm.wiki
├── HowToLearnMoreScalability.wiki
├── ReleaseNotes1415.wiki
├── NewInstallFromPackage.wiki
├── ReleaseNotes141rc1.wiki
├── ReleaseNotes141.wiki
├── ReleaseNotes1422.wiki
├── BoothGuide.wiki
├── ReleaseNotes144.wiki
├── MemcachedOffspring.wiki
├── NewInstallFromSource.wiki
├── ReleaseNotes147.wiki
├── ReleaseNotes146rc1.wiki
├── ReleaseNotes147rc1.wiki
├── ReleaseNotes145.wiki
├── ReleaseNotes146.wiki
├── ReleaseNotes127.wiki
├── Resources.wiki
├── ReleaseNotes133.wiki
├── ReleaseNotes1417.wiki
├── ReleaseNotes1414.wiki
├── ReleaseNotes1410.wiki
├── ReleaseNotes143rc1.wiki
├── ReleaseNotes1416.wiki
├── ReleaseNotes143.wiki
├── ReleaseNotes143rc2.wiki
├── ReleaseNotes1418.wiki
├── SASLAuthProtocol.wiki
├── PHPClientComparison.wiki
├── ReleaseNotes1423.wiki
├── ReleaseNotes142rc1.wiki
├── ReleaseNotes142.wiki
├── ProposalTagSupport.wiki
├── NewCommands.wiki
├── NewCommonFeatures.wiki
├── NewHardware.wiki
├── MemoryManagement.wiki
├── ReleaseNotes1411beta1.wiki
├── ReleaseNotes1411rc1.wiki
├── ReleaseNotes148rc1.wiki
├── ReleaseNotes1411.wiki
├── NewUserInternals.wiki
├── ReleaseNotes148.wiki
├── NewConfiguringClient.wiki
├── NewConfiguringServer.wiki
├── NewOverview.wiki
├── TutorialCachingStory.wiki
├── Timeouts.wiki
└── ReleaseNotes132.wiki
/NewClustering.wiki:
--------------------------------------------------------------------------------
1 | #summary Noisy Gaggle
2 |
3 | TODO: Section feels redundant. Remove?
4 |
--------------------------------------------------------------------------------
/Advocacy.wiki:
--------------------------------------------------------------------------------
1 | #summary Memcached advocacy overview
2 |
3 | = Project Advocacy (TENTATIVE NOTES!) =
4 |
5 | * [ConventionGuide]
6 | * [BoothGuide]
--------------------------------------------------------------------------------
/NewInternals.wiki:
--------------------------------------------------------------------------------
1 | #summary Autopsy
2 |
3 | TODO: Describe full slabber details, item sizing, thread local buffers.
4 |
5 | centralized stats
6 | central locks
7 | engine interface
8 | refcounts on items
9 |
10 |
--------------------------------------------------------------------------------
/NewClients.wiki:
--------------------------------------------------------------------------------
1 | #summary Pick Your Poison
2 |
3 | See the [Clients old clients page] for now.
4 |
5 | Please someone help with the new clients page :)
6 |
7 | We need to index it in other ways: list all the libmemcached-based clients in one spot, cull any inactive clients, and add more commentary on what the other clients support.
8 |
--------------------------------------------------------------------------------
/Tutorials.wiki:
--------------------------------------------------------------------------------
1 | #summary Tutorials
2 |
3 | == Wiki ==
4 |
5 | * [TutorialCachingStory This is a Story of Caching] - A tutorial in story form that may help you understand how to use memcached.
6 |
7 | == External ==
8 |
9 | * [http://www.majordojo.com/2007/03/memcached-howto.php An introduction to Memcached] (2007)
--------------------------------------------------------------------------------
/NewGlossary.wiki:
--------------------------------------------------------------------------------
1 | #summary Common Confusing Chorus
2 |
3 |
4 |
5 | == Item or Object ==
6 |
7 | Used to refer to a string of bytes stored in a memcached daemon. Each item consists of a "key" and a variable-length "value".
8 |
9 | == Consistent Hashing ==
10 |
11 | A hashing scheme that maps both servers and keys onto points of a continuum (a "ring"), so that when a server is added or removed only a small fraction of keys move to a different server, rather than nearly all of them as with naive modulo hashing.
12 |
--------------------------------------------------------------------------------
/NewLinks.wiki:
--------------------------------------------------------------------------------
1 | #summary Fine, Leave
2 |
3 |
4 |
5 | Various pages about related topics or programs.
6 |
7 | * [Tools Tools for Inspecting Or Managing Memcached]
8 | * [MonitorLinks Tools/Articles specific to monitoring Memcached]
9 | * [Resources Memcached Specific Presentations and Documentation]
10 | * [HowToLearnMoreScalability Learn More About Scalability]
11 |
--------------------------------------------------------------------------------
/TestSuites.wiki:
--------------------------------------------------------------------------------
1 | = QA Development / source code repositories =
2 |
3 | * `make test` from the memcached tarball or git repo (see the example below)
4 | * Standard regression suite for the server.
5 | * http://github.com/victorkirkebo/mctest/tree/master
6 | * Server tests (textual and binary protocol testing)
7 | * http://github.com/dustin/memcached-test
8 | * Client and server test code (used primarily for binary protocol testing)
9 |
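For example, running the standard suite from a git checkout (referenced above) looks roughly like this; the clone URL is the central repo listed on [DevelopmentRepos]:

{{{
git clone git://github.com/memcached/memcached.git
cd memcached
./autogen.sh && ./configure && make
make test
}}}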
--------------------------------------------------------------------------------
/NewProposals.wiki:
--------------------------------------------------------------------------------
1 | #summary I Declare
2 |
3 |
4 |
5 | The current primary focus is the [EngineInterface Storage Engines] proposal, which will allow people to reuse the memcached protocol shell and daemon for similar activities easily, ideally preventing forks and allowing experimentation with improving existing features.
6 |
7 | There is a very old proposal:
8 |
9 | * [ProposalTagSupport]
10 |
11 |
--------------------------------------------------------------------------------
/NewProtocols.wiki:
--------------------------------------------------------------------------------
1 | #summary Klingon
2 |
3 | Memcached supports two main protocols: the classic ASCII protocol and the newer binary protocol. It's simpler to write clients and debug problems with the ASCII protocol, but the binary protocol affords us many new abilities.
4 |
5 | * [http://github.com/memcached/memcached/blob/master/doc/protocol.txt Text Protocol]
6 | * [BinaryProtocolRevamped Binary Protocol]
7 | * [http://www.slideshare.net/tmaesaka/memcached-binary-protocol-in-a-nutshell-presentation/ Slides on binary protocol ] by Toru Maesaka (2008)
8 |
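For a quick feel of the ASCII protocol, here is a hand-typed session over telnet (the host, port, key, and value are just examples):

{{{
$ telnet localhost 11211
set greeting 0 300 5
hello
STORED
get greeting
VALUE greeting 0 5
hello
END
quit
}}}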
9 | Further, there are sub-protocols and proposals:
10 |
11 | * [SASLAuthProtocol SASL Authentication]
12 | * [RangeOps Range operations] - Not to be supported in core, but defined for storage engines and compatible clients.
--------------------------------------------------------------------------------
/ReleaseNotes128.wiki:
--------------------------------------------------------------------------------
1 | #summary post-facto release notes for 1.2.8
2 |
3 | = Release Notes for 1.2.8 =
4 |
5 | There were no release notes for 1.2.8, as it was a quick turnaround fix for an issue that had come up after 1.2.7.
6 |
7 | From the email announcement about 1.2.8:
8 |
9 | A few bugs were fixed that warranted an immediate release of memcached
1.2.8. If you are running memcached prior to 1.2.7, please skip 1.2.7 and
upgrade straight to 1.2.8.
10 | *Please note* that the critical bug noted below only affects memcached
binaries built with --enable-threads. While highly recommended, this is
not the default in the 1.2 series. It is the default in 1.3.
11 |
12 | [http://groups.google.com/group/memcached/browse_thread/thread/ff96a9b88fb5d40e email about 1.2.8]
--------------------------------------------------------------------------------
/NewDevelopmentProcess.wiki:
--------------------------------------------------------------------------------
1 | #summary This is What We Do
2 |
3 | == Release Cycle ==
4 |
5 | * 3 weeks after each new stable release, we release -rc1 for the next release.
6 | * New -rc's will be cut daily or every other day if there are fixes.
7 | * After 3 days in RC, unless there are still bug reports coming in, stable is released.
8 |
9 | This should lead to a stable release roughly once per month. Exceptions
10 | can be made, as usual. Major bug finds warrant earlier releases.
11 | Cycles with large code changes all at once might warrant
12 | an earlier cut to -rc1 and a 2-3 week -rc cycle.
13 |
14 | The release may be done by any of the core committers to memcached, but the process requires separation of contribution and review (i.e. the author does not commit/review their own changes).
15 |
16 | == Road Map ==
17 |
18 | Our primary goal is to finalize and release the [EngineInterface Storage Engine Interface]. This may end up including other changes.
19 |
20 | As this comes together, we'll further discuss more formal roadmaps, pieced together from the various notes on the old wiki along with new discussions.
21 |
--------------------------------------------------------------------------------
/NewClusterMaint.wiki:
--------------------------------------------------------------------------------
1 | #summary Committee
2 |
3 | TODO: Fold this into ServerMaint to reduce frontpage links?
4 |
5 | = Capacity Planning =
6 |
7 | Setting up graphs (see [Tools] and similar) for long-term monitoring of many memcached values is an important part of capacity planning. Watch trends over time and draw lines to decide when it's time to investigate what your application is up to, or to add more memory to the cluster.
8 |
9 | = Upgrades =
10 |
11 | We're very careful about the high quality of memcached releases, but you should exercise proper caution when upgrading. Run a new release in any QA or dev environment you may have for a while, then upgrade a single server in production. If things look okay, slowly roll it out to the rest.
12 |
13 | Running clusters with mixed versions can be a headache for administrators when using monitoring.
14 |
15 | = Finding Outliers =
16 |
17 | If you're carefully graphing all servers, or using tools to monitor all servers, watch for outliers. Bugs in clients, or small numbers of hot keys can cause some servers to get much more traffic than others. Identify these before they become a hazard.
18 |
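A crude way to spot-check for outliers from a shell, assuming `nc` is available (the host names are placeholders; compare the numbers across servers):

{{{
for host in cache01 cache02 cache03; do
  echo -n "$host "
  # print the cmd_get counter from each server's stats output
  printf 'stats\r\nquit\r\n' | nc $host 11211 | awk '$2 == "cmd_get" {print $3}'
done
}}}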
19 |
--------------------------------------------------------------------------------
/DevelopmentRepos.wiki:
--------------------------------------------------------------------------------
1 | = Development =
2 |
3 | If you are working on development, please publish your work and cc the mailing list as much as possible.
4 |
5 | The "master" tree in the central repo should always contain runnable, high quality code. We go out of our way to ensure nothing goes to the central repo without a barrage of regression tests and code reviews.
6 |
7 | That aside, use at your own risk.
8 |
9 | == The "central" github repo: ==
10 |
11 | Web View: http://github.com/memcached/memcached
12 |
13 | {{{
14 | git clone git://github.com/memcached/memcached.git
15 | }}}
16 |
17 | == Active Development ==
18 |
19 | We are actively developing against the 1.4 and 1.6 trees.
20 |
21 | New changes are *highly* preferred for 1.6 only. 1.4 should receive fewer
22 | updates.
23 |
24 | The 1.4 tree is the "master" branch, while 1.6 is the "engine-pu" branch.
25 |
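To work against the 1.6 tree, a checkout might look like this (branch names as described above; everything else is just an example):

{{{
git clone git://github.com/memcached/memcached.git
cd memcached
git checkout -b engine-pu origin/engine-pu   # the 1.6 development tree
}}}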
26 | == Developer repos: ==
27 |
28 | * dormando: git://github.com/dormando/memcached.git
29 | * dustin: git://github.com/dustin/memcached.git
30 | * trond: git://github.com/trondn/memcached.git
31 |
32 | = Archive =
33 |
34 | * The community has moved to git (distributed version control) for its development needs, but the original Subversion repository is still available.
35 | * http://code.sixapart.com/svn/memcached/
36 |
--------------------------------------------------------------------------------
/ReleaseNotes.wiki:
--------------------------------------------------------------------------------
1 | #summary Quick links to release notes of various versions.
2 |
3 | = Recent Releases =
4 | * [ReleaseNotes1424] 1.4.24 (2015-4-25)
5 | * [ReleaseNotes1423] 1.4.23 (2015-4-19)
6 | * [ReleaseNotes1422] 1.4.22 (2014-12-31)
7 | * [ReleaseNotes1421] 1.4.21 (2014-10-12)
8 | * [ReleaseNotes1420] 1.4.20 (2014-5-11)
9 | * [ReleaseNotes1419] 1.4.19 (2014-5-1)
10 | * [ReleaseNotes1418] 1.4.18 (2014-4-17)
11 | * [ReleaseNotes1417] 1.4.17 (2013-12-20)
12 | * [ReleaseNotes1416] 1.4.16 (2013-12-9)
13 | * [ReleaseNotes1415] 1.4.15 (2012-9-3)
14 | * [ReleaseNotes1414] 1.4.14 (2012-7-30)
15 | * [ReleaseNotes1413] 1.4.13 (2012-2-2)
16 | * [ReleaseNotes1412] 1.4.12 (2012-2-1)
17 | * [ReleaseNotes1411] 1.4.11 (2012-1-16)
18 | * [ReleaseNotes1410] 1.4.10 (2011-11-9)
19 | * [ReleaseNotes149] 1.4.9 (2011-10-18)
20 | * [ReleaseNotes148] 1.4.8 (October 4th, 2011)
21 | * [ReleaseNotes147] 1.4.7 (August 16th, 2011)
22 | * [ReleaseNotes146] 1.4.6 (July 15th, 2011)
23 | * [ReleaseNotes145] 1.4.5 (April 3rd, 2010)
24 | * [ReleaseNotes144] 1.4.4 (November 26th 2009)
25 | * [ReleaseNotes143] 1.4.3 (November 7th, 2009)
26 | * [ReleaseNotes142] 1.4.2 (October 11th, 2009)
27 | * [ReleaseNotes141] 1.4.1 (August 29th, 2009)
28 | * [ReleaseNotes140] 1.4.0 (July 9th, 2009)
29 |
30 | == Notes for Old Releases ==
31 | * [ReleaseNotes128] 1.2.8 (April 11, 2009)
32 | * [ReleaseNotes127] 1.2.7 (April 3, 2009)
33 |
--------------------------------------------------------------------------------
/SASLHowto.wiki:
--------------------------------------------------------------------------------
1 | #summary HOWTO use SASL auth.
2 |
3 | = Introduction =
4 |
5 | In order to use memcached in a hostile network (e.g. a cloudy ISP where the infrastructure is shared and you can't control it), you're going to want some kind of way to keep people from messing with your cache servers.
6 |
7 | [http://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer SASL] (as described in [http://tools.ietf.org/html/rfc2222 RFC2222]) is a standard for adding authentication mechanisms to protocols in a way that is protocol independent.
8 |
9 | = Getting Started =
10 |
11 | In order to deploy memcached with SASL, you'll need two things:
12 |
13 | # A memcached server with SASL support (version 1.4.3 or greater built with `--enable-sasl`)
14 | # A client that supports SASL
15 |
16 | == Configuring SASL ==
17 |
18 | For the most part, you just do the normal SASL admin stuff.
19 |
20 | {{{
21 | # Create a user for memcached.
22 | saslpasswd2 -a memcached -c cacheuser
23 | }}}
24 |
25 | == Running Memcached ==
26 |
27 | In order to enable SASL support in the server you must use the `-S` flag.
28 |
29 | The `-S` flag does a few things:
30 |
31 | # Enable all of the SASL commands.
32 | # Require binary protocol _only_.
33 | # Require authentication to have been successful before commands may be issued on a connection.
34 |
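As a rough sketch, starting a SASL-enabled build might look like the following (the memory size, port, and user are just example values; only `-S` is required for SASL):

{{{
# requires a memcached built with --enable-sasl
memcached -S -m 64 -p 11211 -u nobody
}}}

Clients then authenticate over the binary protocol before issuing any other commands.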
35 | == Further Info ==
36 |
37 | Read more about memcached's [SASLAuthProtocol SASL auth protocol].
--------------------------------------------------------------------------------
/NewCommunication.wiki:
--------------------------------------------------------------------------------
1 | #summary Speaking Memcached
2 |
3 | Memcached fosters an active mailing list and a friendly IRC channel. Developers and users attend many technical conferences as well. If you're having trouble, odds are good that searching the mailing list archives, or posting to the mailing list, will be a step in the right direction. Many smart people from many web companies chill out with us.
4 |
5 | If you wish to partake in the fun, please remember to be courteous and professional. We should aim to help people accomplish great things, and in doing so learn more ourselves. Remember that these channels are about the *open source project*, memcached. We're thankful for your commercial support and amazing products, but please keep advertising out of these channels.
6 |
7 | * [http://groups.google.com/group/memcached Mailing list]
8 | * [http://lists.danga.com/mailman/listinfo/memcached Old Mailing List Archive]
9 | * IRC - irc.freenode.net #memcached
10 | * [Advocacy]
11 |
12 | = Wiki Contributions =
13 |
14 | Want to contribute to the wiki? Be our guest!
15 |
16 | The easiest way to get a change through is to e-mail a patch or phrase or page idea to the mailing list. Many wiki editors are active over there.
17 |
18 | If you would like to edit it yourself, or continue to make contributions more easily, simply ask the mailing list for wiki access. Restrictions are purely to avoid spam, and not to keep people from helping.
19 |
20 | Seriously, it's easy. We like documentation. Help out!
21 |
--------------------------------------------------------------------------------
/ReleaseNotes1413.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.13
2 |
3 | = Memcached 1.4.13 Release Notes =
4 |
5 | Date: 2012-2-2
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.13.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Really tiny release with some important build fixes which were accidentally
17 | omitted from 1.4.12.
18 |
19 | For the interesting meat see [ReleaseNotes1412] and [ReleaseNotes1411]
20 | especially, for slab memory reassignment!
21 |
22 | == Fixes ==
23 |
24 | * Fix inline issue with older compilers (gcc 4.2.2)
25 | * Better detection of sasl_callback_ft
26 |
27 | == New Features ==
28 |
29 | Sigh.
30 |
31 | == Contributors ==
32 |
33 | The following people contributed to this release since 1.4.12.
34 |
35 | Note that this is based on who contributed changes, not how they were
36 | done. In many cases, a code snippet on the mailing list or a bug
37 | report ended up as a commit with your name on it.
38 |
39 | Note that this is just a summary of how many changes each person made
40 | which doesn't necessarily reflect how significant each change was.
41 | For details on what led up into a branch, either grab the git repo and
42 | look at the output of `git log 1.4.12..1.4.13` or use a web view.
43 |
44 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
45 | * Web View: http://github.com/memcached/memcached/commits/1.4.13
46 |
47 | {{{
48 | 1 Dustin Sallings
49 | 1 Steve Wills
50 | }}}
51 |
52 | == Control ==
53 |
--------------------------------------------------------------------------------
/ConventionGuide.wiki:
--------------------------------------------------------------------------------
1 | #summary Advocacy at conventions
2 |
3 | = TENTATIVE NOTES =
4 |
5 | [Advocacy From the Advocacy page]
6 |
7 | *If you are staffing the booth*
8 |
9 | If you are staffing the booth, please wear unbranded or memcached-branded clothing.
10 |
11 | *If you want to have marketing material at the booth*
12 |
13 | Having a well organized and inclusive booth is important to the memcached community. Our booths have helped many people answer questions, and sometimes given out fun materials.
14 |
15 | Only material from contributors is allowed in the memcached portfolio. If you are not a contributor, there are a number of ways to become one:
16 |
17 | {{{
18 | * Help review documentation and submit patches
19 | * Sponsor development
20 | * Participate actively in the lists
21 | }}}
22 |
23 | {{{
24 | * We welcome and invite all contributors to submit material to be presented
25 | to potential booth visitors.
26 | * Material presented at the booths will be memcached and memcached
27 | contributors only.
28 | * Material should be constructive, showing off the strengths and new abilities
29 | of memcached. Advocacy in the form of commercial alternatives or commercial
30 | support at the cost of the project are to be strictly avoided.
31 | }}}
32 |
33 | The material provided by contributors will be presented via a memcached "portfolio". Essentially a folder. The folder will be memcached branded, full color and very attractive. The requirement for submission to the folder is:
34 |
35 | * One 8.5x11 sheet; full color is fine, duplex is fine.
--------------------------------------------------------------------------------
/PlatformWindows.wiki:
--------------------------------------------------------------------------------
1 | Windows support is in the mainline now, so most of this page is history. The build toolchain relies on MinGW; other than that, building memcached is like on every other platform.
2 |
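As a rough sketch only (exact steps depend on your MinGW/MSYS environment and where libevent is installed), the build is the usual autotools dance:

{{{
./autogen.sh
./configure
make && make test
}}}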
3 | Articles:
4 | * [http://stackoverflow.com/questions/8896/memcached-on-windows-x64 StackOverflow discussion]
5 | * [http://codebetter.com/blogs/karlseguin/archive/2008/07/07/scale-cheaply-memcached.aspx introduction to Memcached on Windows] (2008)
6 | * [http://pureform.wordpress.com/2008/01/10/installing-memcache-on-windows-for-php installing memcached on windows for php] (2008)
7 |
8 | Clients:
9 | * See the [Clients] page for Windows client libraries.
10 | * ASP.net Cache Provider & Session State Provider for memcached can be downloaded from the following link
11 | * http://www.codeplex.com/memcachedproviders
12 |
13 | Tools:
14 | * damemtop
15 | * Downloaded with memcached, in the "scripts" directory or see [http://github.com/memcached/memcached/tree/master/scripts/ the repo on github]
16 | * MemCacheD Manager, by Nick Pirocanac
17 | * http://allegiance.chi-town.com/MemCacheDManager.aspx
18 |
19 | Ports (older):
20 | * [http://github.com/CaptTofu/memcached source code], see 'win32' branch
21 | * [http://code.jellycan.com/memcached/ memcached for windows, version 1.2.6 and 1.2.5 ports] (Sept 2008)
22 | * [http://www.splinedancer.com/memcached-win32 memcached for windows, version 1.2.4 port]
23 | * [http://jehiah.cz/projects/memcached-win32 memcached for windows, version 1.2.1 port]
24 |
25 | Even older:
26 | * [http://www.ctoforaday.com/articles/000039.html windows port] (2004)
--------------------------------------------------------------------------------
/MonitorLinks.wiki:
--------------------------------------------------------------------------------
1 | #summary Tools for monitoring memcached
2 |
3 | * Cacti templates:
4 | * [http://code.google.com/p/mysql-cacti-templates/wiki/MemcachedTemplates Better Cacti Templates Memcached Templates]
5 | * [http://dealnews.com/developers/cacti/memcached.html Dealnews Cacti template]
6 | * http://forums.cacti.net/about14605.html
7 | * http://www.faemalia.net/mysqlUtils/
8 | * [http://tag1consulting.com/node/58 Multiple port per server Cacti Templates]
9 |
10 | * [http://code.google.com/p/phpmemcacheadmin/ PhpMemcacheAdmin]
11 | * [http://www.hyperic.com/products/managed/memcached-management.htm Hyperic plugin]
12 | * [http://search.cpan.org/~zigorou/Nagios-Plugins-Memcached-0.02/lib/Nagios/Plugins/Memcached.pm Nagios plugin]
13 | * Ganglia
14 | * http://ben.hartshorne.net/ganglia/ (2006)
15 | * http://www.hitflip.de/opensource.html
16 | * [http://livebookmark.net/journal/2008/05/21/memcachephp-stats-like-apcphp/ PHP-based control panel]
17 | * [http://munin.projects.linpro.no/wiki/plugin-memcache Munin plugin]
18 | * Ruby gem: memcache-client-stats
19 | * Python: [http://effbot.org/zone/django-memcached-view.htm django examples]
20 | * During development:
21 | * http://github.com/andrewfromgeni/mcinsight - GUI to examine memcached server
22 | * http://code.google.com/p/memcached-manager
23 | * Windows
24 | * MemCacheD Manager, by Nick Pirocanac
25 | * http://allegiance.chi-town.com/MemCacheDManager.aspx
26 |
27 | Or, just connect to the port where memcached is listening and run a simple command like 'version' or 'stats' to see if it is responding.
28 |
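For example, a quick check from a shell (the hostname and the reported version below are placeholders):

{{{
$ printf 'version\r\nquit\r\n' | nc my-memcached-host 11211
VERSION 1.4.24
}}}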
--------------------------------------------------------------------------------
/ReleaseNotes1424.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.24
2 |
3 | = Memcached 1.4.24 Release Notes =
4 |
5 | Date: 2015-4-25
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://www.memcached.org/files/memcached-1.4.24.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Bugfix release to replace 1.4.23. If you tried 1.4.23, please try this version instead. I apologize for any frustrations from the .23 release. If you see further issues, please report them quickly and we will look into them.
17 |
18 | == Fixes ==
19 |
20 | * relax timing glitch in the lru maintainer test
21 | * fix major off by one issue
22 |
23 | == New Features ==
24 |
25 | Please see: https://code.google.com/p/memcached/wiki/ReleaseNotes1423 for the changes new to 1.4.23, as they were significant.
26 |
27 | == Contributors ==
28 |
29 | The following people contributed to this release since 1.4.23.
30 |
31 | Note that this is based on who contributed changes, not how they were
32 | done. In many cases, a code snippet on the mailing list or a bug
33 | report ended up as a commit with your name on it.
34 |
35 | Note that this is just a summary of how many changes each person made
36 | which doesn't necessarily reflect how significant each change was.
37 | For details on what led up into a branch, either grab the git repo and
38 | look at the output of `git log 1.4.23..1.4.24` or use a web view.
39 |
40 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
41 | * Web View: http://github.com/memcached/memcached/commits/1.4.24
42 |
43 | {{{
44 | 3 dormando
45 |
46 | }}}
47 |
48 | == Control ==
49 |
--------------------------------------------------------------------------------
/ReleaseNotes149.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.9
2 |
3 | = Memcached 1.4.9 Release Notes =
4 |
5 | Date: 2011-10-18
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.9.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Small bugfix release. Mainly fixing a critical issue where using -c to
17 | increase the connection limit was broken in 1.4.8. If you are on 1.4.8, an
18 | upgrade is highly recommended.
19 |
20 | == Fixes ==
21 |
22 | * Add a systemd service file
23 | * Fix some minor typos in the protocol doc
24 | * Issue 224 - check retval of main event loop
25 | * Fix -c so maxconns can be raised above default.
26 |
27 | == New Features ==
28 |
29 | No new features in this version.
30 |
31 | == Contributors ==
32 |
33 | The following people contributed to this release since 1.4.8.
34 |
35 | Note that this is based on who contributed changes, not how they were
36 | done. In many cases, a code snippet on the mailing list or a bug
37 | report ended up as a commit with your name on it.
38 |
39 | Note that this is just a summary of how many changes each person made
40 | which doesn't necessarily reflect how significant each change was.
41 | For details on what led up into a branch, either grab the git repo and
42 | look at the output of `git log 1.4.8..1.4.9` or use a web view.
43 |
44 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
45 | * Web View: http://github.com/memcached/memcached/commits/1.4.9
46 |
47 | {{{
48 | 3 dormando
49 | 1 Matt Ingenthron
50 | 1 Miklos Vajna
51 | }}}
52 |
53 | == Control ==
54 |
--------------------------------------------------------------------------------
/ReleaseNotes1420.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.20
2 |
3 | = Memcached 1.4.20 Release Notes =
4 |
5 | Date: 2014-5-11
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://www.memcached.org/files/memcached-1.4.20.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Just one tiny change to fix a regression causing threads to lock up and spin
17 | at max CPU.
18 |
19 | 1.4.18 and 1.4.19 were affected. 1.4.17 and earlier were not affected. If you
20 | are on .18 or .19 an upgrade to 1.4.20 is strongly advised.
21 |
22 | Thanks to commando.io for reporting the bug initially and putting up with me
23 | being blind for a few weeks.
24 |
25 | == Fixes ==
26 |
27 | * Fix a race condition that made new connections appear closed, causing an infinite loop.
28 |
29 |
30 | == New Features ==
31 |
32 | None, see 1.4.18 for new interesting features, or 1.4.19 for other useful
33 | bugfixes.
34 |
35 | == Contributors ==
36 |
37 | The following people contributed to this release since 1.4.19.
38 |
39 | Note that this is based on who contributed changes, not how they were
40 | done. In many cases, a code snippet on the mailing list or a bug
41 | report ended up as a commit with your name on it.
42 |
43 | Note that this is just a summary of how many changes each person made
44 | which doesn't necessarily reflect how significant each change was.
45 | For details on what led up into a branch, either grab the git repo and
46 | look at the output of `git log 1.4.19..1.4.20` or use a web view.
47 |
48 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
49 | * Web View: http://github.com/memcached/memcached/commits/1.4.20
50 |
51 | {{{
52 | 1 dormando
53 |
54 | }}}
55 |
56 | == Control ==
57 |
--------------------------------------------------------------------------------
/ReleaseNotes1419.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.19
2 |
3 | = Memcached 1.4.19 Release Notes =
4 |
5 | Date: 2014-5-1
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://www.memcached.org/files/memcached-1.4.19.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 |
17 | == Fixes ==
18 |
19 | * Fix endianness detection during configure.
20 | * Fixes a performance regression with binary protocol (up to 20%)
21 | * Fix rare segfault in incr/decr.
22 | * disable tail_repair_time by default.
23 | * Likely not needed anymore, and can rarely cause bugs.
24 | * use the right hashpower for the item_locks table. Small perf improvement.
25 | * Fix crash for LRU crawler while using lock elision (haswell+ processors)
26 |
27 |
28 | == New Features ==
29 |
30 | See the release notes for 1.4.18 for recent interesting features.
31 |
32 | == Contributors ==
33 |
34 | The following people contributed to this release since 1.4.18.
35 |
36 | Note that this is based on who contributed changes, not how they were
37 | done. In many cases, a code snippet on the mailing list or a bug
38 | report ended up as a commit with your name on it.
39 |
40 | Note that this is just a summary of how many changes each person made
41 | which doesn't necessarily reflect how significant each change was.
42 | For details on what led up into a branch, either grab the git repo and
43 | look at the output of `git log 1.4.18..1.4.19` or use a web view.
44 |
45 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
46 | * Web View: http://github.com/memcached/memcached/commits/1.4.19
47 |
48 | {{{
49 | 9 dormando
50 | 1 Dagobert Michelsen
51 | 1 Eric McConville
52 |
53 | }}}
54 |
55 | == Control ==
56 |
--------------------------------------------------------------------------------
/NewStart.wiki:
--------------------------------------------------------------------------------
1 | #summary Memcached Wiki
2 |
3 | = Memcached =
4 |
5 | Your friendly neighborhood memory cache daemon.
6 |
7 | == About ==
8 |
9 | * [NewOverview What Memcached Is]
10 | * [TutorialCachingStory An Adventure in Learning Memcached]
11 | * [NewCommunication Communicating With Users and Developers] - Mailing list, irc, etc.
12 | * [NewGlossary Glossary of Common Terms]
13 | * [NewLinks Tools and External Links] - Monitoring programs, tools, presentations, etc.
14 |
15 | == Installation ==
16 |
17 | * [NewInstallFromPackage Installing From Packages]
18 | * [NewInstallFromSource Installing From Source]
19 | * [ReleaseNotes Recent Release Notes]
20 | * [NewClients Finding Clients]
21 | * [NewHardware Hardware Considerations]
22 |
23 | == Configuration ==
24 |
25 | * [NewConfiguringServer Configuring Memcached]
26 | * [NewConfiguringClient Client Configurations]
27 | * [NewClustering Cluster Configurations]
28 |
29 | == Using Memcached ==
30 |
31 | * [NewCommands Protocol Commands]
32 | * [NewCommonFeatures Common Client Features]
33 | * [NewProgramming Using Memcached Basics]
34 | * [NewProgrammingFAQ Memcached Usage FAQ]
35 | * [NewProgrammingTricks Programming HOWTO's and Tricks]
36 | * [NewUserInternals Memcached Internals for End Users]
37 |
38 | == Maintenance ==
39 |
40 | * [NewServerMaint Maintaining a Server]
41 | * [NewClusterMaint Maintaining a Cluster]
42 | * [NewPerformance What To Expect With Performance]
43 |
44 | == Memcached Development ==
45 |
46 | * [DevelopmentRepos Source Code]
47 | * [NewDevelopmentProcess Process and Roadmap]
48 | * [NewInternals Memcached Internals for Developers]
49 | * [BuildFarm Build Farm]
50 | * [TestSuites Test Suites]
51 | * [NewProposals Proposals and RFCs]
52 | * [NewProtocols Protocols]
53 |
--------------------------------------------------------------------------------
/ReleaseNotes1412.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.12
2 |
3 | = Memcached 1.4.12 Release Notes =
4 |
5 | Date: 2012-2-1
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.12.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Fix a small number of bugs, mostly in building on different platforms.
17 |
18 | For the real meat, see [ReleaseNotes1411 1.4.11 Release Notes]
19 |
20 | == Fixes ==
21 |
22 | * fix glitch with flush_all (exptime)
23 | * Skip SASL tests unless RUN_SASL_TESTS is defined.
24 | * Look around for saslpasswd2 (typically not in the user's path).
25 | * build fix: Define sasl_callback_ft on older versions of sasl.
26 | * fix segfault when sending a zero byte command
27 | * fix warning in UDP test
28 | * properly detect GCC atomics
29 | * tests: loop on short binary packet reads
30 | * fix slabs_reassign tests on 32bit hosts
31 |
32 | == New Features ==
33 |
34 | Fewer bugs!
35 |
36 | == Contributors ==
37 |
38 | The following people contributed to this release since 1.4.11.
39 |
40 | Note that this is based on who contributed changes, not how they were
41 | done. In many cases, a code snippet on the mailing list or a bug
42 | report ended up as a commit with your name on it.
43 |
44 | Note that this is just a summary of how many changes each person made
45 | which doesn't necessarily reflect how significant each change was.
46 | For details on what led up into a branch, either grab the git repo and
47 | look at the output of `git log 1.4.11..1.4.12` or use a web view.
48 |
49 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
50 | * Web View: http://github.com/memcached/memcached/commits/1.4.12
51 |
52 | {{{
53 | 5 Dustin Sallings
54 | 5 dormando
55 | }}}
56 |
57 | == Control ==
58 |
--------------------------------------------------------------------------------
/ReleaseNotes1421.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.21
2 |
3 | = Memcached 1.4.21 Release Notes =
4 |
5 | Date: 2014-10-12
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://www.memcached.org/files/memcached-1.4.21.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 |
17 | == Fixes ==
18 |
19 | * makefile cleanups
20 | * Avoid OOM errors when locked items stuck in tail
21 |
22 | If clients occasionally fetch many items (more than can fit in the TCP buffers), then hang for a very long period of time, that slab class could hit OOM. In older versions this could cause a crash; since 1.4.20 it causes OOM errors.
23 |
24 | Now, if a locked item lands in the LRU tail, it will be bumped back to the head and an lrutail_reflocked counter incremented. If you're concerned about having stuck clients, watch that counter.
25 |
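One simple way to watch that counter from a shell (the hostname is a placeholder):

{{{
printf 'stats\r\nquit\r\n' | nc cache01 11211 | grep lrutail_reflocked
}}}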
26 | Big thanks to Jay Grizzard et al. at Box for helping track this down!
27 |
28 | == New Features ==
29 |
30 | None.
31 |
32 | == Contributors ==
33 |
34 | The following people contributed to this release since 1.4.20.
35 |
36 | Note that this is based on who contributed changes, not how they were
37 | done. In many cases, a code snippet on the mailing list or a bug
38 | report ended up as a commit with your name on it.
39 |
40 | Note that this is just a summary of how many changes each person made
41 | which doesn't necessarily reflect how significant each change was.
42 | For details on what led up into a branch, either grab the git repo and
43 | look at the output of `git log 1.4.20..1.4.21` or use a web view.
44 |
45 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
46 | * Web View: http://github.com/memcached/memcached/commits/1.4.21
47 |
48 | {{{
49 | 4 Steve Wills
50 | 3 dormando
51 | 1 Jay Grizzard
52 |
53 | }}}
54 |
55 | == Control ==
56 |
--------------------------------------------------------------------------------
/Tools.wiki:
--------------------------------------------------------------------------------
1 | #summary memcached related tools
2 |
3 | = Introduction =
4 |
5 | This is the place to go for all things related to memcached tools. If you have something you want to add, drop a line to the mailing list and we'll get it here.
6 |
7 | = List of Interesting Memcached Tools =
8 |
9 | * `Cacti Templates`: Templates that let you use cacti to monitor your memcached instances.
10 | * http://code.google.com/p/mysql-cacti-templates/wiki/MemcachedTemplates
11 | * http://dealnews.com/developers/cacti/memcached.html
12 | * `Dormando's awesome Memcached Top`
13 | * http://github.com/dormando/damemtop
14 | * `mem-top`: A command-line tool that grabs real-time stats from memcache.
15 | * http://code.google.com/p/memcache-top
16 | * `Munin plugin`: Graphs of traffic/items/memory/etc via Munin
17 | * https://github.com/mhwest13/Memcached-Munin-Plugin
18 | * `Wireshark`: now understands memcached binary protocol.
19 | * [http://www.wireshark.org/download/automated]
20 | * A plugin for wireshark that understands the textual protocol.
21 | * [http://dev.gear6.com/downloads/wireshark-memcache-protocol-dissector]
22 | * `mk-query-digest`: Maatkit's mk-query-digest understands the textual protocol and can do advanced analysis of the queries.
23 | * [http://www.maatkit.org/doc/mk-query-digest.html]
24 | * `Mempowered`: Windows powershell frontend to memcached
25 | * http://mempowered.codeplex.com/
26 | * `Brutis`: A tool designed to exercise memcache instances.
27 | * http://code.google.com/p/brutis
28 | * `Stats Proxy`: Allows you to view real-time status via your web browser
29 | * http://code.google.com/p/statsproxy
30 | * `Advanced Reporter`: Provides information about the most active clients and object keys for your memcached servers.
31 | * http://code.google.com/p/advancedreporter
--------------------------------------------------------------------------------
/BuildFarm.wiki:
--------------------------------------------------------------------------------
1 | #summary Memcached's Build Farm
2 |
3 | = The Build Farm =
4 |
5 | Link to the [http://builds.memcached.org/waterfall?category=memcached memcached build farm] waterfall status page.
6 |
7 | Memcached builds are tested against a variety of hosts and configurations via [http://buildbot.net/ buildbot] before and after every bit of code is accepted.
8 |
9 | = Contributing =
10 |
11 | If you'd like to contribute, please install the latest version of the following tools available for your system:
12 |
13 | == Software Requirements ==
14 |
15 | * git
16 | * autotools (automake, autoconf)
17 | * libevent
18 | * gnu make
19 | * C compiler
20 | * buildbot
21 |
22 | == Availability Requirements ==
23 |
24 | We place no strict uptime requirements on build slaves, but it would be nice if they were generally available. If your slave goes down, we'll try to get hold of you to get it running again.
25 |
26 | == System Impact ==
27 |
28 | The build process for memcached is currently fairly lightweight and buildbot's slave overhead is minimal.
29 |
30 | Builds may be run most frequently during development cycles after a developer has performed a local test, and will always follow a push to the central repo.
31 |
32 | If you are providing two builders for a host (e.g. ± some feature), you may ask us not to run those concurrently. For special cases, you may also request that two seemingly unrelated builders do not run all or part of a build concurrently.
33 |
34 | Before a build, the minimal set of git objects required to represent a build tree are retrieved via git and an optional patch *may* be included inline.
35 |
36 | After a successful build, the tested memcached binary is gzipped and retrieved from the system.
37 |
38 | == Setting It Up ==
39 |
40 | Once that's all in place, contact the list and let us know you're ready to contribute. One of us will work with you on making it part of the official builds.
41 |
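As a very rough sketch of what the slave side looks like once details are agreed on with the list (the directory, master address, slave name, and password below are all placeholders you'll get from us):

{{{
# create and start a buildbot slave pointed at the memcached build master
buildslave create-slave ./memcached-slave builds.memcached.org:9989 myslave mypassword
buildslave start ./memcached-slave
}}}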
--------------------------------------------------------------------------------
/HowToLearnMoreScalability.wiki:
--------------------------------------------------------------------------------
1 | #summary Learn more about scalability
2 |
3 | * [http://www.slideshare.net/vishnu/livejournals-backend-a-history-of-scaling Livejournal scaling] (The classic, 2005)
4 | * [http://www.slideshare.net/SergeyChernyshev/behind-the-scenes-at-livejournal-scaling-storytime-158352 USENIX 2007 version]
5 | * [http://www.slideshare.net/techdude/scalable-web-architectures-common-patterns-and-approaches Flickr scaling talk - Scalable web architectures] (The new classic)
6 | * [http://assets.en.oreilly.com/1/event/7/Capacity%20Management%20Presentation.ppt Flickr capacity management]
7 | * [http://www.scribd.com/doc/21793/Scaling-PHPMySQL-Presentation-from-Flickr Flickr hardware scaling] (2005)
8 | * [http://mashraqi.com/2008/07/memcached-for-mysql-advanced-use-cases_09.html Fotolog scaling]
9 | * [http://errtheblog.com/static/pdfs/memcached.pdf CNet scaling] (PDF, Ruby/Rails oriented)
10 | * [http://www.scribd.com/doc/49575/Scaling-Rails-Presentation Scribd scaling]
11 | * [http://www.slideshare.net/acme/scaling-with-memcached Scaling with memcached]
12 | * [http://www.scribd.com/doc/43868/Wikipedia-site-internals-workbook-2007 Wikipedia scaling] - has a few pages about caching and their memcached usage (2007)
13 | * Adventures in scaling - poocs.net
14 | * [http://poocs.net/2006/3/13/the-adventures-of-scaling-stage-1 Part 1]
15 | * [http://poocs.net/2006/3/20/the-adventures-of-scaling-stage-2 Part 2]
16 | * [http://poocs.net/2006/3/27/the-adventures-of-scaling-stage-3 Part 3]
17 | * [http://poocs.net/2006/4/3/the-adventures-of-scaling-stage-4 Part 4]
18 | * [http://highscalability.com/ highscalability.com - blog about scaling]
19 | * [http://www.jurriaanpersyn.com/archives/2009/02/12/database-sharding-at-netlog-with-mysql-and-php Netlog.com sharding] - with description of memcached role (2009)
20 | * [http://mysqldba.blogspot.com/2008/09/how-do-you-know-when-you-need-more.html memcached at RockYou] - how do you know when you need more memcached (2008)
--------------------------------------------------------------------------------
/ReleaseNotes1415.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.15
2 |
3 | = Memcached 1.4.15 Release Notes =
4 |
5 | Date: 2012-9-3
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.15.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This is a somewhat experimental release which pushes thread performance even
17 | more than before. Since this *is* a more experimental release than usual, and
18 | contains no other major fixes or features, we urge some caution for important
19 | deployments. We feel as though it is high quality software, but please take
20 | caution and do slow rollouts or testing. Thanks!
21 |
22 | == Fixes ==
23 |
24 | * Add some mild thread documentation
25 | * README.md was missing from dist tarball
26 | * Issue 286: --disable-coverage drops "-pthread" option
27 | * Reduce odds of getting OOM errors in some odd cases
28 |
29 | == New Features ==
30 |
31 | Thread scalability is much improved for reads, and somewhat improved for
32 | writes. In a pure read-only situation on a dual socket six core NUMA machine
33 | I've tested key fetch rates around 13.6 million keys per second.
34 |
35 | More tuning is necessary and you'd get significant lag at that rate, but that
36 | shows the theoretical limit of the locks.
37 |
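If you want to experiment with more worker threads, the relevant knob is `-t` (the values below are just examples; the default is 4 worker threads):

{{{
memcached -t 8 -m 2048
}}}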
38 | == Contributors ==
39 |
40 | The following people contributed to this release since 1.4.14.
41 |
42 | Note that this is based on who contributed changes, not how they were
43 | done. In many cases, a code snippet on the mailing list or a bug
44 | report ended up as a commit with your name on it.
45 |
46 | Note that this is just a summary of how many changes each person made
47 | which doesn't necessarily reflect how significant each change was.
48 | For details on what led up into a branch, either grab the git repo and
49 | look at the output of `git log 1.4.14..1.4.15` or use a web view.
50 |
51 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
52 | * Web View: http://github.com/memcached/memcached/commits/1.4.15
53 |
54 | {{{
55 | 6 dormando
56 | 1 Trond Norbye
57 | }}}
58 |
59 | == Control ==
60 |
--------------------------------------------------------------------------------
/NewInstallFromPackage.wiki:
--------------------------------------------------------------------------------
1 | #summary Installing Memcached Binaries
2 |
3 |
4 |
5 | The version of memcached you install matters a lot for what support will be available to you. Ancient versions lack bugfixes, statistics, etc. Commands may be missing, and we may not be able to assist you if the software is too old. Try to have at least 1.4.4 or higher, if that's not too difficult ;)
6 |
7 | = Dependencies =
8 |
9 | Memcached is a C program that builds with a recent version of GCC and depends on a recent version of [http://www.monkey.org/~provos/libevent/ libevent]. The recommended method of installation is to first try your distribution's package manager. If the version it contains is too old, you may have to try installing from a backport, or from source.
10 |
11 | = Installing From Your Distribution =
12 |
13 | == Ubuntu & Debian ==
14 | {{{
15 | apt-get install memcached
16 | }}}
17 |
18 | You will also need libevent installed, and apt should fetch that for you.
19 |
20 | Be warned that most versions of Ubuntu and Debian have very old versions of memcached packaged. We're working on improving this situation. If you are a Debian user, [http://backports.org Debian Backports] may have a more recent version available.
21 |
22 | == Redhat/Fedora ==
23 | {{{
24 | yum install memcached
25 | }}}
26 |
27 | Pretty easy eh? Sadly you're likely to pull an old version still.
28 |
29 | == FreeBSD ==
30 |
31 | {{{
32 | portmaster databases/memcached
33 | }}}
34 |
35 | (or substitute whatever ports management tool you use)
36 |
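Whichever route you take, a quick way to check which version you ended up with (the version shown is just an example):

{{{
$ memcached -h | head -n 1
memcached 1.4.24
}}}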
37 | = Installing Clients =
38 |
39 | Some of the popular clients will likely be available in your distribution. Search with `apt` or `yum` and see what you can find!
40 |
41 | == libmemcached ==
42 |
43 | Most languages have one or two main clients which depend on [http://libmemcached.org libmemcached]. This is the standard C library for accessing memcached-speaking servers. Some clients will bundle a compatible version, and some will require it to be installed separately.
44 |
45 | == PEAR/CPAN/GEM/etc ==
46 |
47 | Don't forget to check the standard repositories for your preferred language. Installing a client might be a simple command or two.
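For instance, installing a client often looks something like the following; the package names vary by language, and these are just common examples rather than an official list:

{{{
pecl install memcached        # PHP (binds to libmemcached)
cpanm Cache::Memcached        # Perl
gem install dalli             # Ruby
pip install pymemcache        # Python
}}}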
--------------------------------------------------------------------------------
/ReleaseNotes141rc1.wiki:
--------------------------------------------------------------------------------
1 | #summary Release Notes for Release 1.4.1-rc1
2 |
3 | = Memcached 1.4.1 RC1 Release Notes =
4 |
5 | Date: 2009-08-26 Wed
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.1-rc1.tar.gz
12 |
13 | == Overview ==
14 |
15 | This is a maintenance release consisting primarily of bug fixes.
16 |
17 | == Fixes ==
18 |
19 | === Critical Fixes ===
20 |
21 | * Boundary condition during pipelined decoding caused crash (bug72)
22 | * Bad initialization during buffer realloc (bug77)
23 | * Buffer overrun in stats_prefix_find (bug79)
24 | * Memory corruption from negative and invalid item lengths (bug70)
25 |
26 | === Non-critical Fixes ===
27 |
28 | * Update flush stats for binary flushes (bug71)
29 | * Build fixes for OpenBSD
30 | * Build fixes for Solaris/gcc
31 | * Cleanup warnings brought to us by OpenBSD (sprintf, etc...)
32 | * Lots of fixes with the test tools
33 | * Various documentation cleanups
34 |
35 | == New Features ==
36 |
37 | * stats slabs returns the number of requested bytes as mem_requested
38 | * memcached can bind to ephemeral ports (used for testing)
39 |
40 | == Contributors ==
41 |
42 | The following people contributed to this release since 1.4.0.
43 |
44 | Note that this is based on who contributed changes, not how they were
45 | done. In many cases, a code snippet on the mailing list or a bug
46 | report ended up as a commit with your name on it.
47 |
48 | Note that this is just a summary of how many changes each person made
49 | which doesn't necessarily reflect how significant each change was.
50 | For details on what led up into a branch, either grab the git repo and
51 | look at the output of `git log 1.4.0..1.4.1-rc1` or use a web view.
52 |
53 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
54 | * Web View: http://github.com/memcached/memcached/commits/1.4.1-rc1
55 |
56 | {{{
57 | 16 Dustin Sallings
58 | 8 Trond Norbye
59 | 2 dormando
60 | 1 Mat Hostetter
61 | 1 Matt Ingenthron
62 | 1 Monty Taylor
63 | 1 Steve Yen
64 | 1 Adam Thomason
65 | 1 Cosimo Streppone
66 | }}}
--------------------------------------------------------------------------------
/ReleaseNotes141.wiki:
--------------------------------------------------------------------------------
1 | #summary Release Notes for Release 1.4.1
2 |
3 | = Memcached 1.4.1 Release Notes =
4 |
5 | Date: 2009-08-29 Sat
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.1.tar.gz
12 |
13 | == Overview ==
14 |
15 | This is a maintenance release consisting primarily of bug fixes.
16 |
17 | == Fixes ==
18 |
19 | === Critical Fixes ===
20 |
21 | * Boundary condition during pipelined decoding caused crash (bug72)
22 | * Bad initialization during buffer realloc (bug77)
23 | * Buffer overrun in stats_prefix_find (bug79)
24 | * Memory corruption from negative and invalid item lengths (bug70)
25 |
26 | === Non-critical Fixes ===
27 |
28 | * Update flush stats for binary flushes (bug71)
29 | * Build fixes for OpenBSD
30 | * Build fixes for Solaris/gcc
31 | * Cleanup warnings brought to us by OpenBSD (sprintf, etc...)
32 | * Lots of fixes with the test tools
33 | * Various documentation cleanups
34 | * RPM spec autoupdate
35 |
36 | == New Features ==
37 |
38 | * stats slabs returns the number of requested bytes as mem_requested
39 | * memcached can bind to ephemeral ports (used for testing)
40 |
41 | == Contributors ==
42 |
43 | The following people contributed to this release since 1.4.0.
44 |
45 | Note that this is based on who contributed changes, not how they were
46 | done. In many cases, a code snippet on the mailing list or a bug
47 | report ended up as a commit with your name on it.
48 |
49 | Note that this is just a summary of how many changes each person made
50 | which doesn't necessarily reflect how significant each change was.
51 | For details on what led up into a branch, either grab the git repo and
52 | look at the output of `git log 1.4.0..1.4.1` or use a web view.
53 |
54 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
55 | * Web View: http://github.com/memcached/memcached/commits/1.4.1
56 |
57 | {{{
58 | 18 Dustin Sallings
59 | 8 Trond Norbye
60 | 2 dormando
61 | 1 Mat Hostetter
62 | 1 Adam Thomason
63 | 1 Monty Taylor
64 | 1 Steve Yen
65 | 1 Matt Ingenthron
66 | 1 Cosimo Streppone
67 | 1 James Cohen
68 | }}}
--------------------------------------------------------------------------------
/ReleaseNotes1422.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.22
2 |
3 | = Memcached 1.4.22 Release Notes =
4 |
5 | Date: 2014-12-31
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://www.memcached.org/files/memcached-1.4.22.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Bugfix maintenance release. The hash table expansion fix now briefly pauses all threads while the hash table pointers are swapped. Once swapped, it unlocks and uses locks as normal.
17 |
18 | In previous versions, the hash table was switched to a global lock instead of a map of bucket locks during expansion. The new approach should be faster overall, with a small latency penalty during the swap. It's also possible to presize the hash table with `-o hashpower`.
19 |
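For example, presizing at startup might look like this (the values are just examples, not recommendations):

{{{
# hashpower=20 preallocates a hash table of roughly 2^20 buckets
memcached -m 4096 -o hashpower=20
}}}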
20 | == Fixes ==
21 |
22 | * gatkq: return key in response
23 | * Handle SIGTERM the same as SIGINT
24 | * Fix off-by-one causing segfault in lru_crawler
25 | * Fix potential corruption for incr/decr of 0b items
26 | * Fix issue #369 - uninitialized stats_lock
27 | * issue#370: slab re-balance is not thread-safe in function do_item_get
28 | * Fix potential corruption in hash table expansion
29 | * use item lock instead of global lock when hash expanding
30 | * fix hang when lru crawler started from commandline
31 | * rename thread_init to avoid runtime failure on AIX
32 | * Support -V (version option)
33 |
34 |
35 | == New Features ==
36 |
37 |
38 | == Contributors ==
39 |
40 | The following people contributed to this release since 1.4.21.
41 |
42 | Note that this is based on who contributed changes, not how they were
43 | done. In many cases, a code snippet on the mailing list or a bug
44 | report ended up as a commit with your name on it.
45 |
46 | Note that this is just a summary of how many changes each person made
47 | which doesn't necessarily reflect how significant each change was.
48 | For details on what led up into a branch, either grab the git repo and
49 | look at the output of `git log 1.4.21..1.4.22` or use a web view.
50 |
51 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
52 | * Web View: http://github.com/memcached/memcached/commits/1.4.22
53 |
54 | {{{
55 | 6 dormando
56 | 2 Jason CHAN
57 | 1 Dan McGee
58 | 1 Menghan
59 | 1 Mike Dillon
60 | 1 Oskari Saarenmaa
61 | 1 clark.kang
62 | 1 mckelvin
63 |
64 | }}}
65 |
66 | == Control ==
67 |
--------------------------------------------------------------------------------
/BoothGuide.wiki:
--------------------------------------------------------------------------------
1 | #summary Memcached booth guide
2 |
3 | *TENTATIVE NOTES*
4 |
5 | [Advocacy From the Advocacy page]
6 |
7 |
8 |
9 | = Organize =
10 |
11 | == Group organizer ==
12 |
13 | A single organizer must be responsible for the group from end to end. The organizer is required to inform the memcached mailing list of their intent to host a booth prior to registering with the convention. This rules out any possible duplication of effort and confusion, and helps get more people involved earlier in the process.
14 |
15 | == Paperwork ==
16 |
17 | The event organizers should provide you with a timeline for booth space requests & let you know what information they need. Make sure you fill out forms on time, and keep copies for yourself. If you miss a form, you can end up without a booth. Depending on the booth, tax forms and other paperwork may be required as well.
18 |
19 | The conference organizers should also send you (or make available on their website) the following information:
20 |
21 | * size of your booth (this will help you plan which displays are appropriate for the space)
22 | * number of tables & chairs provided/allowed
23 | * location of power & network drops
24 | * times you can access the event hall outside of event hours for setup/teardown
25 |
26 | == Booth staff ==
27 |
28 | Recruit enough volunteers to keep at least 1-2, preferably 2-3 people at the booth at all times. Do this at least several weeks beforehand. You will need to ensure the convention has enough vendor passes available for all folks who otherwise could not attend the conference.
29 |
30 | = Prepare =
31 |
32 | == Literature ==
33 |
34 | Need to flesh this out.
35 |
36 | == Swag ==
37 |
38 | Floppies were awesome! Fliers are cool too. We have no use for paid stuff yet.
39 |
40 | == Banners/Signs/etc ==
41 |
42 | Etc.
43 |
44 | == Demo ==
45 |
46 | Notes notes.
47 |
48 | = Executing =
49 |
50 | == Setup and Takedown ==
51 |
52 | Most venues will require you to "hand-carry" in your materials or pay through the nose for professional help. This means that your entire booth needs to break down into boxes and bags which can be carried by a regular-strength geek onto a standard escalator. Folding dollies, or wheeled luggage of a "carry-on" size are generally also allowed but full-size dollies seldom are.
53 |
54 | == Uniform ==
55 |
56 | Wear memcached.org-related clothing if available. Otherwise, avoid clothing that brands commercial competitors of memcached.
57 |
58 | == Supplies ==
59 |
60 | Don't forget tape and stuff (TBD)
--------------------------------------------------------------------------------
/ReleaseNotes144.wiki:
--------------------------------------------------------------------------------
1 | #summary Release Notes for Release 1.4.4
2 |
3 | = Memcached 1.4.4 Release Notes =
4 |
5 | Date: 2009-11-26 Thu
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.4.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This is a maintenance release of memcached with a workaround for a
17 | common client issue, as well as a few new stats.
18 |
19 | == Fixes ==
20 |
21 | === Add partial backwards compatibility for delete with timeout 0 ===
22 |
23 | Before version 1.4.0, delete took an optional time argument that allowed
24 | a client to specify that a deleted object should remain in the cache for
25 | some time after the deletion, such that add operations would fail even
26 | though the object no longer appeared in get results.
27 |
28 | This feature was removed completely in 1.4.0, but a parser bug caused
29 | it to slip through. The bug was fixed in 1.4.3. If anyone was
30 | attempting to use it legitimately in the 1.4 series, it would simply
31 | not work as expected.
32 |
33 | The 1.4.4 backwards compatibility change allows specifically the value
34 | of 0 (i.e. non-lingering delete), while continuing to reject others.
35 | This will satisfy clients that always wish to send a value even when
36 | they do not wish the item to linger.
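A hedged sketch of what this looks like on the ASCII protocol (the key name is made up, DELETED assumes the key existed, and the exact error text for the rejected form is omitted):
{{{
delete foo 0        <- accepted in 1.4.4+, behaves like a plain delete
DELETED
delete foo 30       <- any non-zero time is still rejected with a client error
}}}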
37 |
38 | == New Features ==
39 |
40 | === New Stats ===
41 |
42 | ==== auth_enabled_sasl ====
43 |
44 | This is a general stat that indicates whether SASL authentication is
45 | enabled or not.
46 |
47 | ==== auth_cmds ====
48 |
49 | Indicates the total number of authentication attempts.
50 |
51 | ==== auth_errors ====
52 |
53 | Indicates the number of failed authentication attempts.
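A quick, hedged way to eyeball these counters on a running instance (host, port, and the sample values are illustrative):
{{{
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | grep auth
# STAT auth_enabled_sasl yes
# STAT auth_cmds 3
# STAT auth_errors 1
}}}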
54 |
55 | == Contributors ==
56 |
57 | The following people contributed to this release since 1.4.3.
58 |
59 | Note that this is based on who contributed changes, not how they were
60 | done. In many cases, a code snippet on the mailing list or a bug
61 | report ended up as a commit with your name on it.
62 |
63 | Note that this is just a summary of how many changes each person made
64 | which doesn't necessarily reflect how significant each change was.
65 | For details on what led up into a branch, either grab the git repo and
66 | look at the output of `git log 1.4.3..1.4.4` or use a web view.
67 |
68 | * Repo list: [http://code.google.com/p/memcached/wiki/DevelopmentRepos]
69 | * Web View: [http://github.com/memcached/memcached/commits/1.4.4]
70 |
71 | {{{
72 | 2 Dustin Sallings
73 | 2 Matt Ingenthron
74 | 1 dormando
75 | }}}
76 |
--------------------------------------------------------------------------------
/MemcachedOffspring.wiki:
--------------------------------------------------------------------------------
1 | #summary Open source software projects forked/inspired from, related to memcached
2 |
3 | == Forks and patchsets ==
4 |
5 | * http://code.google.com/p/memcachedb - A distributed key-value storage system designed for persistence, based on memcached, tugela and bdb.
6 | * http://code.google.com/p/dbcached - memcached + qdbm/berkeley-db
7 | * http://meta.wikimedia.org/wiki/Tugela_Cache - memcached + persistence
8 | * http://repcached.lab.klab.org - memcached + replication
9 | * http://code.google.com/p/memcached-pro
10 |
11 | == Reimplementations ==
12 |
13 | * http://code.google.com/p/spcached/
14 | * http://tokyocabinet.sourceforge.net/tyrantdoc/ - HTTP and memcached protocol networking frontend to tokyo cabinet
15 | * Java versions
16 | * http://www.thimbleware.com/projects/jmemcached
17 | * http://code.google.com/p/jemcache/
18 | * http://code.google.com/p/gmemcached - on Grizzly NIO framework
19 | * http://www.jgroups.org/javagroupsnew/docs/memcached/memcached.html - jgroups version of memcached
20 | * http://www.infoq.com/news/2008/10/jgroups-memcached
21 | * http://code.google.com/p/actord - scala version of memcached
22 | * http://code.google.com/p/cacherl - erlang version of memcached
23 | * Queues based on memcached protocol
24 | * http://memcachedb.org/memcacheq - queue based on memcached protocol
25 | * http://code.google.com/p/memqd
26 | * http://rubyforge.org/projects/starling - ruby based, from twitter
27 | * http://robey.lag.net/2008/11/27/scarling-to-kestrel.html - scala based queue, from twitter
28 | * http://code.google.com/p/sparrow - ruby based
29 | * http://code.google.com/p/peafowl - python based
30 | * http://xph.us/software/beanstalkd - incompatible protocol, but inspired by the memcached protocol
31 | * http://labs.gree.jp/Top/OpenSource/Flared.html - yet another memcached
32 | * http://overlayweaver.sourceforge.net - research on overlay networks, recently speaks memcached protocol
33 |
34 | == Memcached proxies ==
35 |
36 | * http://github.com/northscale/moxi
37 | * http://code.google.com/p/memagent
38 |
39 | == Other ==
40 |
41 | * memcached session manager
42 | * http://code.google.com/p/memcached-session-manager/
43 | * http://memcachefs.sourceforge.net
44 | * http://github.com/andrewfromgeni/mcinsight - GUI to examine memcached server
45 | * Accessing memcached directly from Apache
46 | * http://tangent.org/608/mod_memcached.html
47 | * http://code.google.com/p/modmemcachecache/
48 | * HTTP caching frontend to memcached
49 | * http://tomayko.com/writings/rack-cache-announce - Rack::Cache
--------------------------------------------------------------------------------
/NewInstallFromSource.wiki:
--------------------------------------------------------------------------------
1 | #summary Using the Source, Luke
2 |
3 |
4 |
5 | = Why Build From Source =
6 |
7 | Before you build from source, consider why. If you have a perfectly good package of a recent version, you're better off using that.
8 |
9 | = Building From Source =
10 | == Prereqs ==
11 | You'll likely need to install the development package for libevent
12 | * *Ubuntu:* {{{apt-get install libevent-dev}}}
13 | * *Redhat/Fedora:* {{{yum install libevent-devel}}}
14 | == Get ==
15 | {{{
16 | wget http://memcached.org/latest
17 | tar -zxvf memcached-1.x.x.tar.gz
18 | cd memcached-1.x.x
19 | }}}
20 |
21 | == Config ==
22 | === Optional install destination ===
23 | If you're compiling from source, you likely want to specify a destination directory as well; replace {{{/usr/local/memcached}}} with whatever you fancy.
24 | {{{
25 | ./configure --prefix=/usr/local/memcached
26 | }}}
27 | == Make and install ==
28 | {{{
29 | make && make test
30 | sudo make install
31 | }}}
32 |
33 | If you wish to build with SASL support, ensure the cyrus-sasl libraries are built and run `./configure --enable-sasl`. See the [SASLHowto] for more information.
34 |
35 | = To Build a Package, or `make install` ? =
36 |
37 | If you're deploying memcached to more than one server, you probably really want to package it. That way you may have cleaner updates, easy uninstalls, easy re-installs, future installs, etc. `make install` is for developers and chumps.
38 |
39 | == Building an RPM ==
40 |
41 | The memcached source tarball contains a workable .spec file. To use it, create a build directory for RPM and compile memcached using the commands below. *Do not* run this as root, as the tests will not pass.
42 | {{{
43 | echo "%_topdir /home/you/rpmbuild" >> ~/.rpmmacros
44 | mkdir -p /home/you/rpmbuild/{SPECS,BUILD,SRPMS,RPMS,SOURCES}
45 | wget http://memcached.org/latest
46 | rpmbuild -ta memcached-1.x.x.tar.gz
47 | }}}
48 | You will need gcc and libevent-devel installed. (`yum install gcc libevent libevent-devel`)
49 |
50 | Then install the RPM via a standard `rpm -Uvh memcached-etc.rpm`
51 |
52 | == Building a deb ==
53 |
54 | TODO: this section
55 |
56 | = Building clients =
57 |
58 | Note that many clients depend on [http://libmemcached.org libmemcached]. They either include it in their sources, or require an external build. You can follow the above practices for fetching and installing libmemcached as well.
59 |
60 | == PEAR/CPAN/GEM/etc ==
61 |
62 | If you're building from source, remember in particular that most major languages have package distribution systems which make installation easy.
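A few hedged examples; the package and module names below are common client choices, not an official list, and vary by language:
{{{
pecl install memcached       # PHP extension (builds against libmemcached)
cpanm Cache::Memcached       # Perl client from CPAN
gem install dalli            # Ruby client
pip install pymemcache       # Python client
}}}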
--------------------------------------------------------------------------------
/ReleaseNotes147.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.7
2 |
3 | = Memcached 1.4.7 Release Notes =
4 |
5 | Date: 2011-08-16
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.7.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This is a maintenance release with many small bugfixes. Now (mostly) immune
17 | from time travelers.
18 |
19 | == Fixes ==
20 |
21 | * Use a monotonically increasing timer
22 | * Immediately expire items when given a negative expiration time
23 | * fix memcached-tool to print about all slabs
24 | * Properly daemonize memcached for debian
25 | * Don't permanently close UDP listeners on error
26 | * Allow memcached-init to start multiple instances (not recommended)
27 | * Issue 214: Search for network libraries before searching for libevent
28 | * Issue 213: Search for clock_gettime in librt
29 | * Issue 115: account for CAS in item_size_ok
30 | * Fix incredibly slim race for maxconns handler. Should no longer hang ever
31 | * Issue 183 - Reclaim items dead by flush_all
32 | * Issue 200: Don't fire dtrace probe as the last thing in a function
33 |
34 |
35 | == New Features ==
36 |
37 | === Monotonic Clock ===
38 |
39 | This isn't really a feature, but is the main change. If your system has
40 | clock_gettime with CLOCK_MONOTONIC support, memcached will attempt to use it.
41 | If your clock does wild adjustments, memcached will do its best to continue to
42 | count forward and not backward.
43 |
44 | However, if you use the "expiration is an absolute time" feature, where
45 | you specify a value's expiration time as a specific date, it can still break.
46 | You must ensure that memcached is started after your clocks have been
47 | synchronized. This has always been the case, though.
48 |
49 | == Contributors ==
50 |
51 | The following people contributed to this release since 1.4.6.
52 |
53 | Note that this is based on who contributed changes, not how they were
54 | done. In many cases, a code snippet on the mailing list or a bug
55 | report ended up as a commit with your name on it.
56 |
57 | Note that this is just a summary of how many changes each person made
58 | which doesn't necessarily reflect how significant each change was.
59 | For details on what led up into a branch, either grab the git repo and
60 | look at the output of `git log 1.4.6..1.4.7` or use a web view.
61 |
62 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
63 | * Web View: http://github.com/memcached/memcached/commits/1.4.7
64 |
65 | {{{
66 | 9 dormando
67 | 6 Trond Norbye
68 | 1 Clint Byrum
69 | 1 Gordon Franke
70 | }}}
71 |
72 |
73 | == Control ==
74 |
--------------------------------------------------------------------------------
/ReleaseNotes146rc1.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.6-rc1
2 |
3 | = Memcached 1.4.6-rc1 Release Notes =
4 |
5 | Date: 2011-07-12
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.6_rc1.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This is a maintenance release with some build fixes, many small bug fixes, and
17 | a few major bug fixes. incr/decr are now actually atomic, and a crash with
18 | hitting the max connection limit while using multiple interfaces has been
19 | fixed.
20 |
21 |
22 | == Fixes ==
23 |
24 | * fix race crash for accepting new connections
25 | * fix incr/decr race conditions for binary prot
26 | * fix incr/decr race conditions for ASCII prot
27 | * Compile fix (-Werror=unused-but-set-variable warnings)
28 | * Bind each UDP socket to a single worker thread in multiport env
29 | * Add support for using multiple ports
30 | * Issue 154: pid file out of sync (created before socket binding)
31 | * Issue 163: Buggy mem_requested values
32 | * Fix cross compilation issues in configure
33 | * Issue 140 - Fix age for items stats
34 | * Issue 131 - ChangeLog is outdated
35 | * Issue 155: bind to multiple interface
36 | * Issue 161 incorrect allocation in cache_create
37 | * Fix type-punning issues exposed with GCC 4.5.1
38 | * Simplify stats aggregation code
39 | * Reverse backward expected/actual params in test
40 | * Issue 152: Fix error message from mget
41 | * Refuse to start if we detect libevent 1.[12]
42 | * Fix compilation issue on Solaris 9 wrt isspace() macro - Resolves issue 111
43 |
44 |
45 | == New Features ==
46 |
47 | === Multiple port binding ===
48 |
49 | You may now specify multiple addresses by listing -l multiple times.
50 |
51 | == Contributors ==
52 |
53 | The following people contributed to this release since 1.4.5.
54 |
55 | Note that this is based on who contributed changes, not how they were
56 | done. In many cases, a code snippet on the mailing list or a bug
57 | report ended up as a commit with your name on it.
58 |
59 | Note that this is just a summary of how many changes each person made
60 | which doesn't necessarily reflect how significant each change was.
61 | For details on what led up into a branch, either grab the git repo and
62 | look at the output of `git log 1.4.5..1.4.6-rc1` or use a web view.
63 |
64 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
65 | * Web View: http://github.com/memcached/memcached/commits/1.4.6-rc1
66 |
67 | {{{
68 | 11 Trond Norbye
69 | 6 dormando
70 | 5 Dan McGee
71 | 1 Jon Jensen
72 | 1 Paul Lindner
73 | }}}
74 |
75 |
76 | == Control ==
77 |
--------------------------------------------------------------------------------
/ReleaseNotes147rc1.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.7-rc1
2 |
3 | = Memcached 1.4.7-rc1 Release Notes =
4 |
5 | Date: 2011-08-10
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.7_rc1.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This is a maintenance release with many small bugfixes. Now (mostly) immune
17 | from time travelers.
18 |
19 | == Fixes ==
20 |
21 | * Use a monotonically increasing timer
22 | * Immediately expire items when given a negative expiration time
23 | * fix memcached-tool to print about all slabs
24 | * Properly daemonize memcached for debian
25 | * Don't permanently close UDP listeners on error
26 | * Allow memcached-init to start multiple instances (not recommended)
27 | * Issue 214: Search for network libraries before searching for libevent
28 | * Issue 213: Search for clock_gettime in librt
29 | * Issue 115: account for CAS in item_size_ok
30 | * Fix incredibly slim race for maxconns handler. Should no longer hang ever
31 | * Issue 183 - Reclaim items dead by flush_all
32 | * Issue 200: Don't fire dtrace probe as the last thing in a function
33 |
34 |
35 | == New Features ==
36 |
37 | === Monotonic Clock ===
38 |
39 | This isn't really a feature, but is the main change. If your system has
40 | clock_gettime with CLOCK_MONOTONIC support, memcached will attempt to use it.
41 | If your clock does wild adjustments, memcached will do its best to continue to
42 | count forward and not backward.
43 |
44 | However, if you use the "expiration is an absolute time" feature, where
45 | you specify a value's expiration time as a specific date, it can still break.
46 | You must ensure that memcached is started after your clocks have been
47 | synchronized. This has always been the case, though.
48 |
49 | == Contributors ==
50 |
51 | The following people contributed to this release since 1.4.6.
52 |
53 | Note that this is based on who contributed changes, not how they were
54 | done. In many cases, a code snippet on the mailing list or a bug
55 | report ended up as a commit with your name on it.
56 |
57 | Note that this is just a summary of how many changes each person made
58 | which doesn't necessarily reflect how significant each change was.
59 | For details on what led up into a branch, either grab the git repo and
60 | look at the output of `git log 1.4.6..1.4.7-rc1` or use a web view.
61 |
62 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
63 | * Web View: http://github.com/memcached/memcached/commits/1.4.7-rc1
64 |
65 | {{{
66 | 9 dormando
67 | 6 Trond Norbye
68 | 1 Clint Byrum
69 | 1 Gordon Franke
70 | }}}
71 |
72 |
73 | == Control ==
74 |
--------------------------------------------------------------------------------
/ReleaseNotes145.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.5
2 |
3 | = Memcached 1.4.5 Release Notes =
4 |
5 | Date: 2010-04-03
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.5.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This is a maintenance release with some build fixes, doc fixes, and
17 | one new stat.
18 |
19 |
20 | == Fixes ==
21 |
22 | * Properly detect CPU alignment on ARM. bug100
23 | * Remove 1MB assertion. bug 119
24 | * More automake versions supported.
25 | * Compiler warning fixes for OpenBSD.
26 | * potential buffer overflow in vperror
27 | * Report errors opening pidfiles using vperror
28 |
29 |
30 | == New Features ==
31 |
32 |
33 | === New stat: reclaimed ===
34 | This stat reports the number of times an entry was stored using memory
35 | from an expired entry.
36 |
37 |
38 | === sasl_pwdb for more simple auth deployments ===
39 |
40 | --enable-sasl-pwdb allows memcached to use its own password file and
41 | verify a plaintext password.
42 |
43 | The file is specified with the environment variable
44 | MEMCACHED_SASL_PWDB, and is a plain text file with the following
45 | syntax:
46 |
47 | {{{
48 | username:password
49 | }}}
50 |
51 | Please note that you have to specify "mech_list: plain" in your sasl
52 | config file for this to work.
53 |
54 | Ex:
55 |
56 | {{{
57 | echo "mech_list: plain" > memcached.conf
58 | echo "myname:mypass" > /tmp/memcached-sasl-db
59 | export MEMCACHED_SASL_PWDB=/tmp/memcached-sasl-db
60 | export SASL_CONF_PATH=`pwd`/memcached.conf
61 | ./memcached -S -v
62 | }}}
63 |
64 | and you should be able to use your favorite memcached client with sasl
65 | support to connect to the server.
66 |
67 | (Please note that not all SASL implementations support
68 | SASL_CB_GETCONF, so you may have to install the sasl config
69 | (memcached.conf) to the systemwide location)
70 |
71 |
72 | == Contributors ==
73 |
74 | The following people contributed to this release since 1.4.4.
75 |
76 | Note that this is based on who contributed changes, not how they were
77 | done. In many cases, a code snippet on the mailing list or a bug
78 | report ended up as a commit with your name on it.
79 |
80 | Note that this is just a summary of how many changes each person made
81 | which doesn't necessarily reflect how significant each change was.
82 | For details on what led up into a branch, either grab the git repo and
83 | look at the output of `git log 1.4.4..1.4.5` or use a web view.
84 |
85 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
86 | * Web View: http://github.com/memcached/memcached/commits/1.4.5
87 |
88 | {{{
89 | 6 Trond Norbye
90 | 3 Paul Lindner
91 | 2 Dustin Sallings
92 | 1 Brad Fitzpatrick
93 | 1 Jørgen Austvik
94 | }}}
95 |
96 |
97 | == Control ==
--------------------------------------------------------------------------------
/ReleaseNotes146.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.6
2 |
3 | = Memcached 1.4.6 Release Notes =
4 |
5 | Date: 2011-07-15
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.6.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This is a maintenance release with some build fixes, many small bug fixes, and
17 | a few major bug fixes. incr/decr are now actually atomic, and a crash with
18 | hitting the max connection limit while using multiple interfaces has been
19 | fixed.
20 |
21 |
22 | == Fixes ==
23 |
24 | * Gcc on Solaris sparc wants -R and not -rpath
25 | * Issue 121: Set the runtime path when --with-libevent is used
26 | * Fix autogen failure when unable to find supported command.
27 | * fix race crash for accepting new connections
28 | * fix incr/decr race conditions for binary prot
29 | * fix incr/decr race conditions for ASCII prot
30 | * Compile fix (-Werror=unused-but-set-variable warnings)
31 | * Bind each UDP socket to a single worker thread in multiport env
32 | * Add support for using multiple ports
33 | * Issue 154: pid file out of sync (created before socket binding)
34 | * Issue 163: Buggy mem_requested values
35 | * Fix cross compilation issues in configure
36 | * Issue 140 - Fix age for items stats
37 | * Issue 131 - ChangeLog is outdated
38 | * Issue 155: bind to multiple interface
39 | * Issue 161 incorrect allocation in cache_create
40 | * Fix type-punning issues exposed with GCC 4.5.1
41 | * Simplify stats aggregation code
42 | * Reverse backward expected/actual params in test
43 | * Issue 152: Fix error message from mget
44 | * Refuse to start if we detect libevent 1.[12]
45 | * Fix compilation issue on Solaris 9 wrt isspace() macro - Resolves issue 111
46 |
47 |
48 | == New Features ==
49 |
50 | === Multiple port binding ===
51 |
52 | You may now specify multiple addresses by listing -l multiple times.
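A hedged example invocation (addresses and port are placeholders):
{{{
# Listen on the loopback and one internal interface at once.
memcached -l 127.0.0.1 -l 192.0.2.10 -p 11211 -m 64
}}}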
53 |
54 | == Contributors ==
55 |
56 | The following people contributed to this release since 1.4.5.
57 |
58 | Note that this is based on who contributed changes, not how they were
59 | done. In many cases, a code snippet on the mailing list or a bug
60 | report ended up as a commit with your name on it.
61 |
62 | Note that this is just a summary of how many changes each person made
63 | which doesn't necessarily reflect how significant each change was.
64 | For details on what led up into a branch, either grab the git repo and
65 | look at the output of `git log 1.4.5..1.4.6` or use a web view.
66 |
67 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
68 | * Web View: http://github.com/memcached/memcached/commits/1.4.6
69 |
70 | {{{
71 | 13 Trond Norbye
72 | 6 dormando
73 | 5 Dan McGee
74 | 2 Paul Lindner
75 | 1 Jon Jensen
76 | 1 nirvanazc
77 | }}}
78 |
79 |
80 | == Control ==
81 |
--------------------------------------------------------------------------------
/ReleaseNotes127.wiki:
--------------------------------------------------------------------------------
1 | #summary Release Notes for Release 1.2.7
2 |
3 | = Memcached 1.2.7 Release Notes =
4 |
5 | Date: 2009-04-03 Fri
6 |
7 |
8 | == Download ==
9 |
10 | Download link:
11 |
12 | http://memcached.googlecode.com/files/memcached-1.2.7.tar.gz
13 |
14 | == Notes ==
15 |
16 | With the release of memcached 1.2.7, the 1.2 tree is now officially in
17 | maintenance mode. Only bugfixes and very minor improvements will be
18 | added to the 1.2 tree. All development is now happening on the 1.3
19 | tree. Efforts are now being made to stabilize the 1.3 tree into a 1.4
20 | series stable release. Please help test :)
21 |
22 | 1.2.7 appears to be a good, stable release, and is a decent farewell
23 | to the codebase that has helped scale many companies.
24 |
25 | -Dormando
26 |
27 | == Features ==
28 |
29 | * UDP/TCP can be disabled by setting their port to 0 (see the example below)
30 | * Can set the listen backlog on the commandline (-b)
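For example, a hedged invocation that keeps TCP but turns off the UDP listener entirely (the memory size is illustrative):
{{{
# -U sets the UDP port; 0 disables UDP completely.
memcached -U 0 -m 64
}}}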
31 |
32 | === Stats ===
33 |
34 | Handful of new stats.
35 |
36 | ==== evicted_time ====
37 |
38 | Under 'stats items', this lists the time since the last evicted object was
39 | last accessed. If an object was evicted a day after it had last been fetched,
40 | you would see 86400 as the time.
41 |
42 | ==== other stats also noted in 1.3.3 ====
43 |
44 | - accepting_conns
45 | - listen_disabled_num
46 | - cmd_flush
47 |
48 | == other improvements also noted in 1.3.3 ==
49 |
50 | - missing key debugging.
51 | - tail repair.
52 |
53 | === tail repair ===
54 |
55 | Tail repair is an important stability fix, and is worth repeating here.
56 |
57 | There is a rare, unidentified reference leak that causes a slab to be
58 | full of invalid objects that cannot be evicted via the LRU nor will
59 | they expire on their own.
60 |
61 | Tail repair is a strategy by which we forcefully evict objects that
62 | are marked as ``in-use'' (that is, in-flight or otherwise being used),
63 | but haven't been accessed in a long time (currently three hours).
64 |
65 | There is an additional stat that comes along with this (tailrepairs on
66 | a slab) that will allow you to detect that this condition has occurred
67 | on one of your slabs.
68 |
69 | == Bugfixes ==
70 | * use a dedicated accept/dispatch thread.
71 | * prevent starvation by busy threads.
72 | * startup crash fix under certain distros.
73 | * better errors/warnings on the listen code.
74 | * fix listen errors in odd setups (no network, ipv4 only, etc).
75 | * ensure udp works in non-threaded mode.
76 | * update CAS on incr/decr properly.
77 | * incr/decr bugfixes.
78 | * improved tests
79 | * make 'stats slabs' used_chunks report correctly
80 |
81 | == Contributors ==
82 |
83 | The following people contributed to this release since 1.2.6. This is not a
84 | measure of the amount of effort per commit, just the total.
85 |
86 | {{{
87 | 18 dormando
88 | 11 Dustin Sallings
89 | 4 Brian Aker
90 | 1 Chris Goffinet
91 | 1 Evan Klitzke
92 | 1 Jonathan Bastien-Filiatrault
93 | 1 Ricky Zhou
94 | }}}
--------------------------------------------------------------------------------
/Resources.wiki:
--------------------------------------------------------------------------------
1 | #summary Resources / Articles / Books
2 |
3 | == Articles ==
4 |
5 | * [http://linuxjournal.com/article/7451 LinuxJournal article by Brad Fitzpatrick] (2004)
6 | * [http://highscalability.com/bunch-great-strategies-using-memcached-and-mysql-better-together fotolog's memcached usage and patterns] (2008)
7 | * [http://www.linuxjournal.com/article/10261 LinuxJournal print column on memcached] (2008)
8 |
9 | == Slides ==
10 |
11 | * [http://www.scribd.com/doc/4069180/Caching-Performance-Lessons-from-Facebook Caching and Performance Lessons from Facebook] (2008)
12 | * [http://www.slideshare.net/vishnu/livejournals-backend-a-history-of-scaling LiveJournal's Backend A History of Scaling]
13 | * [http://mashraqi.com/2008/07/memcached-for-mysql-advanced-use-cases_09.html Memcached for Mysql Advanced Use Cases] by Farhan Mashraqi of Fotolog, 2008
14 | * [http://assets.en.oreilly.com/1/event/2/Just-In-Time%20Scalability_%20Agile%20Methods%20to%20Support%20Massive%20Growth%20Presentation.ppt IMVU's scalability approach: sharding, memcached] (2008) [http://www.speakeasy.org/%7Eericries/Just-In-Time%20Scalability_%20Agile%20Methods%20to%20Support%20Massive%20Growth%20Presentation.pdf PDF version]
15 | * [http://www.slideshare.net/acme/scaling-with-memcached Scaling with memcached] (slides)
16 | * [http://www.bytebot.net/blog/archives/2008/04/14/memcached-and-mysql-tutorial Memcached and Mysql tutorial] (2008)
17 | * [http://download.tangent.org/talks/Memcached%20Study.pdf Memcached Study slides] by Brian Aker and dormando (2008)
18 | * [http://www.slideshare.net/tag/memcached memcached presentations at slideshare]
19 | * If you share a presentation about memcached at slideshare, be sure to tag it with 'memcached'
20 |
21 | == Blog posts ==
22 |
23 | * [http://work.tinou.com/2011/04/memcached-for-dummies.html Memcached for Dummies] (2011)
24 | * [http://www.mysqlperformanceblog.com/2006/09/27/apc-or-memcached APC or Memcached Article - MySQL Performance Blog] (2006)
25 | * [http://blog.leetsoft.com/2007/5/22/the-secret-to-memcached The secret to using memcached] (2007)
26 | * [http://dormando.livejournal.com/tag/memcached Dormando's memcached tag]
27 | * [http://abhinavsingh.com/blog/2009/01/memcached-and-n-things-you-can-do-with-it/ N things you can do with memcached] (2009)
28 | * [http://www.majordojo.com/2007/03/memcached-howto.php How to Dramatically Speed Up Your Web Application: An Introduction to memcached] (2007)
29 | * [http://blogs.sun.com/shanti/entry/memcached_on_nehalem1 Memcached 1.3.2 versus 1.2.6 performance test, 350K ops/second] (2009)
30 |
31 | == Books about or referencing memcached ==
32 |
33 | * [http://pragprog.com/titles/memcd/using-memcached Using Memcached] ebook
34 | * [http://oreilly.com/catalog/9780596101718/index.html High Performance MySQL 2nd ed] (discusses caching in many forms)
35 |
36 | == Docs and Links ==
37 |
38 | * [http://dev.mysql.com/doc/refman/6.0/en/ha-memcached.html Using MySQL with memcached] reference manual chapter
39 | * [http://en.wikipedia.org/wiki/Memcached wikipedia entry on memcached]
40 | * [MemcachedOffspring Spawn of memcached]
41 |
--------------------------------------------------------------------------------
/ReleaseNotes133.wiki:
--------------------------------------------------------------------------------
1 | #summary Release Notes for Release 1.3.3
2 |
3 | = Memcached 1.3 Beta 3 Release Notes =
4 |
5 | Date: 2009-04-03 Fri
6 |
7 | == Download ==
8 |
9 | Download link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.3.3.tar.gz
12 |
13 | == Features ==
14 |
15 | === Can set listen backlog on the commandline. ===
16 |
17 | Prevents "connection refused" errors during connection storms, at the
18 | cost of kernel memory.
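A hedged example of raising the backlog at startup (the value is illustrative; size it to your expected connection bursts and kernel limits):
{{{
# Allow up to 2048 pending connections in the kernel's accept queue.
memcached -b 2048 -m 64
}}}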
19 |
20 | === stats settings ===
21 |
22 | Show all current server settings (useful for troubleshooting as well
23 | as internal verification).
24 |
25 | == Bug fixes ==
26 |
27 | * Alignment bug in binary stats (bug26)
28 | * Occasional buffer overflow in stats (bug27)
29 | * Try to recycle memory more aggressively. (bug14)
30 | * incr validation (bug31)
31 | * 64-bit incr/decr delta fixes (bug21)
32 | * ascii UDP set (bug36)
33 | * stats slabs' used chunks (bug29)
34 | * stats reset should reset item stats, eviction counters, etc... (bug22)
35 | * Fix all stat buffer management
36 |
37 | == Misc ==
38 | * More tests
39 | * More/better documentation
40 | * Code cleanup
41 |
42 | == Stable fixes from Dormando ==
43 |
44 | === New Stats ===
45 |
46 | ==== accepting_conns ====
47 |
48 | 1 or 0 to indicate whether the server is currently accepting
49 | connections or not.
50 |
51 | The server will stop accepting connections when it has as many as it's
52 | configured to take.
53 |
54 | ==== listen_disabled_num ====
55 |
56 | The number of times socket listeners were disabled due to hitting the
57 | connection limit.
58 |
59 | ==== cmd_flush ====
60 |
61 | The number of times the flush command was issued.
62 |
63 | === missing key debugging ===
64 |
65 | With verbosity enabled, you can see *why* objects were not found. In
66 | many cases, an item exists under a given key, but is considered
67 | invalid due to lazy expiration or flush.
68 |
69 | === tail repair ===
70 |
71 | There is a rare, unidentified reference leak that causes a slab to be
72 | full of invalid objects that cannot be evicted via the LRU nor will
73 | they expire on their own.
74 |
75 | Tail repair is a strategy by which we forcefully evict objects that
76 | are marked as ``in-use'' (that is, in-flight or otherwise being used),
77 | but haven't been accessed in a long time (currently three hours).
78 |
79 | There is an additional stat that comes along with this (tailrepairs on
80 | a slab) that will allow you to detect that this condition has occurred
81 | on one of your slabs.
82 |
83 | === socket listen bugs ===
84 |
85 | There were some issues listening to sockets on machines with different
86 | network interface configurations (i.e. no network, only ipv4, only
87 | ipv6, etc...).
88 |
89 | == Contributors ==
90 |
91 | The following people contributed to this release since 1.3.2. Please
92 | refer to the 1.3.2 release notes for more info:
93 |
94 | ReleaseNotes132
95 |
96 | {{{
97 | 28 Dustin Sallings
98 | 8 Trond Norbye
99 | 6 dormando
100 | 5 Brad Fitzpatrick
101 | 4 Steve Yen
102 | 1 Eric Lambert
103 | 1 Clinton Webb
104 | 1 Chris Goffinet
105 | }}}
--------------------------------------------------------------------------------
/ReleaseNotes1417.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.17
2 |
3 | = Memcached 1.4.17 Release Notes =
4 |
5 | Date: 2013-12-20
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://www.memcached.org/files/memcached-1.4.17.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Another bugfix release along with some minor new features. Most notable is a
17 | potential fix for a crash bug that has plagued the last few versions. If you
18 | see crashes with memcached, *please* try this version and let us know if you
19 | still see crashes.
20 |
21 | The other notable bug is a SASL authentication bypass glitch. If a client
22 | makes a request with invalid SASL credentials, it will initially fail.
23 | However, if it issues a second request with the same bad SASL credentials,
24 | it will authenticate. This has now been fixed.
25 |
26 | If you see crashes please try the following:
27 |
28 | - Build memcached 1.4.17 from the tarball.
29 | - Run the "memcached-debug" binary that is generated at make time under a gdb
30 | instance
31 | - Don't forget to ignore SIGPIPE in gdb: "handle SIGPIPE nostop"
32 | - Grab a backtrace "thread apply all bt" if it crashes and post it to the
33 | mailing list or otherwise hunt me down.
34 | - Grab "stats", "stats settings", "stats slabs", "stats items" from an
35 | instance that has been running for a while but hasn't crashed yet.
36 |
37 | ... and send as much as you can to the mailing list (a gdb session sketch
38 | follows below). If the data is sensitive to you, please contact dormando privately.
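A hedged sketch of that session; the startup flags are placeholders for whatever your instance normally runs with:
{{{
gdb --args ./memcached-debug -m 64 -p 11211 -u nobody
(gdb) handle SIGPIPE nostop
(gdb) run
# ...reproduce the crash, then:
(gdb) thread apply all bt
}}}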
39 |
40 | == Fixes ==
41 |
42 | * Fix potential segfault in incr/decr routine.
43 | * Fix potential unbounded key prints (leading to crashes in logging code)
44 | * Fix bug which allowed invalid SASL credentials to authenticate.
45 | * Fix udp mode when listening on ipv6 addresses.
46 | * Fix for incorrect length of initial value set via binary increment protocol.
47 |
48 | == New Features ==
49 |
50 | * Add linux accept4() support. Removes one syscall for each new tcp
51 | connection.
52 | * scripts/memcached-tool gets "settings" and "sizes" commands.
53 | * Add parameter (-F) to disable flush_all. Useful if you never want to be
54 | able to run a full cache flush on production instances.
55 |
56 | == Contributors ==
57 |
58 | The following people contributed to this release since 1.4.16.
59 |
60 | Note that this is based on who contributed changes, not how they were
61 | done. In many cases, a code snippet on the mailing list or a bug
62 | report ended up as a commit with your name on it.
63 |
64 | Note that this is just a summary of how many changes each person made
65 | which doesn't necessarily reflect how significant each change was.
66 | For details on what led up into a branch, either grab the git repo and
67 | look at the output of `git log 1.4.16..1.4.17` or use a web view.
68 |
69 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
70 | * Web View: http://github.com/memcached/memcached/commits/1.4.17
71 |
72 | {{{
73 | 6 dormando
74 | 1 Adam Szkoda
75 | 1 Alex Leone
76 | 1 Andrey Niakhaichyk
77 | 1 Daniel Pañeda
78 | 1 Jeremy Sowden
79 | 1 Simon Liu
80 | 1 Tomas Kalibera
81 | 1 theblop
82 | 1 伊藤洋也
83 |
84 | }}}
85 |
86 | == Control ==
87 |
--------------------------------------------------------------------------------
/ReleaseNotes1414.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.14
2 |
3 | = Memcached 1.4.14 Release Notes =
4 |
5 | Date: 2012-7-30
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.14.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 |
17 | == Fixes ==
18 |
19 | * fix compile issue with new GCC's
20 | * Added support for automake-1.12 in autogen.sh
21 | * Use Markdown for README.
22 | * Fixed issue with invalid binary protocol touch command expiration time (http://code.google.com/p/memcached/issues/detail?id=275)
23 | * Define touch command probe for DTrace support
24 | * Error and exit if we don't have hugetlb support (changes -L behavior)
25 | * update reassign/automove documentation
26 | * Remove USE_SYSTEM_MALLOC define
27 | * slab rebalancing from random class
28 | * split slab rebalance and automove threads
29 | * pre-split slab pages into slab freelists
30 | * Avoid race condition in test during pid creation by blind retrying
31 |
32 |
33 | == New Features ==
34 |
35 | This release mainly features a number of small bugfixes, but also a change to
36 | slab rebalance behavior.
37 |
38 | Previously, if you moved a slab page from one slab class to another, you had to wait
39 | until that new page was fully used before moving another one. That wait has
40 | been removed, and you can move pages as fast as the system can ... move them.
41 |
42 | A few new features as well:
43 |
44 | === slabs reassign ===
45 |
46 | `slabs reassign -1 15` will pick a page from any slab class and move it to
47 | class 15.
48 |
49 | === slabs automove ===
50 |
51 | `slabs automove 2` now enables an ultra aggressive page reassignment
52 | algorithm. On every eviction, it will try to move a slab page into that
53 | class. You should *never* run this in production unless you have a very, very
54 | good idea of what's going to happen. For most people who have spurious
55 | evictions everywhere, you'll end up mass evicting random data and hurting your
56 | hit rate. It can be useful to momentarily enable for emergency situations, or
57 | if you have a data access pattern where evictions should never happen.
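Both are ordinary ASCII-protocol commands sent to a running instance. A hedged sketch (host and port are placeholders):
{{{
# One-shot: move a page from any slab class into class 15.
printf 'slabs reassign -1 15\r\nquit\r\n' | nc 127.0.0.1 11211

# Enable the aggressive automover; 'slabs automove 0' should turn it back off.
printf 'slabs automove 2\r\nquit\r\n' | nc 127.0.0.1 11211
}}}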
58 |
59 | This was work we were planning on doing already, but twitter's rewrite has
60 | people presently interested in trying it out. You've been warned.
61 |
62 | == Contributors ==
63 |
64 | The following people contributed to this release since 1.4.13.
65 |
66 | Note that this is based on who contributed changes, not how they were
67 | done. In many cases, a code snippet on the mailing list or a bug
68 | report ended up as a commit with your name on it.
69 |
70 | Note that this is just a summary of how many changes each person made
71 | which doesn't necessarily reflect how significant each change was.
72 | For details on what led up into a branch, either grab the git repo and
73 | look at the output of `git log 1.4.13..1.4.14` or use a web view.
74 |
75 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
76 | * Web View: http://github.com/memcached/memcached/commits/1.4.14
77 |
78 | {{{
79 | 18 dormando
80 | 1 Clint Byrum
81 | 1 Eric McConville
82 | 1 Fordy
83 | 1 Maksim Zhylinski
84 | 1 Toru Maesaka
85 | 1 yuryur
86 |
87 | }}}
88 |
89 | == Control ==
90 |
--------------------------------------------------------------------------------
/ReleaseNotes1410.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.10
2 |
3 | = Memcached 1.4.10 Release Notes =
4 |
5 | Date: 2011-11-9
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.10.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This release is focused on thread scalability and performance
17 | improvements. This release should be able to feed data back faster than any
18 | network card can support as of this writing.
19 |
20 | == Fixes ==
21 |
22 | * Disable issue 140's test.
23 | * Push cache_lock deeper into item_alloc
24 | * Use item partitioned lock for as much as possible
25 | * Remove the depth search from item_alloc
26 | * Move hash calls outside of cache_lock
27 | * Use spinlocks for main cache lock
28 | * Remove uncommon branch from asciiprot hot path
29 | * Allow all tests to run as root
30 |
31 |
32 | == New Features ==
33 |
34 | === Performance ===
35 |
36 | For more details, read the commit messages from git. Each change was carefully
37 | researched to not increase memory requirements and to be safe from deadlocks.
38 | Each change was individually tested via mc-crusher
39 | (http://github.com/dormando/mc-crusher) to ensure benefits.
40 |
41 | Tested improvements in speed between 3 and 6 worker threads (-t 3
42 | to -t 6). More than -t 6 reduced speed.
43 |
44 | In my tests, the set rate rose from 300k/s to
45 | around 930k/s. Key fetches/sec (multigets) rose from 1.6 million/s to around
46 | 3.7 million/s on a quad-core box. A machine with more cores was able to
47 | pull 6 million keys per second. Incr/decr performance increased similarly
48 | to set performance. Non-bulk tests were limited by the packet rate of
49 | localhost or the network card.
50 |
51 | Running across multiple NUMA nodes reduces performance (but not enough to
52 | really matter). If you want the absolute highest speed, as of this release
53 | you can run one instance per NUMA node (where n is your core count):
54 |
55 | {{{numactl --cpunodebind=0 memcached -m 4000 -t n }}}
56 |
57 | Older versions of memcached are plenty fast for just about all users. This
58 | changeset is to allow more flexibility in future feature additions, as well as
59 | improve memcached's overall latency on busy systems.
60 |
61 | Keep an eye on your hitrate and performance numbers. Please let us know
62 | immediately if you experience any regression from these changes. We have tried
63 | to be as thorough as possible in testing, but you never know.
64 |
65 | == Contributors ==
66 |
67 | The following people contributed to this release since 1.4.9.
68 |
69 | Note that this is based on who contributed changes, not how they were
70 | done. In many cases, a code snippet on the mailing list or a bug
71 | report ended up as a commit with your name on it.
72 |
73 | Note that this is just a summary of how many changes each person made
74 | which doesn't necessarily reflect how significant each change was.
75 | For details on what led up into a branch, either grab the git repo and
76 | look at the output of `git log 1.4.9..1.4.10` or use a web view.
77 |
78 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
79 | * Web View: http://github.com/memcached/memcached/commits/1.4.10
80 |
81 | {{{
82 | 10 dormando
83 | }}}
84 |
85 | == Control ==
86 |
--------------------------------------------------------------------------------
/ReleaseNotes143rc1.wiki:
--------------------------------------------------------------------------------
1 | #summary Release Notes for Release 1.4.3-rc1
2 |
3 | = Memcached 1.4.3-rc1 Release Notes =
4 |
5 | Date: 2009-11-01 Sun
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.3_rc1.tar.gz
12 |
13 | == Overview ==
14 |
15 | This is a maintenance release of memcached featuring mostly bug fixes
16 | and one new feature.
17 |
18 | == Fixes ==
19 |
20 | === Critical Fixes ===
21 |
22 | * Malicious input can crash server. bug102
23 |
24 | === Non-critical Fixes ===
25 |
26 | * Removed special case in slab sizing for factor 2. bug56
27 | * Provide better errors for deletion scenarios. bug3
28 | * Fix get stats accounting. bug104
29 | * Ignore stats prefix for keys without a delimiter. bug96
30 | * Work around rpm's broken concept of versions more. bug98
31 | * Use slab class growth factor limit. bug57
32 | * Added LSB section to init script. bug54
33 | * Documentation fixes
34 | * Various build fixes
35 |
36 | === Itemized List of Bugs Closed ===
37 |
38 | If a bug shows up in this list that wasn't specifically mentioned
39 | above, it's either too minor to mention specifically or the bug was
40 | closed by introducing a test that proves that the bug, as described,
41 | does not exist.
42 |
43 | * bug3
44 | * bug54
45 | * bug56
46 | * bug57
47 | * bug62
48 | * bug67
49 | * bug68
50 | * bug69
51 | * bug96
52 | * bug97
53 | * bug98
54 | * bug101
55 | * bug102
56 | * bug104
57 |
58 | == New Features ==
59 |
60 | === Support for SASL Authentication ===
61 |
62 | Some installations of memcached are not in controlled environments
63 | where simple network filtering keeps bad guys out of your stuff. To
64 | help with those other environments, we've introduced SASL support.
65 | You can read more about it here:
66 |
67 | http://code.google.com/p/memcached/wiki/SASLHowto
68 |
69 | === New perl tool `damemtop` in scripts/ ===
70 |
71 | dormando's awesome memcached top - a new commandline perl tool for
72 | monitoring small to large memcached clusters. Supports monitoring
73 | arbitrary statistics. See scripts/README.damemtop for more information.
74 |
75 | This tool is intended to replace memcached-tool, but not yet.
76 |
77 | === Also Noteworthy, Slab Optimizations ===
78 |
79 | Objects on the larger end of the limit should be generally more memory
80 | efficient now as more slabs are created (thus are more granular).
81 |
82 | == Contributors ==
83 |
84 | The following people contributed to this release since 1.4.2.
85 |
86 | Note that this is based on who contributed changes, not how they were
87 | done. In many cases, a code snippet on the mailing list or a bug
88 | report ended up as a commit with your name on it.
89 |
90 | Note that this is just a summary of how many changes each person made
91 | which doesn't necessarily reflect how significant each change was.
92 | For details on what led up into a branch, either grab the git repo and
93 | look at the output of `git log 1.4.2..1.4.3-rc1` or use a web view.
94 |
95 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
96 | * Web View: http://github.com/memcached/memcached/commits/1.4.3-rc1
97 |
98 | {{{
99 | 15 Dustin Sallings
100 | 7 Trond Norbye
101 | 5 dormando
102 | 2 Colin Pitrat
103 | 1 Chang Song
104 | 1 CaptTofu
105 | 1 Monty Taylor
106 | }}}
--------------------------------------------------------------------------------
/ReleaseNotes1416.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.16
2 |
3 | = Memcached 1.4.16 Release Notes =
4 |
5 | Date: 2013-12-9
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://www.memcached.org/files/memcached-1.4.16.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | A quick bugfix release to get the tree moving again after a long absence. I
17 | don't want to make too many changes at once, so here are a number of platform
18 | and crash fixes, as well as some introspection.
19 |
20 | If you run 1.4.16 and experience any sort of memory leak or
21 | segfault/crash/hang, *please* contact us. Please do the following:
22 |
23 | - Build memcached 1.4.16 from the tarball.
24 | - Run the "memcached-debug" binary that is generated at make time under a gdb
25 | instance
26 | - Don't forget to ignore SIGPIPE in gdb: "handle SIGPIPE nostop"
27 | - Grab a backtrace "thread apply all bt" if it crashes and post it to the
28 | mailing list or otherwise hunt me down.
29 | - Grab "stats", "stats settings", "stats slabs", "stats items" from an
30 | instance that has been running for a while but hasn't crashed yet.
31 |
32 | These crashes have been around too long and I would love to get rid of them
33 | soon.
34 |
35 | Thanks!
36 |
37 | == Fixes ==
38 |
39 | * Builds on OS X Mavericks (with clang)
40 | * Add statistics for allocation failures
41 | * Issue 294: Check for allocation failure
42 | * Make tail leak expiry time configurable (-o tail_repair_time=60)
43 | * Fix segfault on specially crafted packet.
44 | * Close connection on update_event error while parsing new commands
45 | * Don't truncate maxbytes stat from 'stats settings'
46 | * Add the "shutdown" command to the server. This allows for better automation
47 | * fix enable-sasl-pwdb
48 |
49 |
50 | == New Features ==
51 |
52 | Adjusting tail repair time:
53 | -o tail_repair_time=60 (in seconds)
54 |
55 | "tail repairs" are a failsafe within memcached where if a cache item is leaked
56 | via an unfixed or obscure bug, the item will be recycled anyway if it ends up
57 | at the bottom of the LRU and hasn't been touched in a long period of time.
58 | Most releases do not have these bugs, but some have, so we've left the
59 | mechanism in place. The default time before reaping used to be 3 hours,
60 | which is painful for a busy site, so we've lowered the default to one hour;
61 | that is still much longer than any object should ever take to download.
62 |
63 | If you need dead items to be pulled more quickly, use this override. Make sure
64 | you don't set it too low if you have clients which download items very slowly
65 | (unlikely, but eh).
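A hedged example of overriding it at startup (the 10-minute value is purely illustrative):
{{{
# Reap leaked items after 600 seconds instead of the default.
memcached -m 1024 -o tail_repair_time=600
}}}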
66 |
67 | == Contributors ==
68 |
69 | The following people contributed to this release since 1.4.15.
70 |
71 | Note that this is based on who contributed changes, not how they were
72 | done. In many cases, a code snippet on the mailing list or a bug
73 | report ended up as a commit with your name on it.
74 |
75 | Note that this is just a summary of how many changes each person made
76 | which doesn't necessarily reflect how significant each change was.
77 | For details on what led up into a branch, either grab the git repo and
78 | look at the output of `git log 1.4.15..1.4.16` or use a web view.
79 |
80 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
81 | * Web View: http://github.com/memcached/memcached/commits/1.4.16
82 |
83 | {{{
84 | 5 Trond Norbye
85 | 4 dormando
86 | 2 Brian Aker
87 | 2 Eric McConville
88 | 1 Gabriel A. Samfira
89 | 1 Huzaifa Sidhpurwala
90 | 1 Kenneth Steele
91 | 1 Keyur
92 | 1 Wing Lian
93 | 1 liu bo
94 |
95 | }}}
96 |
97 | == Control ==
98 |
--------------------------------------------------------------------------------
/ReleaseNotes143.wiki:
--------------------------------------------------------------------------------
1 | #summary Release Notes for Release 1.4.3
2 |
3 | = Memcached 1.4.3 Release Notes =
4 |
5 | Date: 2009-11-07 Sat
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.3.tar.gz
12 |
13 | == Overview ==
14 |
15 | This is a maintenance release of memcached featuring mostly bug fixes
16 | and one new feature.
17 |
18 | === RC history ===
19 |
20 | rc2 fixes a multiget bug that showed up in rc1. A bug was not filed,
21 | but it was found and patched at roughly the same time.
22 |
23 | == Fixes ==
24 |
25 | === Critical Fixes ===
26 |
27 | * Malicious input can crash server. bug102
28 |
29 | === Non-critical Fixes ===
30 |
31 | * Removed special case in slab sizing for factor 2. bug56
32 | * Provide better errors for deletion scenarios. bug3
33 | * Fix get stats accounting. bug104
34 | * Ignore stats prefix for keys without a delimiter. bug96
35 | * Work around rpm's broken concept of versions more. bug98
36 | * Use slab class growth factor limit. bug57
37 | * Added LSB section to init script. bug54
38 | * Documentation fixes
39 | * Various build fixes
40 |
41 | === Itemized List of Bugs Closed ===
42 |
43 | If a bug shows up in this list that wasn't specifically mentioned
44 | above, it's either too minor to mention specifically or the bug was
45 | closed by introducing a test that proves that the bug, as described,
46 | does not exist.
47 |
48 | * bug3
49 | * bug54
50 | * bug56
51 | * bug57
52 | * bug62
53 | * bug67
54 | * bug68
55 | * bug69
56 | * bug96
57 | * bug97
58 | * bug98
59 | * bug101
60 | * bug102
61 | * bug104
62 |
63 | == New Features ==
64 |
65 | === Support for SASL Authentication ===
66 |
67 | Some installations of memcached are not in controlled environments
68 | where simple network filtering keeps bad guys out of your stuff. To
69 | help with those other environments, we've introduced SASL support.
70 | You can read more about it here:
71 |
72 | http://code.google.com/p/memcached/wiki/SASLHowto
73 |
74 | === New perl tool `damemtop` in scripts/ ===
75 |
76 | dormando's awesome memcached top - a new commandline perl tool for
77 | monitoring small to large memcached clusters. Supports monitoring
78 | arbitrary statistics. See scripts/README.damemtop for more information.
79 |
80 | This tool is intended to replace memcached-tool, but not yet.
81 |
82 | === Also Noteworthy, Slab Optimizations ===
83 |
84 | Objects on the larger end of the limit should be generally more memory
85 | efficient now as more slabs are created (thus are more granular).
86 |
87 | == Contributors ==
88 |
89 | The following people contributed to this release since 1.4.2.
90 |
91 | Note that this is based on who contributed changes, not how they were
92 | done. In many cases, a code snippet on the mailing list or a bug
93 | report ended up as a commit with your name on it.
94 |
95 | Note that this is just a summary of how many changes each person made
96 | which doesn't necessarily reflect how significant each change was.
97 | For details on what led up into a branch, either grab the git repo and
98 | look at the output of `git log 1.4.2..1.4.3` or use a web view.
99 |
100 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
101 | * Web View: http://github.com/memcached/memcached/commits/1.4.3
102 |
103 | {{{
104 | 15 Dustin Sallings
105 | 10 Trond Norbye
106 | 5 dormando
107 | 2 Colin Pitrat
108 | 1 Monty Taylor
109 | 1 Chang Song
110 | 1 CaptTofu
111 | 1 Tomash Brechko
112 | }}}
--------------------------------------------------------------------------------
/ReleaseNotes143rc2.wiki:
--------------------------------------------------------------------------------
1 | #summary Release Notes for Release 1.4.3-rc2
2 |
3 | = Memcached 1.4.3-rc2 Release Notes =
4 |
5 | Date: 2009-11-02 Mon
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.3_rc2.tar.gz
12 |
13 | == Overview ==
14 |
15 | This is a maintenance release of memcached featuring mostly bug fixes
16 | and one new feature.
17 |
18 | === RC history ===
19 |
20 | rc2 fixes a multiget bug that showed up in rc1. A bug was not filed,
21 | but it was found and patched at roughly the same time.
22 |
23 | == Fixes ==
24 |
25 | === Critical Fixes ===
26 |
27 | * Malicious input can crash server. bug102
28 |
29 | === Non-critical Fixes ===
30 |
31 | * Removed special case in slab sizing for factor 2. bug56
32 | * Provide better errors for deletion scenarios. bug3
33 | * Fix get stats accounting. bug104
34 | * Ignore stats prefix for keys without a delimiter. bug96
35 | * Work around rpm's broken concept of versions more. bug98
36 | * Use slab class growth factor limit. bug57
37 | * Added LSB section to init script. bug54
38 | * Documentation fixes
39 | * Various build fixes
40 |
41 | === Itemized List of Bugs Closed ===
42 |
43 | If a bug shows up in this list that wasn't specifically mentioned
44 | above, it's either too minor to mention specifically or the bug was
45 | closed by introducing a test that proves that the bug, as described,
46 | does not exist.
47 |
48 | * bug3
49 | * bug54
50 | * bug56
51 | * bug57
52 | * bug62
53 | * bug67
54 | * bug68
55 | * bug69
56 | * bug96
57 | * bug97
58 | * bug98
59 | * bug101
60 | * bug102
61 | * bug104
62 |
63 | == New Features ==
64 |
65 | === Support for SASL Authentication ===
66 |
67 | Some installations of memcached are not in controlled environments
68 | where simple network filtering keeps bad guys out of your stuff. To
69 | help with those other environments, we've introduced SASL support.
70 | You can read more about it here:
71 |
72 | http://code.google.com/p/memcached/wiki/SASLHowto
73 |
74 | === New perl tool `damemtop` in scripts/ ===
75 |
76 | dormando's awesome memcached top - a new commandline perl tool for
77 | monitoring small to large memcached clusters. Supports monitoring
78 | arbitrary statistics. See scripts/README.damemtop for more information.
79 |
80 | This tool is intended to eventually replace memcached-tool, but it does not do so yet.
81 |
82 | === Also Noteworthy, Slab Optimizations ===
83 |
84 | Objects at the larger end of the size limit should now generally be more memory
85 | efficient, as more slab classes are created (and sizing is thus more granular).
86 |
87 | == Contributors ==
88 |
89 | The following people contributed to this release since 1.4.2.
90 |
91 | Note that this is based on who contributed changes, not how they were
92 | done. In many cases, a code snippet on the mailing list or a bug
93 | report ended up as a commit with your name on it.
94 |
95 | Note that this is just a summary of how many changes each person made
96 | which doesn't necessarily reflect how significant each change was.
97 | For details on what led up to a branch, either grab the git repo and
98 | look at the output of `git log 1.4.2..1.4.3-rc2` or use a web view.
99 |
100 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
101 | * Web View: http://github.com/memcached/memcached/commits/1.4.3-rc2
102 |
103 | {{{
104 | 15 Dustin Sallings
105 | 8 Trond Norbye
106 | 5 dormando
107 | 2 Colin Pitrat
108 | 1 Monty Taylor
109 | 1 Chang Song
110 | 1 CaptTofu
111 | 1 Tomash Brechko
112 | }}}
--------------------------------------------------------------------------------
/ReleaseNotes1418.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.18
2 |
3 | = Memcached 1.4.18 Release Notes =
4 |
5 | Date: 2014-4-17
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://www.memcached.org/files/memcached-1.4.18.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This release contains a handful of fixes along with three new features: a "stats conns" command, start-time selection of the hash algorithm, and an experimental LRU crawler background thread.
17 | == Fixes ==
18 |
19 | * fix LRU contention for the first minute of uptime
20 |   * (this contention made some synthetic benchmarks look awful)
21 | * Make hash table algorithm selectable
22 | * Don't lose item_size_max units in command line
23 | * Add a "stats conns" command to show the states of open connections.
24 | * Allow caller-specific error text in binary protocol
25 | * Stop returning ASCII error messages to binary clients
26 | * Fix reference leak in binary protocol "get" and "touch" handlers
27 | * Fix reference leak in process_get_command()
28 |
29 |
30 | == New Features ==
31 |
32 | === Stats conns ===
33 |
34 | New "stats conns" command, which will show you what currently open connections are up to, how idle they've been, etc.
35 |
36 | === Starttime Hash Algorithm Selection ===
37 |
38 | The jenkins hash was getting a little long in the tooth, and we might want to
39 | add specific hash algorithms for different platforms in the future. This change
40 | makes the hash algorithm selectable at start time. We've initially added the
41 | murmur3 hash to the lineup, and it seems to run a tiny bit faster in some tests.
42 |
43 | `-o hash_algorithm=murmur3`
44 |
45 | === LRU Crawler ===
46 |
47 | A new background thread emerges! Currently experimental, so the syntax might
48 | change. If you run into bugs please let us know (though it's been testing fine
49 | in torture tests so far).
50 |
51 | If you wish to clean your slab classes of items which have been expired,
52 | either one-time or periodically, this will do it with low impact as a
53 | background operation.
54 |
55 | Currently it requires kicking off a crawl via manual command:
56 |
57 | First, enable the thread:
58 | `lru_crawler enable`
59 | or use `-o lru_crawler` as a starttime option.
60 |
61 | `lru_crawler crawl 1,3,5`
62 |
63 | ... would crawl slab classes 1,3,5 looking for expired items to add to the
64 | freelist.
65 |
66 | This is generally not useful or required, unless you have memory with very
67 | mixed TTLs, you do not fetch items frequently enough (or otherwise cause them
68 | to expire), and you don't want items with longer TTLs to block the reclaiming
69 | of expired items, or to be evicted early.
70 |
71 | Future uses of the thread should allow examining and purging items via a
72 | plugin interface: e.g. crawl all items matching some string and remove them, or
73 | count them. For now the code is simple to modify and experiment with.
74 |
75 | See doc/protocol.txt for full explanation of related commands and counters.
76 |
77 | == Contributors ==
78 |
79 | The following people contributed to this release since 1.4.17.
80 |
81 | Note that this is based on who contributed changes, not how they were
82 | done. In many cases, a code snippet on the mailing list or a bug
83 | report ended up as a commit with your name on it.
84 |
85 | Note that this is just a summary of how many changes each person made
86 | which doesn't necessarily reflect how significant each change was.
87 | For details on what led up to a branch, either grab the git repo and
88 | look at the output of `git log 1.4.17..1.4.18` or use a web view.
89 |
90 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
91 | * Web View: http://github.com/memcached/memcached/commits/1.4.18
92 |
93 | {{{
94 | 21 dormando
95 | 8 Steven Grimm
96 | 1 Andrew Glinskiy
97 |
98 | }}}
99 |
100 | == Control ==
101 |
--------------------------------------------------------------------------------
/SASLAuthProtocol.wiki:
--------------------------------------------------------------------------------
1 | #summary SASL Authentication for Memcached
2 |
3 | = Overview =
4 |
5 | Most deployments of memcached today exist within trusted networks
6 | where clients may freely connect to any server and the servers don't
7 | discriminate against them.
8 |
9 | There are cases, however, where memcached is deployed in untrusted
10 | networks or where administrators would like to exercise a bit more
11 | control over the clients that are connecting.
12 |
13 | This page mostly exists to describe the protocol. If you just want to use it, check out the [SASLHowto SASL howto].
14 |
15 | = Authentication Concepts =
16 |
17 | Authentication is abstracted from the server using the Simple
18 | Authentication and Security Layer.
19 |
20 | Among other things, this provides administrators with consistent
21 | credential management that is mostly independent from the services
22 | that are authenticating clients.
23 |
24 | = Protocol Definitions =
25 |
26 | == Error Codes and Conditions ==
27 |
28 | There are two status codes provided by the SASL protocol to enable
29 | authentication:
30 |
31 | === Unauthorized ===
32 |
33 | If a message is returned with a status code of `0x20`, this is
34 | considered an authentication or authorization failure.
35 |
36 | This may be in response to an explicit authentication command
37 | indicating the credentials were not accepted or the authorization was
38 | otherwise not granted to access the server.
39 |
40 | === Continue Authentication ===
41 |
42 | Some SASL mechanisms require multiple messages to be sent between the
43 | client and server. If a server responds to an authentication message
44 | with a status code of `0x21`, this will indicate your client needs to do
45 | more work to complete the authentication negotiation.
46 |
47 | === Authentication Not Supported ===
48 |
49 | If a server responds to an authentication request indicating the
50 | command is unknown (status `0x81`), it likely doesn't support
51 | authentication. It is generally acceptable for the client to consider
52 | authentication successful when communicating to a server that doesn't
53 | support authentication.
54 |
55 | == Authentication Requests ==
56 |
57 | === List Mechanisms ===
58 |
59 | In order to negotiate authentication, a client may need to ask the
60 | server what authentication mechanisms it supports.
61 |
62 | A command `0x20` with no extras, key, or value will request a mechanism
63 | list from the server. The mechanisms are returned as a
64 | space-separated value.
65 |
66 | === Authentication Request ===
67 |
68 | To begin an authentication request, send a request with command `0x21`,
69 | the requested mechanism as the key, and the initial authentication
70 | data as the value if any is required for the chosen mechanism.
71 |
72 | === Authentication Continuation ===
73 |
74 | If the authentication request responded with a continuation request
75 | (status `0x21`), the body will contain the data needed for computing the
76 | next value in the authentication negotiation.
77 |
78 | The next step's data will be transmitted similarly to the initial
79 | step, but using command `0x22`. Note that this includes the mechanism
80 | within the key as in the initial request.
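81 | 
82 | To make the packet layout concrete, here is a rough, non-authoritative sketch (plain Python; the server address, username, and password are made-up placeholders) that lists mechanisms and then sends a PLAIN authentication request. PLAIN finishes in a single step, so no `0x22` continuation is shown:
83 | 
84 | {{{
85 | import socket, struct
86 | 
87 | def binary_request(opcode, key=b"", value=b""):
88 |     # 24-byte binary protocol request header: magic, opcode, key length,
89 |     # extras length, data type, vbucket id, total body length, opaque, cas.
90 |     header = struct.pack(">BBHBBHIIQ", 0x80, opcode, len(key), 0, 0, 0,
91 |                          len(key) + len(value), 0, 0)
92 |     return header + key + value
93 | 
94 | s = socket.create_connection(("localhost", 11211))
95 | 
96 | s.sendall(binary_request(0x20))              # List Mechanisms
97 | print(s.recv(4096))                          # mechanism list is in the response body
98 | 
99 | token = b"\0testuser\0testpass"              # PLAIN: authzid NUL authcid NUL password
100 | s.sendall(binary_request(0x21, key=b"PLAIN", value=token))   # Start Authentication
101 | resp = s.recv(4096)
102 | status = struct.unpack(">H", resp[6:8])[0]   # status field of the response header
103 | print(hex(status))                           # 0x00 success, 0x20 denied, 0x21 continue
104 | s.close()
105 | }}}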
81 |
82 | = Error Reference =
83 |
84 | || *Status Code* || *Meaning* ||
85 | || 0x20 || Authentication required / Not Successful ||
86 | || 0x21 || Further authentication steps required. ||
87 |
88 |
89 | = Command Reference =
90 |
91 | || *Command* || *Op Code* || *Key* || *Value* ||
92 | || List Mechanisms || 0x20 || None || None ||
93 | || Start Authentication || 0x21 || Mechanism || Auth Data ||
94 | || Authentication Step || 0x22 || Mechanism || Auth Data ||
95 |
96 | = See Also =
97 |
98 | [SASLHowto memcached SASL howto]
99 |
100 | http://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer
--------------------------------------------------------------------------------
/PHPClientComparison.wiki:
--------------------------------------------------------------------------------
1 | #summary Comparison of pecl/memcache and pecl/memcached
2 |
3 | = PHP Client Comparison =
4 |
5 | There are primarily two clients used with PHP. One is the older, more widespread [http://pecl.php.net/package/memcache pecl/memcache] and the other is the newer, less used, more feature rich [http://pecl.php.net/package/memcached pecl/memcached].
6 |
7 | Both support the basics such as multiple servers, setting values, getting values, increment, decrement and getting stats.
8 |
9 | Here are some more advanced features and information.
10 |
11 |
12 |
12 | || || [http://pecl.php.net/package/memcache pecl/memcache] || [http://pecl.php.net/package/memcached pecl/memcached] ||
13 | || First Release Date || 2004-06-08 || 2009-01-29 (beta) ||
14 | || Actively Developed? || Yes || Yes ||
15 | || External Dependency || None || libmemcached ||
16 | || *Features* || || ||
17 | || Automatic Key Fixup (1) || Yes || No ||
18 | || Append/Prepend || No || Yes ||
19 | || Automatic Serialization (2) || Yes || Yes ||
20 | || Binary Protocol || No || Optional ||
21 | || CAS || No || Yes ||
22 | || Compression || Yes || Yes ||
23 | || Communication Timeout || Connect Only || Various Options ||
24 | || Consistent Hashing || Yes || Yes ||
25 | || Delayed Get || No || Yes ||
26 | || Multi-Get || Yes || Yes ||
27 | || Session Support || Yes || Yes ||
28 | || Set/Get to a specific server || No || Yes ||
29 | || Stores Numerics || Converted to Strings || Yes ||
30 | 
31 | # pecl/memcache will convert an invalid key into a valid key for you. pecl/memcached will return false when trying to set/get a key that is not valid.
32 | # You do not have to serialize your objects or arrays before sending them to the set commands. Both clients will do this for you.
--------------------------------------------------------------------------------
/ReleaseNotes1423.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.23
2 |
3 | = Memcached 1.4.23 Release Notes =
4 |
5 | Date: 2015-4-19
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://www.memcached.org/files/memcached-1.4.23.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Major new release with a complete overhaul of the LRU system. Potentially huge benefits in memory efficiency are possible if the new features are enabled. By default the code should behave similarly to how it did in all previous versions, though locking is improved and parts of the new code are used regardless.
17 |
18 | Please read the feature notes carefully and try it out!
19 |
20 | Real world examples have shown huge memory efficiency increases when using items of mixed TTLs (some short, some long). When all items have unlimited TTLs, hit ratios have still improved by several percent.
21 |
22 | == Fixes ==
23 |
24 | * spinlocks removed since they never seem to improve performance.
25 | * flush_all was not thread safe.
26 | * better handle items refcounted in tail by unlinking them from the LRU's
27 |
28 | == New Features ==
29 |
30 | This release is a reworking of memcached's core LRU algorithm.
31 |
32 | * global cache_lock is gone, LRU's are now independently locked.
33 | * LRU's are now split between HOT, WARM, and COLD LRU's. New items enter the HOT LRU.
34 | * LRU updates only happen as items reach the bottom of an LRU. If active in HOT, stay in HOT, if active in WARM, stay in WARM. If active in COLD, move to WARM.
35 | * HOT/WARM each capped at 32% of memory available for that slab class. COLD is uncapped.
36 | * Items flow from HOT/WARM into COLD.
37 | * A background thread exists which shuffles items between/within the LRU's as capacities are reached.
38 |
39 | The primary goal is to better protect active items from "scanning". Items which are never hit again will flow from HOT, through COLD, and out the bottom. Items occasionally active (reaching COLD, but being hit before eviction) move to WARM. There they can stay relatively protected.
40 |
41 | A secondary goal is to improve latency. The LRU locks are no longer used on item reads, only during sets and from the background thread. Also the background thread is likely to find expired items and release them back to the slab class asynchronously, which speeds up new allocations. Further work on the thread should improve this.
42 |
43 | There are a number of new statistics to monitor this. Mainly you'll just want to judge your hit ratio before/after, as well as any potential latency issues.
44 |
45 | To enable: start with `-o lru_maintainer,lru_crawler`
46 |
47 | To adjust percentage of memory reserved for HOT or WARM LRU's (default to 32% each):
48 | `-o lru_maintainer,lru_crawler,hot_lru_pct=32,warm_lru_pct=32`
49 |
50 | A recommended start line:
51 | `-o lru_maintainer,lru_crawler,hash_algorithm=murmur3`
52 |
53 | An extra option: -o expirezero_does_not_evict (when used with lru_maintainer) will make items with an expiration time of 0 unevictable. Take caution as this will crowd out memory available for other evictable items.
54 |
55 | Some caveats exist:
56 |
57 | * Some possible tunables are currently hardcoded.
58 | * Max number of slab classes is now 62, instead of 200. The default slab factor gives 42 classes.
59 |
60 | This is loosely inspired by the 2Q algorithm. More specifically the OpenBSD variant of it: http://www.tedunangst.com/flak/post/2Q-buffer-cache-algorithm
61 |
62 | It's then extended to cope with the fact that memcached items do not behave the same way as a buffer pool. TTL's mean extra scanning/shuffling is done to improve memory efficiency for valid items.
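63 | 
64 | Since the main suggestion above is to judge your hit ratio before and after enabling the new options, here is a small, hypothetical helper (plain Python; host and port are placeholders) that computes it from the standard `get_hits` / `get_misses` counters in `stats` output:
65 | 
66 | {{{
67 | import socket
68 | 
69 | s = socket.create_connection(("localhost", 11211))
70 | s.sendall(b"stats\r\n")
71 | data = b""
72 | while not data.endswith(b"END\r\n"):
73 |     data += s.recv(4096)
74 | s.close()
75 | 
76 | # each line looks like: STAT get_hits 123456
77 | stats = dict(line.split()[1:3] for line in data.decode().splitlines()
78 |              if line.startswith("STAT "))
79 | hits, misses = int(stats["get_hits"]), int(stats["get_misses"])
80 | print("hit ratio: %.2f%%" % (100.0 * hits / max(1, hits + misses)))
81 | }}}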
63 |
64 | == Contributors ==
65 |
66 | The following people contributed to this release since 1.4.22.
67 |
68 | Note that this is based on who contributed changes, not how they were
69 | done. In many cases, a code snippet on the mailing list or a bug
70 | report ended up as a commit with your name on it.
71 |
72 | Note that this is just a summary of how many changes each person made
73 | which doesn't necessarily reflect how significant each change was.
74 | For details on what led up to a branch, either grab the git repo and
75 | look at the output of `git log 1.4.22..1.4.23` or use a web view.
76 |
77 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
78 | * Web View: http://github.com/memcached/memcached/commits/1.4.23
79 |
80 | {{{
81 | 31 dormando
82 |
83 | }}}
84 |
85 | == Control ==
86 |
--------------------------------------------------------------------------------
/ReleaseNotes142rc1.wiki:
--------------------------------------------------------------------------------
1 | #summary Release Notes for Release 1.4.2-rc1
2 |
3 | = Memcached 1.4.2-rc1 Release Notes =
4 |
5 | Date: 2009-10-07 Wed
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.2_rc1.tar.gz
12 |
13 | == Overview ==
14 |
15 | This is a maintenance release consisting primarily of bug fixes.
16 |
17 | == Fixes ==
18 |
19 | === Critical Fixes ===
20 |
21 | * Reject keys larger than 250 bytes in the binary protocol (bug94)
22 | * Bounds checking on stats cachedump (bug92)
23 | * Binary protocol set+cas wasn't returning a new cas ID (bug87)
24 |
25 | === Non-critical Fixes ===
26 |
27 | * Binary quitq didn't actually close the connection (bug84)
28 | * Build fix on CentOS 5 (bug88)
29 | * Slab boundary checking cleanup (bad logic in unreachable code)
30 | * Removed some internal redundancies.
31 | * Use the OS's provided htonll/ntohll if present (bug83)
32 | * Test fixes/cleanup/additions.
33 | * Get hit memory optimizations (bug89)
34 | * Disallow -t options that cause the server to not work (bug91)
35 | * memcached -vv now shows the final slab
36 | * Killed off incomplete slab rebalance feature.
37 | * Better warnings.
38 | * More consistent verbosity in binary and ascii (bug93)
39 |
40 | == New Features ==
41 |
42 | === Support for libhugetlbfs (in Linux) ===
43 |
44 | From http://libhugetlbfs.ozlabs.org/ -
45 |
46 | libhugetlbfs is a library which provides easy access to huge
47 | pages of memory. It is a wrapper for the hugetlbfs file
48 | system.
49 |
50 | If you are running memcached with a very large heap in Linux, this
51 | change will make it available to you. The hugetlbfs HOWTO provides
52 | detailed information on how to configure your Linux system and provides
53 | advice for applications (such as memcached) to make use of it.
54 |
55 | === Support for evictions, evict_time and OOM counts in memcached-tool ===
56 |
57 | memcached-tool is a commandline tool to display information about your
58 | server. It displays more now.
59 |
60 | === Configurable maximum item size ===
61 |
62 | Many people have asked for memcached to be able to store items larger
63 | than 1MB. While it's generally recommended that one _not_ do this, it
64 | is now supported on the commandline.
65 |
66 | A few enlightened folk have also asked for memcached to reduce the
67 | maximum item size. That is also an option.
68 |
69 | The new -I parameter allows you to specify the maximum item size at
70 | runtime. It supports a unit postfix to allow for natural expression
71 | of item size.
72 |
73 | Examples:
74 |
75 | {{{
76 | memcached -I 128k # Refuse items larger than 128k.
77 | memcached -I 10m # Allow objects up to 10MB
78 | }}}
79 |
80 | === New stat: 'evicted_nonzero' ===
81 |
82 | The evicted_nonzero stat is a counter of all of the evictions for
83 | items that had an expiration time greater than zero.
84 |
85 | This can be used to help distinguish "healthy" evictions from
86 | "unhealthy" ones. If all of your evictions are for objects with no
87 | expiration, then they're naturally falling off the LRU as opposed to
88 | being evicted before their maximum expiry that was set at item store
89 | time.
90 |
91 | === Protocol definitions for range protocol ===
92 |
93 | memcached ships with a binary protocol header that can be used when
94 | implementing your own protocol parsers and generators. The structure
95 | definitions and opcodes for the range specification are included in
96 | this header.
97 |
98 | Note that the server _does not_ support these operations.
99 |
100 | == Contributors ==
101 |
102 | The following people contributed to this release since 1.4.1.
103 |
104 | Note that this is based on who contributed changes, not how they were
105 | done. In many cases, a code snippet on the mailing list or a bug
106 | report ended up as a commit with your name on it.
107 |
108 | Note that this is just a summary of how many changes each person made
109 | which doesn't necessarily reflect how significant each change was.
110 | For details on what led up to a branch, either grab the git repo and
111 | look at the output of `git log 1.4.1..1.4.2-rc1` or use a web view.
112 |
113 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
114 | * Web View: http://github.com/memcached/memcached/commits/1.4.2-rc1
115 |
116 | {{{
117 | 12 Dustin Sallings
118 | 10 Trond Norbye
119 | 9 dormando
120 | 1 Vladimir
121 | 1 Ryan Tomayko
122 | 1 Mat Hostetter
123 | 1 Dmitry Isaykin
124 | }}}
--------------------------------------------------------------------------------
/ReleaseNotes142.wiki:
--------------------------------------------------------------------------------
1 | #summary Release Notes for Release 1.4.2
2 |
3 | = Memcached 1.4.2 Release Notes =
4 |
5 | Date: 2009-10-11 Sun
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.2.tar.gz
12 |
13 | == Overview ==
14 |
15 | This is a maintenance release consisting primarily of bug fixes.
16 |
17 | == Fixes ==
18 |
19 | === Critical Fixes ===
20 |
21 | * Reject keys larger than 250 bytes in the binary protocol (bug94)
22 | * Bounds checking on stats cachedump (bug92)
23 | * Binary protocol set+cas wasn't returning a new cas ID (bug87)
24 |
25 | === Non-critical Fixes ===
26 |
27 | * Binary quitq didn't actually close the connection (bug84)
28 | * Build fix on CentOS 5 (bug88)
29 | * Slab boundary checking cleanup (bad logic in unreachable code)
30 | * Removed some internal redundancies.
31 | * Use the OS's provided htonll/ntohll if present (bug83)
32 | * Test fixes/cleanup/additions.
33 | * Get hit memory optimizations (bug89)
34 | * Disallow -t options that cause the server to not work (bug91)
35 | * memcached -vv now shows the final slab
36 | * Killed off incomplete slab rebalance feature.
37 | * Better warnings.
38 | * More consistent verbosity in binary and ascii (bug93)
39 |
40 | == New Features ==
41 |
42 | === Support for libhugetlbfs (in Linux) ===
43 |
44 | From http://libhugetlbfs.ozlabs.org/ -
45 |
46 | libhugetlbfs is a library which provides easy access to huge
47 | pages of memory. It is a wrapper for the hugetlbfs file
48 | system.
49 |
50 | If you are running memcached with a very large heap in Linux, this
51 | change will make it available to you. The hugetlbfs HOWTO provides
52 | detailed information on how to configure your Linux system and provides
53 | advice for applications (such as memcached) to make use of it.
54 |
55 | === Support for evictions, evict_time and OOM counts in memcached-tool ===
56 |
57 | memcached-tool is a commandline tool to display information about your
58 | server. It displays more now.
59 |
60 | === Configurable maximum item size ===
61 |
62 | Many people have asked for memcached to be able to store items larger
63 | than 1MB. While it's generally recommended that one _not_ do this, it
64 | is now supported on the commandline.
65 |
66 | A few enlightened folk have also asked for memcached to reduce the
67 | maximum item size. That is also an option.
68 |
69 | The new -I parameter allows you to specify the maximum item size at
70 | runtime. It supports a unit postfix to allow for natural expression
71 | of item size.
72 |
73 | Examples:
74 |
75 | {{{
76 | memcached -I 128k # Refuse items larger than 128k.
77 | memcached -I 10m # Allow objects up to 10MB
78 | }}}
79 |
80 | === New stat: 'evicted_nonzero' ===
81 |
82 | The evicted_nonzero stat is a counter of all of the evictions for
83 | items that had an expiration time greater than zero.
84 |
85 | This can be used to help distinguish "healthy" evictions from
86 | "unhealthy" ones. If all of your evictions are for objects with no
87 | expiration, then they're naturally falling off the LRU as opposed to
88 | being evicted before their maximum expiry that was set at item store
89 | time.
90 |
91 | === Protocol definitions for range protocol ===
92 |
93 | memcached ships with a binary protocol header that can be used when
94 | implementing your own protocol parsers and generators. The structure
95 | definitions and opcodes for the range specification are included in
96 | this header.
97 |
98 | Note that the server _does not_ support these operations.
99 |
100 | == Contributors ==
101 |
102 | The following people contributed to this release since 1.4.1.
103 |
104 | Note that this is based on who contributed changes, not how they were
105 | done. In many cases, a code snippet on the mailing list or a bug
106 | report ended up as a commit with your name on it.
107 |
108 | Note that this is just a summary of how many changes each person made
109 | which doesn't necessarily reflect how significant each change was.
110 | For details on what led up to a branch, either grab the git repo and
111 | look at the output of `git log 1.4.1..1.4.2` or use a web view.
112 |
113 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
114 | * Web View: http://github.com/memcached/memcached/commits/1.4.2
115 |
116 | {{{
117 | 12 Dustin Sallings
118 | 10 Trond Norbye
119 | 9 dormando
120 | 1 Vladimir
121 | 1 Ryan Tomayko
122 | 1 Mat Hostetter
123 | 1 Jonathan Steinert
124 | 1 Dmitry Isaykin
125 | }}}
--------------------------------------------------------------------------------
/ProposalTagSupport.wiki:
--------------------------------------------------------------------------------
1 | #summary Proposal for internal tag support
2 |
3 | = Proposal =
4 |
5 | * Add_Tag (key, tag_name(s))
6 | * Invalidate_Tag (tag_name) - Not atomic
7 |
8 | == General Synopsis ==
9 | Invalidation of a tag would basically be a `global_generation = ++tags[tag]` kind of operation.
10 |
11 | Each cache item contains a space for pointers to tags with their individual generation numbers and a local generation number.
12 | Adding an existing tag to an item must not cause any modification to the item (i.e. check first).
13 |
14 | Each time an item is requested from a cache, the local generation number is compared against the global generation number. If it differs, each tag is checked to ensure the tag generation number equals the number stored for that tag.
15 | If they're all the same, the local generation number is set to the global generation number.
16 |
17 | If any of them differ, the record is treated as if it doesn't exist.
18 | The protocol should not be updated to support 'tags' under add/set/replace/cas.
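19 | 
20 | To make the generation-number scheme above concrete, here is a toy, non-authoritative model in plain Python (not memcached code; all names are invented for the sketch):
21 | 
22 | {{{
23 | class TaggedCache:
24 |     def __init__(self):
25 |         self.global_gen = 0
26 |         self.tag_gens = {}     # tag name -> current generation
27 |         self.items = {}        # key -> [value, local_gen, {tag: gen when added}]
28 | 
29 |     def set(self, key, value):
30 |         self.items[key] = [value, self.global_gen, {}]
31 | 
32 |     def add_tag(self, key, tag):
33 |         tags = self.items[key][2]
34 |         if tag not in tags:    # adding an existing tag must not modify the item
35 |             tags[tag] = self.tag_gens.setdefault(tag, 0)
36 | 
37 |     def invalidate_tag(self, tag):
38 |         self.tag_gens[tag] = self.tag_gens.get(tag, 0) + 1
39 |         self.global_gen += 1   # forces items to re-check their tags on the next get
40 | 
41 |     def get(self, key):
42 |         if key not in self.items:
43 |             return None
44 |         value, local_gen, tags = self.items[key]
45 |         if local_gen != self.global_gen:
46 |             if any(self.tag_gens.get(t, 0) != g for t, g in tags.items()):
47 |                 del self.items[key]                 # an invalidated tag: record doesn't exist
48 |                 return None
49 |             self.items[key][1] = self.global_gen    # all tags match: fast path next time
50 |         return value
51 | }}}
52 | 
53 | Invalidating a tag is then just a cheap counter bump; the per-item work is deferred until an item is next requested, which is the point of the proposal.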
19 |
20 | == Allocations / Optimizations ==
21 |
22 | * Tag hash table
23 | * Tag array per item (supporting multiple tags per item)
24 | * Thread-lock wise?
25 |
26 | * Global version counter.
27 | * Tag counters
28 | * A single global generation number is used to track invalidation events.
29 | * Tag hash table in general - Biglock?
30 |
31 | Structures in the tag hash should definitely be reusable in a free list, like most of the other structures. Having one or more per key could be a massive memory suck if you're storing small items. Otherwise the goal should still be to avoid malloc/free if at all possible.
32 | Presize the tag table?
33 | Free list the tag name/version linked list?
34 |
35 | Tag array per item?
36 | 8 bytes per item overhead - Counter & Pointer. Whether the pointer is a linked list or an array, the overhead is fixed for non-tag users.
37 | Realloc the item header?
38 | Tag support as experimental
39 | Could probably be a config (not ./configure) option though, and avoid that memory overhead.
40 |
41 | == Lock-free hashtable ==
42 | The hashtable itself can at worst have a lock per bucket. There's an implementation of a lock-free hashtable in Java that should be possible in C as well. Perhaps we could just leave this optimization to those who need it, since the overlap between people who really need MT and people who really want tags seems to be quite small so far.
43 | Each tag will have a reference count that is incremented every time the tag is successfully added to an item (i.e. it must not be incremented if an item lookup fails or the item already has the tag).
44 |
45 | When the item is deallocated, all tags should be decremented for that item.
46 | When the tags reference count hits zero, we'll pull it from the global map.
47 |
48 | Memory churn may be an issue. Someone who knows allocators better than I do can decide what to do here.
49 | Increments are not atomic across SMP/multi-core.
50 |
51 | * Not only do you have issues with multicore, you have worse issues with SMP because of the costs around synchronizing the variables across the CPU transport (aka the bus between the CPU's).
52 | * When incrementing/changing values you need to wrap the write in a mutex if you want to be sure of the change.
53 | * Short of pinning all of the increments to a single CPU, you're just going to have to deal with synchronizing this state.
54 | * You don't need a mutex if you have CAS. Java's AtomicInteger is implemented using a volatile integer (volatile mostly means that it can't be stored in a CPU cache, but also is used to establish a happens-before relationship with other threads on reads and writes).
55 |
56 | So, given a facility for cache line synchronization
57 | and a CAS, I imagine you'll end up with a lot of code that looks like
58 | this (from Java's AtomicInteger):
59 | {{{
60 | public final int addAndGet(int delta) {
61 | for (;;) {
62 | int current = get();
63 | int next = current + delta;
64 | if (compareAndSet(current, next))
65 | return next;
66 | }
67 | }
68 | }}}
69 |
70 | glib has something similar as well. It's not guaranteed to be lock-free, but it can be done on a few platforms anyway.
71 |
72 | * Use APR's atomic apr_atomic_inc32 or Windows' interlocked InterlockedIncrement. These use CPU-specific instructions to perform the increment atomically with respect to other threads. No mutexes or other synchronization is required.
73 |
74 | === Quick Notes By Dormando ===
75 |
76 | * libatomic_ops for atomic operations.
77 | * must be fully switchable at runtime... if you switch off tag support, zero extra memory is allocated/used.
--------------------------------------------------------------------------------
/NewCommands.wiki:
--------------------------------------------------------------------------------
1 | #summary Make Me a Sandwich
2 |
3 |
4 |
5 | Memcached handles a small number of basic commands.
6 |
7 | Full documentation can be found in the [NewProtocols Protocol Documentation].
8 |
9 | == Standard Protocol ==
10 |
11 | The "standard protocol stuff" of memcached involves running a command against an "item". An item consists of:
12 |
13 | * A key (arbitrary string up to 250 bytes in length. No space or newlines for ASCII mode)
14 | * A 32bit "flag" value
15 | * An expiration time, in seconds. Can be up to 30 days. Beyond 30 days, it is treated as a unix timestamp of an exact date.
16 | * A 64bit "CAS" value, which is kept unique.
17 | * Arbitrary data
18 |
19 | CAS is optional (it can be disabled entirely with `-C`), and there are more fields that internally make up an item, but these are what your client interacts with.
20 |
21 | === No Reply ===
22 |
23 | Most ASCII commands allow a "noreply" version. One should not normally use this with the ASCII protocol, as it is impossible to align errors with requests. The intent is to avoid having to wait for a return packet after executing a mutation command (such as a set or add).
24 |
25 | The binary protocol properly implements noreply (quiet) statements. If you have a client which supports or uses the binary protocol, odds are good you may take advantage of this.
26 |
27 | == Storage Commands ==
28 |
29 | === set ===
30 |
31 | Most common command. Store this data, possibly overwriting any existing data. New items are at the top of the LRU.
32 |
33 | === add ===
34 |
35 | Store this data, only if it does not already exist. New items are at the top of the LRU. If an item already exists and an add fails, it promotes the item to the front of the LRU anyway.
36 |
37 | === replace ===
38 |
39 | Store this data, but only if the data already exists. Almost never used, and exists for protocol completeness (set, add, replace, etc)
40 |
41 | === append ===
42 |
43 | Add this data after the last byte in an existing item. This does not allow you to extend past the item limit. Useful for managing lists.
44 |
45 | === prepend ===
46 |
47 | Same as append, but adding new data before existing data.
48 |
49 | === cas ===
50 |
51 | Check And Set (or Compare And Swap). An operation that stores data, but only if no one else has updated the data since you read it last. Useful for resolving race conditions on updating cache data.
52 |
53 | == Retrieval Commands ==
54 |
55 | === get ===
56 |
57 | Command for retrieving data. Takes one or more keys and returns all found items.
58 |
59 | === gets ===
60 |
61 | An alternative get command for using with CAS. Returns a CAS identifier (a unique 64bit number) with the item. Return this value with the `cas` command. If the item's CAS value has changed since you `gets`'ed it, it will not be stored.
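62 | 
63 | As a rough illustration of the gets/cas cycle, here is a hypothetical ASCII-protocol session in plain Python against a local server (the key and values are made up, and a real client library wraps all of this for you):
64 | 
65 | {{{
66 | import socket
67 | 
68 | s = socket.create_connection(("localhost", 11211))
69 | 
70 | def command(text):
71 |     s.sendall(text.encode())
72 |     return s.recv(4096).decode()
73 | 
74 | print(command("set counter 0 0 1\r\n5\r\n"))    # STORED
75 | 
76 | reply = command("gets counter\r\n")             # VALUE counter 0 1 <cas>\r\n5\r\nEND\r\n
77 | cas_id = reply.splitlines()[0].split()[4]       # the unique 64-bit CAS value
78 | 
79 | print(command("cas counter 0 0 1 %s\r\n6\r\n" % cas_id))   # STORED
80 | print(command("cas counter 0 0 1 %s\r\n7\r\n" % cas_id))   # EXISTS (value changed since our gets)
81 | s.close()
82 | }}}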
62 |
63 | == delete ==
64 |
65 | Removes an item from the cache, if it exists.
66 |
67 | == incr/decr ==
68 |
69 | Increment and Decrement. If a stored item is the string representation of a 64bit integer, you may run incr or decr commands to modify that number. You may only incr or decr by positive values; they do not accept negative values.
70 |
71 | If a value does not already exist, incr/decr will fail.
72 |
73 | == Statistics ==
74 |
75 | There are a handful of commands that return counters and settings of the memcached server. These can be inspected via a large array of tools or simply by telnet or netcat. These are further explained in the protocol docs.
76 |
77 | === stats ===
78 |
79 | ye 'ole basic stats command.
80 |
81 | === stats items ===
82 |
83 | Returns some information, broken down by slab, about items stored in memcached.
84 |
85 | === stats slabs ===
86 |
87 | Returns more information, broken down by slab, about items stored in memcached. More centered to performance of a slab rather than counts of particular items.
88 |
89 | === stats sizes ===
90 |
91 | A special command that shows you how items would be distributed if slabs were broken into 32-byte buckets instead of your current number of slabs. Useful for determining how efficient your slab sizing is.
92 |
93 | *WARNING* this is a development command. As of 1.4 it is still the only command which will lock your memcached instance for some time. If you have many millions of stored items, it can become unresponsive for several minutes. Run this at your own risk. It is roadmapped to either make this feature optional or at least speed it up.
94 |
95 | == flush_all ==
96 |
97 | Invalidate all existing cache items. Optionally takes a parameter, which means to invalidate all items after N seconds have passed.
98 |
99 | This command does not pause the server, as it returns immediately. It does not free up or flush memory at all, it just causes all items to expire.
100 |
--------------------------------------------------------------------------------
/NewCommonFeatures.wiki:
--------------------------------------------------------------------------------
1 | #summary Dandelions
2 |
3 |
4 |
5 | Clients have a set of common features that they share. The intent here is to give you an overview of how typical clients behave, and some helpful features to look for.
6 |
7 | This section does not describe which clients implement what, but merely what most do implement.
8 |
9 | == Hashing ==
10 |
11 | All clients should be able to hash keys across multiple servers.
12 |
13 | == Consistent Hashing ==
14 |
15 | Most clients have the ability to use consistent hashing, either natively or via an external library.
16 |
17 | == Storing Binary Data or Strings ==
18 |
19 | If passed a flat string or binary data, all clients should be able to store these via set/add/etc commands.
20 |
21 | == Serialization of Data Structures ==
22 |
23 | Most clients are able to accept complex data structures when passed in via set/add/etc commands. They are serialized (usually via some form of native system), a special flag is set, and then the data is stored.
24 |
25 | Clients are *not* able to store all types of complex structures. Objects are usually not serializable, such as row objects returned from a mysql query. You must turn the data into a pure array, or hash/table type structure before being able to store or retrieve it.
26 |
27 | Since item flags are used when storing the item, the same client is able to know whether or not to deserialize a value before returning it on a 'get'. You don't have to do anything special.
28 |
29 | == Compression ==
30 |
31 | Most clients are able to compress data being sent to or from a server. If the data is over a certain size threshold, or if compression is specifically requested, they set a special flag bit, compress the data, and store it.
32 |
33 | Since item flags are used, the clients will automatically know whether or not to decompress the value on return.
34 |
35 | == Timeouts ==
36 |
37 | Various timeouts, including timeouts while waiting for a connection to establish, or timeouts while waiting for a response.
38 |
39 | == Mutations ==
40 |
41 | Standard mutations as listed in [NewCommands Commands]
42 |
43 | == Get ==
44 |
45 | Standard fetch commands as listed in [NewCommands Commands]
46 |
47 | == Multi-Get ==
48 |
49 | Most clients implement a form of multi-get. Exactly how the multi-get is implemented will vary a bit.
50 |
51 | Given a set of 10 keys you wish to fetch, if you have three servers, you may end up with 3-4 keys being fetched from each server.
52 |
53 | * Keys are first sorted into which servers they map onto
54 | * Gets are issued to each server with the list of keys for each. This attempts to be efficient.
55 | * Depending on the client, it might write to all servers in parallel, or it might contact one at a time and wait for responses before moving on.
56 |
57 | Find that your client does the latter? Complain to the author ;)
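58 | 
59 | A toy sketch of that key-sorting step (plain Python, using a simple hash-modulo mapping purely for illustration; real clients use their own, often consistent, hashing, and the server list here is made up):
60 | 
61 | {{{
62 | import zlib
63 | from collections import defaultdict
64 | 
65 | servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]
66 | keys = ["user:1:name", "user:1:age", "user:1:bio", "user:2:name", "front_page"]
67 | 
68 | by_server = defaultdict(list)
69 | for key in keys:
70 |     by_server[servers[zlib.crc32(key.encode()) % len(servers)]].append(key)
71 | 
72 | for server, server_keys in by_server.items():
73 |     # a real client would now issue: "get " + " ".join(server_keys) + "\r\n"
74 |     print(server, "->", server_keys)
75 | }}}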
58 |
59 | = Less Common Features =
60 |
61 | == Get-By-Group-Key ==
62 |
63 | In the above case of Multi-Get, sometimes having your keys spread out among all servers doesn't make quite as much sense.
64 |
65 | For example, we're building a list of keys to fetch to display a user's profile page. Their name, age, bio paragraph, IM contact info, etc.
66 |
67 | If you have 50 memcached servers, issuing a Multi-Get will end up writing each key to individual servers.
68 |
69 | However, you may choose to store data by an intermediate "key". This group key is used by the client to discover which server to store or retrieve the data. Then any keys you supply are all sent to that same server. The group key is *not* retained on the server, or added to the existing key in any way.
70 |
71 | So in a final case, we have 50 memcached servers. Keys pertaining to a single user's profile are stored under the group key, which is their userid. Issuing a Multi-Get for all of their data will end up sending a single command to a single server.
72 |
73 | This isn't the best for everything. Sufficiently large requests may be more efficiently split among servers. If you're fetching a fixed set of small values, it'll be more efficient on your network to send a few packets back and forth to a single server, instead of many small packets from many servers.
74 |
75 | == Noreply/Quiet ==
76 |
77 | Depending on whether noreply is implemented via the ASCII protocol or the binary protocol, it may be difficult to troubleshoot errors, so be careful.
78 |
79 | Noreply is used when you wish to issue mutations to a server but not sit around waiting for the response. This can help cut roundtrip wait times to other servers. You're able to blindly set items, or delete items, in cases where you don't really need to know if the command succeeds or not.
80 |
81 | == Multi-Set ==
82 |
83 | New in some binary protocol supporting clients. Multi-Set is an extension of the "quiet" mode noted above. With the binary protocol many commands may be packed together and issued in bulk. The server will respond only when the responses are interesting (such as failure conditions). This can end up saving a lot of time while updating many items.
84 |
--------------------------------------------------------------------------------
/NewHardware.wiki:
--------------------------------------------------------------------------------
1 | #summary Bling Bling
2 |
3 |
4 |
5 | = Hardware Requirements =
6 |
7 | Memcached is easy to spec out hardware for. In short, it is generally low on CPU usage, will take as much memory as you give it, and network usage will vary from mild to moderate, depending on the average size of your items.
8 |
9 | == CPU Requirements ==
10 |
11 | Memcached is typically light on CPU usage, due to its goal to respond very fast. Memcached is multithreaded, defaulting to 4 worker threads. This doesn't necessarily mean you have to run 100 cores to have memcached meet your needs. If you're going to need to rely on memcached's multithreading, you'll know it. For the common case, any bits of CPU anywhere are usually sufficient.
12 |
13 | == RAM Requirements ==
14 |
15 | The major point of memcached is to sew together sections of memory from multiple hosts and make your app see it as one large section of memory. The more memory the better. However, don't take memory away from other services that might benefit from it.
16 |
17 | It is helpful to have each memcached server have roughly the same amount of memory available. Cluster uniformity means you can simply add and remove servers without having to care about one's particular "weight", or having one server hurt more if it is lost.
18 |
19 | === Avoid Swapping ===
20 |
21 | Assign physical memory, with a few percent extra, to a memcached server. Do not over-allocate memory and expect swap to save you. Performance will be very, very poor. Take extra care to monitor if your server is using swap, and tune if necessary.
22 |
23 | === Is High Speed RAM Necessary? ===
24 |
25 | Not so much, no. Getting that extra high speed memory will not likely net you visible benefits, unless you are issuing extremely high read traffic to memcached.
26 |
27 | = Hardware Layouts =
28 |
29 | == Running Memcached on Webservers ==
30 |
31 | An easy layout is to use spare memory on webservers or compute nodes that you may have. If you buy a webserver with 4G of RAM, but your app and OS only use 2G of RAM at most, you could assign 1.5G or more to memcached instances.
32 |
33 | This has a good tradeoff of spreading memory out more thinly, so losing any one webserver will not cause as much pain.
34 |
35 | Caveats being extra maintenance, and keeping an eye on your application's multi-get usage, as it can end up accessing every memcached in your list. You also run a risk of pushing a machine into swap or killing memcached if your app has a memory leak. Often it's a good idea to run hosts with very little swap, or no swap at all. Better to let an active service die than have it turn into a tarpit.
36 |
37 | == Running Memcached on Databases ==
38 |
39 | Not a great idea. Don't do it. If you have a database host, give as much ram as possible to it. When cache misses do happen, you'll get more benefit from ensuring your indexes and data are already in memory.
40 |
41 | == Using Dedicated Hosts ==
42 |
43 | Using dedicated hardware for memcached means you don't have to worry about other programs on the machine interfering with memcached. You can put a lot of memory (64G+) into a single host and have fewer machines for your memory requirements.
44 |
45 | This has an added benefit of being able to more easily expand large amounts of memory space. Instead of adding new webservers that may go idle, you can add specialized machines to throw gobs of RAM at the problem.
46 |
47 | This ends up having several caveats. The more you compress down your memcached cluster, the more pain you will feel when a host dies.
48 |
49 | Let's say you have a cache hitrate of 90%. If you have 10 memcached servers, and 1 dies, your hitrate may drop to 82% or so. If 10% of your cache misses are getting through, having that jump to 18% or 20% means your backend is suddenly handling *twice* as many requests as before. Actual impact will vary since databases are still decent at handling repeat queries, and your typical cache miss will often be items that the database would have to look up regardless. Still, *twice*!
50 |
51 | So let's say you buy a bunch of servers with 144G of RAM, but you can only afford 4 of them. Now when you lose a single server, 25% of your cache goes away, and your hitrate can tank even harder.
52 |
53 | == Capacity Planning ==
54 |
55 | Given the above notes on hardware layouts, be sure you practice good capacity planning. Get an idea for how many servers can be lost before your application is overwhelmed. Make sure you always have more than that.
56 |
57 | If you cannot afford to take down any memcached instances, upgrades (hardware or software) and normal failures become excessively painful. Save yourself some anguish and plan ahead.
58 |
59 | == Network ==
60 |
61 | Network requirements will vary greatly by the average size of your memcached items. Your application should aim to keep them small, as it can mean the difference between being fine with gigabit inter-switch uplinks, or being completely toast.
62 |
63 | Most deployments will have low requirements (< 10mbps per instance), but a heavily hit service can be quite challenging to support. That said, if you're resorting to infiniband or 10 gigabit ethernet to hook up your memcached instances, you could probably benefit from spreading them out more.
64 |
65 |
--------------------------------------------------------------------------------
/MemoryManagement.wiki:
--------------------------------------------------------------------------------
1 | #summary Description of memcached memory management
2 |
3 | = Introduction =
4 |
5 | This page describes the theory behind the slabber. It's somewhat dated and needs a bit of work.
6 |
7 | The following section is from an email from Anatoly Vorobey.
8 |
9 | = About the Slabber =
10 |
11 | There's no documentation yet, and it's worth mentioning that this
12 | subsystem is the most active area of memcached under development at the
13 | moment (however, all the changes to it won't modify the way memcached
14 | presents itself towards clients, they're primarily directed at making
15 | memcached use memory more efficiently).
16 |
17 | Here's a quick recap of what it does now and what is being worked
18 | on.
19 |
20 | The primary goal of the slabs subsystem in memcached was to eliminate
21 | memory fragmentation issues totally by using fixed-size memory chunks
22 | coming from a few predetermined size classes (early versions of
23 | memcached relied on malloc()'s handling of fragmentation which proved
24 | woefully inadequate for our purposes). For instance, suppose
25 | we decide at the outset that the list of possible sizes is: 64 bytes,
26 | 128 bytes, 256 bytes, etc. - doubling all the way up to 1Mb. For each
27 | size class in this list (each possible size) we maintain a list of free
28 | chunks of this size. Whenever a request comes for a particular size,
29 | it is rounded up to the closest size class and a free chunk is taken
30 | from that size class. In the above example, if you request from the
31 | slabs subsystem 100 bytes of memory, you'll actually get a chunk 128
32 | bytes worth, from the 128-bytes size class. If there are no free chunks
33 | of the needed size at the moment, there are two ways to get one: 1) free
34 | an existing chunk in the same size class, using LRU queues to free the
35 | least needed objects; 2) get more memory from the system, which we
36 | currently always do in _slabs_ of 1Mb each; we malloc() a slab, divide
37 | it into chunks of the needed size, and use them.
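38 | 
39 | As a toy illustration of the rounding described above (plain Python, using the doubling example sizes from this email; real memcached derives its class list from a configurable growth factor, so the numbers will differ):
40 | 
41 | {{{
42 | SIZE_CLASSES = [64 * 2 ** i for i in range(15)]   # 64, 128, 256, ... 1048576 (1Mb)
43 | 
44 | def chunk_size_for(request_bytes):
45 |     # round the request up to the nearest size class
46 |     for size in SIZE_CLASSES:
47 |         if request_bytes <= size:
48 |             return size
49 |     raise ValueError("larger than the maximum item size")
50 | 
51 | print(chunk_size_for(100))   # 128: a 100-byte request is served from the 128-byte class
52 | print(chunk_size_for(129))   # 256: just past a boundary, so nearly half the chunk is wasted
53 | }}}
54 | 
55 | A 1Mb slab assigned to the 128-byte class would then be cut into 8192 chunks of 128 bytes each.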
38 |
39 | The tradeoff is between memory fragmentation and memory utilisation. In
40 | the scheme we're now using, we have zero fragmentation, but a relatively
41 | high percentage of memory is wasted. The most efficient way to reduce
42 | the waste is to use a list of size classes that closely matches (if
43 | that's at all possible) common sizes of objects that the clients
44 | of this particular installation of memcached are likely to store.
45 | For example, if your installation is going to store hundreds of
46 | thousands of objects of the size exactly 120 bytes, you'd be much better
47 | off changing, in the "naive" list of sizes outlined above, the class
48 | of 128 bytes to something a bit higher (because the overhead of
49 | storing an item, while not large, will push those 120-bytes objects over
50 | 128 bytes of storage internally, and will require using 256 bytes for
51 | each of them in the naive scheme, forcing you to waste almost 50% of
52 | memory). Such tinkering with the list of size classes is not currently
53 | possible with memcached, but enabling it is one of the immediate goals.
54 |
55 | Ideally, the slabs subsystem would analyze at runtime the common sizes
56 | of objects that are being requested, and would be able to modify the
57 | list of sizes dynamically to improve memory utilisation. This is not
58 | planned for the immediate future, however. What is planned is the
59 | ability to reassign slabs to different classes. Here's what this means.
60 | Currently, the total amount of memory allocated for each size class is
61 | determined by how clients interact with memcached during the initial
62 | phase of its execution, when it keeps malloc()'ing more slabs and
63 | dividing them into chunks, until it hits the specified memory limit
64 | (say, 2Gb, or whatever else was specified). Once it hits the limit, to
65 | allocate a new chunk it'll always delete an existing chunk of the same
66 | size (using LRU queues), and will never malloc() or free() any memory
67 | from/to the system. So if, for example, during those initial few hours
68 | of memcached's execution your clients mainly wanted to store very small
69 | items, the bulk of memory allocated will be divided into small-sized
70 | chunks, and the large size classes will get less memory; therefore the
71 | life-cycle of large objects you'll store in memcached will henceforth
72 | always be much shorter, with this instance of memcached (their LRU
73 | queues will be shorter and they'll be pushed out much more often). In
74 | general, if your system starts producing a different pattern of common
75 | object sizes, the memcached servers will become less efficient, unless
76 | you restart them. Slabs reassignment, which is the next feature being
77 | worked on, will ensure the server's ability to reclaim a slab (1Mb of
78 | memory) from one size class and put it into another size class, where
79 | it's needed more.
80 |
81 | = TLB =
82 |
83 |
84 | When started with -L memcached will try to enable large memory
85 | pages, and preallocate all memory up front. By using large memory
86 | pages memcached could reduce the number of TLB misses (depending
87 | on the access pattern), and hence improve performance.
88 |
89 | See http://en.wikipedia.org/wiki/Translation_lookaside_buffer for
90 | a description of TLB.
--------------------------------------------------------------------------------
/ReleaseNotes1411beta1.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.11-beta1
2 |
3 | = Memcached 1.4.11-beta1 Release Notes =
4 |
5 | Date: 2012-1-8
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.11_beta1.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This is a beta release for 1.4.11. Some changes remain to happen before 1.4.11
17 | goes final, but please help test the beta.
18 |
19 | == Fixes ==
20 |
21 | * Fix a race condition from 1.4.10 on item_remove
22 | * fix braindead linked list fail
23 | * close some idiotic race conditions
24 | * reap items on read for slab mover
25 | * initial slab automover
26 | * slab reassignment
27 | * clean do_item_get logic a bit. fix race.
28 | * clean up the do_item_alloc logic
29 | * shorten lock for item allocation more
30 | * Fix to build with cyrus sasl 2.1.25
31 |
32 |
33 | == New Features ==
34 |
35 | Slab page reassignment and bug fixes over 1.4.10.
36 |
37 | === Bug Fixes ===
38 |
39 | There were some race conditions and logic errors introduced in 1.4.10, they
40 | should be rare, but users are strongly encouraged to upgrade.
41 |
42 | === Slab Reassign ===
43 |
44 | Long running instances of memcached may run into an issue where all available
45 | memory has been assigned to a specific slab class (say items of roughly size
46 | 100 bytes). Later the application starts storing more of its data into a
47 | different slab class (items around 200 bytes). Memcached could not use the 100
48 | byte chunks to satisfy the 200 byte requests, and thus you would be able to
49 | store very few 200 byte items.
50 |
51 | 1.4.11 introduces the ability to reassign slab pages. This is a *beta* feature
52 | and the commands may change for the next few releases, so *please* keep this
53 | in mind. When the commands are finalized they will be noted in the release
54 | notes.
55 |
56 | Enable slab reassign on startup:
57 |
58 | {{{$ memcached -o slab_reassign}}}
59 |
60 | Once all memory has been assigned and used by items, you may use a command to
61 | reassign memory.
62 |
63 | {{{$ echo "slabs reassign 1 4" | nc localhost 11211}}}
64 |
65 | That will return an error code indicating success, or a need to retry later.
66 | Success does not mean that the slab was moved, but that a background thread
67 | will attempt to move the memory as quickly as it can.
68 |
69 | === Slab Automove ===
70 |
71 | While slab reassign is a manual feature, there is also the start of an
72 | automatic memory reassignment algorithm.
73 |
74 | {{{$ memcached -o slab_reassign,slab_automove}}}
75 |
76 | The above enables it on startup, and it may also be enabled or disabled at
77 | runtime:
78 |
79 | {{{$ echo "slabs automove 0" | nc localhost 11211}}}
80 |
81 | The algorithm is slow and conservative. If a slab class is seen as having the
82 | highest eviction count 3 times 10 seconds apart, it will take a page from a
83 | slab class which has had zero evictions in the last 30 seconds and move the
84 | memory.
85 |
86 | There are lots of cases where this will not be sufficient, and we invite the
87 | community to help improve upon the algorithm. Included in the source directory
88 | is `scripts/mc_slab_mover`. See perldoc for more information:
89 |
90 | {{{$ perldoc ./scripts/mc_slab_mover}}}
91 |
92 | It implements the same algorithm as built into memcached, and you may modify
93 | it to better suit your needs and improve on the script or port it to other
94 | languages. Please provide patches!
95 |
96 | === Slab Reassign Implementation ===
97 |
98 | Slab page reassignment requires some tradeoffs:
99 |
100 | * All items larger than 500k (even if they're under 730k) take 1MB of space
101 |
102 | * When memory is reassigned, all items that were in the 1MB page are evicted
103 |
104 | * When slab reassign is enabled, an extra background thread is used
105 |
106 | The first item will be improved in later releases, and is avoided if you start
107 | memcached without the -o slab_reassign option.
108 |
109 | === New Stats ===
110 |
111 | {{{
112 | STAT slab_reassign_running 0
113 | STAT slabs_moved 0
114 | }}}
115 |
116 | slab_reassign_running indicates if the slab thread is attempting to move a
117 | page. It may need to wait for some memory to free up, so it could take several
118 | seconds.
119 |
120 | slabs_moved is simply a count of how many pages have been successfully moved.
121 |
122 | == Contributors ==
123 |
124 | The following people contributed to this release since 1.4.10.
125 |
126 | Note that this is based on who contributed changes, not how they were
127 | done. In many cases, a code snippet on the mailing list or a bug
128 | report ended up as a commit with your name on it.
129 |
130 | Note that this is just a summary of how many changes each person made
131 | which doesn't necessarily reflect how significant each change was.
132 | For details on what led up to a branch, either grab the git repo and
133 | look at the output of `git log 1.4.10..1.4.11-beta1` or use a web view.
134 |
135 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
136 | * Web View: http://github.com/memcached/memcached/commits/1.4.11-beta1
137 |
138 | {{{
139 | 10 dormando
140 | 1 Steve Wills
141 |
142 | }}}
143 |
144 | == Control ==
145 |
--------------------------------------------------------------------------------
/ReleaseNotes1411rc1.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.11-rc1
2 |
3 | = Memcached 1.4.11-rc1 Release Notes =
4 |
5 | Date: 2012-01-11
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.11_rc1.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | This is a release candidate for 1.4.11. Please help test!
17 |
18 | == Fixes ==
19 |
20 | * bug237: Don't compute incorrect argc for timedrun
21 | * fix 'age' stat for stats items
22 | * binary deletes were not ticking stats counters
23 | * Fix a race condition from 1.4.10 on item_remove
24 | * close some idiotic race conditions
25 | * initial slab automover
26 | * slab reassignment
27 | * clean do_item_get logic a bit. fix race.
28 | * clean up the do_item_alloc logic
29 | * shorten lock for item allocation more
30 | * Fix to build with cyrus sasl 2.1.25
31 |
32 |
33 | == New Features ==
34 |
35 | Slab page reassignment and bug fixes over 1.4.10.
36 |
37 | === Bug Fixes ===
38 |
39 | There were some race conditions and logic errors introduced in 1.4.10. They
40 | should be rare, but users are strongly encouraged to upgrade.
41 |
42 | === Slab Reassign ===
43 |
44 | Long running instances of memcached may run into an issue where all available
45 | memory has been assigned to a specific slab class (say items of roughly size
46 | 100 bytes). Later the application starts storing more of its data into a
47 | different slab class (items around 200 bytes). Memcached could not use the 100
48 | byte chunks to satisfy the 200 byte requests, and thus you would be able to
49 | store very few 200 byte items.
50 |
51 | 1.4.11 introduces the ability to reassign slab pages. This is a *beta* feature
52 | and the commands may change for the next few releases, so *please* keep this
53 | in mind. When the commands are finalized they will be noted in the release
54 | notes.
55 |
56 | Enable slab reassign on startup:
57 |
58 | {{{$ memcached -o slab_reassign}}}
59 |
60 | Once all memory has been assigned and used by items, you may use a command to
61 | reassign memory.
62 |
63 | {{{$ echo "slabs reassign 1 4" | nc localhost 11211}}}
64 |
65 | That will return a response indicating success, or that you need to retry
66 | later. Success does not mean that the slab was moved, but that a background
67 | thread will attempt to move the memory as quickly as it can.
68 |
69 | === Slab Automove ===
70 |
71 | While slab reassign is a manual feature, there is also the start of an
72 | automatic memory reassignment algorithm.
73 |
74 | {{{$ memcached -o slab_reassign,slab_automove}}}
75 |
76 | The above enables it on startup, and it may also be enabled or disabled at
77 | runtime:
78 |
79 | {{{$ echo "slabs automove 0" | nc localhost 11211}}}
80 |
81 | The algorithm is slow and conservative. If a slab class is seen as having the
82 | highest eviction count three times, 10 seconds apart, it will take a page from
83 | a slab class which has had zero evictions in the last 30 seconds and move the
84 | memory.
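
To see for yourself which classes are under eviction pressure, the per-class
eviction counters from `stats items` are a good place to look (a sketch
assuming the default port; the class numbers and counts are made up):

{{{
$ echo "stats items" | nc localhost 11211 | grep ':evicted '
STAT items:1:evicted 0
STAT items:5:evicted 1304
STAT items:6:evicted 12
}}}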
85 |
86 | There are lots of cases where this will not be sufficient, and we invite the
87 | community to help improve upon the algorithm. Included in the source directory
88 | is `scripts/mc_slab_mover`. See perldoc for more information:
89 |
90 | {{{$ perldoc ./scripts/mc_slab_mover}}}
91 |
92 | It implements the same algorithm that is built into memcached. You may modify
93 | it to better suit your needs, improve on the script, or port it to other
94 | languages. Please provide patches!
95 |
96 | === Slab Reassign Implementation ===
97 |
98 | Slab page reassignment requires some tradeoffs:
99 |
100 | * All items larger than 500k (even if they're under 730k) take 1MB of space
101 |
102 | * When memory is reassigned, all items that were in the 1MB page are evicted
103 |
104 | * When slab reassign is enabled, an extra background thread is used
105 |
106 | The first item will be improved in later releases, and is avoided if you start
107 | memcached without the -o slab_reassign option.
108 |
109 | === New Stats ===
110 |
111 | {{{
112 | STAT slab_reassign_running 0
113 | STAT slabs_moved 0
114 | }}}
115 |
116 | slab_reassign_running indicates if the slab thread is attempting to move a
117 | page. It may need to wait for some memory to free up, so it could take several
118 | seconds.
119 |
120 | slabs_moved is simply a count of how many pages have been successfully moved.
121 |
122 | == Contributors ==
123 |
124 | The following people contributed to this release since 1.4.10.
125 |
126 | Note that this is based on who contributed changes, not how they were
127 | done. In many cases, a code snippet on the mailing list or a bug
128 | report ended up as a commit with your name on it.
129 |
130 | Note that this is just a summary of how many changes each person made
131 | which doesn't necessarily reflect how significant each change was.
132 | For details on what led up to a branch, either grab the git repo and
133 | look at the output of `git log 1.4.10..1.4.11-rc1` or use a web view.
134 |
135 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
136 | * Web View: http://github.com/memcached/memcached/commits/1.4.11-rc1
137 |
138 | {{{
139 | 15 dormando
140 | 1 Dustin Sallings
141 | 1 Steve Wills
142 | }}}
143 |
144 | == Control ==
145 |
--------------------------------------------------------------------------------
/ReleaseNotes148rc1.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.8-rc1
2 |
3 | = Memcached 1.4.8-rc1 Release Notes =
4 |
5 | Date: 2011-09-28
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.8_rc1.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Feature and bugfix release. New Touch commands, counters, and a change to
17 | connection limit functionality.
18 |
19 | == Fixes ==
20 |
21 | * Upgrade stats items counters to 64bit
22 | * Add new stats expired_unfetched, evicted_unfetched
23 | * Allow setting initial size of the hash table
24 | * Expose stats for the internal hash table
25 | * bug220: incr would sometimes return the previous item's CAS
26 | * Fixed bug on multi get processing
27 | * Experimental maxconns_fast option
28 | * Add an ASCII touch command
29 | * Add binary GATK/GATKQ
30 | * Backport binary TOUCH/GAT/GATQ commands
31 | * Issue 221: Increment treats leading spaces as 0
32 | * Fix compile error on OS X
33 |
34 | == New Features ==
35 |
36 | === Touch Commands ===
37 |
38 | Binary Touch/GAT commands were backported from 1.6. New GATK/GATKQ commands
39 | were added for completeness. Finally, an Ascii protocol `touch` command was
40 | also added.
41 |
42 | Touch commands are used to update the expiration time of an existing item
43 | without fetching it. Say you have a counter set to expire in five minutes, but
44 | you may want to push back the expiration time by five more minutes, or change
45 | it to 15 minutes. With touch, you can do that.
46 |
47 | The binary protocol also adds GAT commands (Get And Touch), which allow you to
48 | fetch an item and simultaneously update its expiration time.
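
As a rough sketch of the ASCII form (the key name and expiration time here are
made up), `touch` takes a key and a new expiration time in seconds, and
replies with TOUCHED or NOT_FOUND:

{{{
touch mycounter 900
TOUCHED
}}}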
49 |
50 | === Fast Connection Limit Handling ===
51 |
52 | A new option, `-o`, has appeared! With `-o`, new, experimental, or highly
53 | specific options are given full names. The first of these is `maxconns_fast`:
54 |
55 | {{{$ memcached -o maxconns_fast}}}
56 |
57 | This option changes the way the maximum connection limit is handled. By
58 | default, when memcached runs out of file descriptors, it stops listening for
59 | new connections. When this happens, connections will sit in the listen backlog
60 | (defaulting to 1024, and adjustable with the -b option). Once some connections
61 | close off, memcached will start accepting new connections again and they will
62 | be served.
63 |
64 | This is undesirable, as it can cause clients to stall or time out for a long
65 | period of time; long enough that it may be quicker to simply treat the request
66 | as a cache miss.
67 |
68 | When a client connects and memcached is configured with maxconns_fast, it
69 | writes an error to the client and immediately closes the connection. This is
70 | similar to how MySQL operates, whereas the default is similar to Apache.
71 |
72 | It is experimental as it is unknown how clients will handle this change.
73 | Please help test and report any issues to upstream client maintainers!
74 |
75 | === Internal Hash Table ===
76 |
77 | {{{
78 | STAT hash_power_level 16
79 | STAT hash_bytes 524288
80 | STAT hash_is_expanding 0
81 | }}}
82 |
83 | Now it's possible to see how much memory the hash table itself uses. This can
84 | be useful for deciding on RAM limits for very large instances.
85 |
86 | There is also a new option for setting the size of the hash table on startup:
87 |
88 | {{{$ memcached -o hashpower=20}}}
89 |
90 | If you run instances with many millions of items, and items are added very
91 | rapidly on a restart, it may be desirable to _presize_ the hash table.
92 | Normally memcached will dynamically grow the hash table as needed, and this
93 | operation is generally very low overhead. If you put decals on your '96 Mazda
94 | grapefruit shootermobile, you may like this option.
95 |
96 | Just examine the hash_power_level before restarting your instances, and adjust
97 | the startup command.
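
In other words, something along these lines (the values are only illustrative;
the default port is assumed):

{{{
$ echo "stats" | nc localhost 11211 | grep hash_power_level
STAT hash_power_level 20
$ memcached -m 2048 -o hashpower=20
}}}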
98 |
99 | === expired_unfetched, evicted_unfetched ===
100 |
101 | These two stats count items whose memory was reused after they expired, and
102 | valid items which were evicted, where in either case the item was never
103 | touched by get/incr/append/etc operations in the meantime.
104 |
105 | Useful for seeing how many wasted items are being set and then rolling out
106 | through the bottom of the LRUs.
107 |
108 | If these counters are high, you may consider auditing what is being put into
109 | the cache.
110 |
111 | == Contributors ==
112 |
113 | The following people contributed to this release since 1.4.7.
114 |
115 | Note that this is based on who contributed changes, not how they were
116 | done. In many cases, a code snippet on the mailing list or a bug
117 | report ended up as a commit with your name on it.
118 |
119 | Note that this is just a summary of how many changes each person made
120 | which doesn't necessarily reflect how significant each change was.
121 | For details on what led up to a branch, either grab the git repo and
122 | look at the output of `git log 1.4.7..1.4.8-rc1` or use a web view.
123 |
124 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
125 | * Web View: http://github.com/memcached/memcached/commits/1.4.8-rc1
126 |
127 | {{{
128 | 11 dormando
129 | 2 Daniel Pañeda
130 | 2 Trond Norbye
131 | 1 Dustin Sallings
132 | }}}
133 |
134 |
135 | == Control ==
136 |
--------------------------------------------------------------------------------
/ReleaseNotes1411.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.11
2 |
3 | = Memcached 1.4.11 Release Notes =
4 |
5 | Date: 2012-01-16
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.11.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Memcached 1.4.11. Fixes race conditions and crashes introduced in 1.4.10. Adds
17 | the ability to rebalance and reassign slab memory.
18 |
19 | == Fixes ==
20 |
21 | * bug237: Don't compute incorrect argc for timedrun
22 | * fix 'age' stat for stats items
23 | * binary deletes were not ticking stats counters
24 | * Fix a race condition from 1.4.10 on item_remove
25 | * close some idiotic race conditions
26 | * initial slab automover
27 | * slab reassignment
28 | * clean do_item_get logic a bit. fix race.
29 | * clean up the do_item_alloc logic
30 | * shorten lock for item allocation more
31 | * Fix to build with cyrus sasl 2.1.25
32 |
33 |
34 | == New Features ==
35 |
36 | Slab page reassignment and bug fixes over 1.4.10.
37 |
38 | === Bug Fixes ===
39 |
40 | There were some race conditions and logic errors introduced in 1.4.10. They
41 | should be rare, but users are strongly encouraged to upgrade.
42 |
43 | === Slab Reassign ===
44 |
45 | Long running instances of memcached may run into an issue where all available
46 | memory has been assigned to a specific slab class (say items of roughly size
47 | 100 bytes). Later the application starts storing more of its data into a
48 | different slab class (items around 200 bytes). Memcached could not use the 100
49 | byte chunks to satisfy the 200 byte requests, and thus you would be able to
50 | store very few 200 byte items.
51 |
52 | 1.4.11 introduces the ability to reassign slab pages. This is a *beta* feature
53 | and the commands may change for the next few releases, so *please* keep this
54 | in mind. When the commands are finalized they will be noted in the release
55 | notes.
56 |
57 | Slab reassignment can only be enabled at start time:
58 |
59 | {{{$ memcached -o slab_reassign}}}
60 |
61 | Once all memory has been assigned and used by items, you may use a command to
62 | reassign memory.
63 |
64 | {{{$ echo "slabs reassign 1 4" | nc localhost 11211}}}
65 |
66 | That will return a response indicating success, or that you need to retry
67 | later. Success does not mean that the slab was moved, but that a background
68 | thread will attempt to move the memory as quickly as it can.
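
The two arguments are slab class ids (source, then destination). One way to
pick them is to look at `stats slabs`, which shows how many pages each class
currently holds (a sketch; the default port and the numbers below are only
illustrative):

{{{
$ echo "stats slabs" | nc localhost 11211 | grep total_pages
STAT 1:total_pages 52
STAT 4:total_pages 1
$ echo "slabs reassign 1 4" | nc localhost 11211
}}}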
69 |
70 | === Slab Automove ===
71 |
72 | While slab reassign is a manual feature, there is also the start of an
73 | automatic memory reassignment algorithm.
74 |
75 | {{{$ memcached -o slab_reassign,slab_automove}}}
76 |
77 | The above enables it on startup. slab_automove requires that slab_reassign be
78 | enabled first.
79 |
80 | Automove itself may also be enabled or disabled at runtime:
81 |
82 | {{{$ echo "slabs automove 0" | nc localhost 11211}}}
83 |
84 | The algorithm is slow and conservative. If a slab class is seen as having the
85 | highest eviction count three times, 10 seconds apart, it will take a page from
86 | a slab class which has had zero evictions in the last 30 seconds and move the
87 | memory.
88 |
89 | There are lots of cases where this will not be sufficient, and we invite the
90 | community to help improve upon the algorithm. Included in the source directory
91 | is `scripts/mc_slab_mover`. See perldoc for more information:
92 |
93 | {{{$ perldoc ./scripts/mc_slab_mover}}}
94 |
95 | It implements the same algorithm that is built into memcached. You may modify
96 | it to better suit your needs, improve on the script, or port it to other
97 | languages. Please provide patches!
98 |
99 | === Slab Reassign Implementation ===
100 |
101 | Slab page reassignment requires some tradeoffs:
102 |
103 | * All items larger than 500k (even if they're under 730k) take 1MB of space
104 |
105 | * When memory is reassigned, all items that were in the 1MB page are evicted
106 |
107 | * When slab reassign is enabled, an extra background thread is used
108 |
109 | The first item will be improved in later releases, and is avoided if you start
110 | memcached without the -o slab_reassign option.
111 |
112 | === New Stats ===
113 |
114 | {{{
115 | STAT slab_reassign_running 0
116 | STAT slabs_moved 0
117 | }}}
118 |
119 | slab_reassign_running indicates if the slab thread is attempting to move a
120 | page. It may need to wait for some memory to free up, so it could take several
121 | seconds.
122 |
123 | slabs_moved is simply a count of how many pages have been successfully moved.
124 |
125 | == Contributors ==
126 |
127 | The following people contributed to this release since 1.4.10.
128 |
129 | Note that this is based on who contributed changes, not how they were
130 | done. In many cases, a code snippet on the mailing list or a bug
131 | report ended up as a commit with your name on it.
132 |
133 | Note that this is just a summary of how many changes each person made
134 | which doesn't necessarily reflect how significant each change was.
135 | For details on what led up to a branch, either grab the git repo and
136 | look at the output of `git log 1.4.10..1.4.11` or use a web view.
137 |
138 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
139 | * Web View: http://github.com/memcached/memcached/commits/1.4.11
140 |
141 | {{{
142 | 15 dormando
143 | 1 Dustin Sallings
144 | 1 Steve Wills
145 | }}}
146 |
147 | == Control ==
148 |
--------------------------------------------------------------------------------
/NewUserInternals.wiki:
--------------------------------------------------------------------------------
1 | #summary No Guts No Glory
2 |
3 |
4 |
5 | It is important that developers using memcached understand a little bit about how it works internally. While it can be a waste to overfocus on the bits and bytes, understanding the underlying bits becomes invaluable as your experience grows.
6 |
7 | Understanding memory allocation and evictions, and this particular type of LRU is most of what you need to know.
8 |
9 | == How Memory Gets Allocated For Items ==
10 |
11 | Memory assigned via the `-m` commandline argument to memcached is reserved for item data storage. The primary storage is broken up (by default) into 1 megabyte pages. Each `page` is then assigned into `slab classes` as necessary, then cut into chunks of a specific size for that `slab class`.
12 |
13 | Once a page is assigned to a class, it is *never* moved. If your access patterns end up putting 80% of your pages in class 3, there will be less memory available for class 4. The best way to think about this is that memcached is actually many smaller individual caches. Each class has its own set of statistical counters, and its own LRU.
14 |
15 | Classes, sizes, and chunks are shown best by starting up memcached with `-vv`:
16 |
17 | {{{
18 | $ ./memcached -vv
19 | slab class 1: chunk size 80 perslab 13107
20 | slab class 2: chunk size 104 perslab 10082
21 | slab class 3: chunk size 136 perslab 7710
22 | slab class 4: chunk size 176 perslab 5957
23 | slab class 5: chunk size 224 perslab 4681
24 | slab class 6: chunk size 280 perslab 3744
25 | slab class 7: chunk size 352 perslab 2978
26 | slab class 8: chunk size 440 perslab 2383
27 | slab class 9: chunk size 552 perslab 1899
28 | slab class 10: chunk size 696 perslab 1506
29 | [...etc...]
30 | }}}
31 |
32 | In slab class 1, each chunk is 80 bytes, and each page can then contain 13,107 chunks (or items). This continues all the way up to 1 megabyte.
33 |
34 | When storing items, they are pushed into the slab class of the nearest fit. If your key + misc data + value is 50 bytes total, it will go into class 1, with an overhead loss of 30 bytes. If your data is 90 bytes total, it will go into class 2, with an overhead of 14 bytes.
35 |
36 | You can adjust the slab classes with `-f` and inspect them in various ways, but those are more advanced topics for when you need them. It's best to be aware of the basics because they can bite you.
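
For example, a smaller growth factor produces more, finer-grained classes (the
`1.08` here is just an illustrative value; `-vv` prints the resulting classes
as shown above):

{{{$ memcached -f 1.08 -vv}}}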
37 |
38 | == What Other Memory Is Used ==
39 |
40 | Memcached uses chunks of memory for other functions as well. There is overhead in the hash table it uses to look up your items through. Each connection uses a few small buffers as well. This shouldn't add up to more than a few % extra memory over your specified `-m` limit, but keep in mind that it's there.
41 |
42 | == When Memory Is Reclaimed ==
43 |
44 | Memory for an item is not actively reclaimed. If you store an item and it expires, it sits in the LRU cache at its position until it falls to the end and is reused.
45 |
46 | However, if you fetch an expired item, memcached will find the item, notice that it's expired, and free its memory. This gives you the common case of normal cache churn reusing its own memory.
47 |
48 | Items can also be evicted to make way for new items that need to be stored, or expired items may be discovered and their memory reused.
49 |
50 | == How Much Memory Will an Item Use ==
51 |
52 | An item will use space for the full length of its key, the internal datastructure for an item, and the length of the data.
53 |
54 | You can discover how large an Item is by compiling memcached on your system, then running the "./sizes" utility which is built. On a 32bit system this may look like 32 bytes for items without CAS (server started with -C), and 40 bytes for items with CAS. 64bit systems will be a bit higher due to needing larger pointers. However you gain a lot more flexibility with the ability to put tons of ram into a 64bit box :)
55 |
56 | {{{
57 | $ ./sizes
58 | Slab Stats 56
59 | Thread stats 176
60 | Global stats 108
61 | Settings 88
62 | Item (no cas) 32
63 | Item (cas) 40
64 | Libevent thread 96
65 | Connection 320
66 | ----------------------------------------
67 | libevent thread cumulative 11472
68 | Thread stats cumulative 11376
69 | }}}
70 |
71 | == When Are Items Evicted ==
72 |
73 | Items are evicted if they have not expired (an expiration time of 0 or some time in the future), the slab class is completely out of free chunks, and there are no free pages to assign to a slab class.
74 |
75 | == How the LRU Decides What to Evict ==
76 |
77 | Memory is also reclaimed when it's time to store a new item. If there are no free chunks, and no free pages in the appropriate slab class, memcached will look at the end of the LRU for an item to "reclaim". It will search the last few items in the tail for one which has already been expired, and is thus free for reuse. If it cannot find an expired item however, it will "evict" one which has not yet expired. This is then noted in several statistical counters.
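
The balance between reuse and forced evictions shows up in the general `stats`
counters (a sketch assuming the default port; the values are only
illustrative):

{{{
$ echo "stats" | nc localhost 11211 | grep -E 'reclaimed|evictions'
STAT evictions 31
STAT reclaimed 102492
}}}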
78 |
79 | == libevent + Socket Scalability ==
80 |
81 | Memcached uses [http://www.monkey.org/~provos/libevent/ libevent] for scalable sockets, allowing it to easily handle tens of thousands of connections. Each worker thread on memcached runs its own event loop and handles its own clients. They share the cache via some centralized locks, and spread out protocol processing.
82 |
83 | This scales very well. Some issues may be seen with extremely high loads (200,000+ operations per second), but if you hit any limits please let us know, as they're usually solvable :)
84 |
85 |
--------------------------------------------------------------------------------
/ReleaseNotes148.wiki:
--------------------------------------------------------------------------------
1 | #summary Release notes for Release 1.4.8
2 |
3 | = Memcached 1.4.8 Release Notes =
4 |
5 | Date: 2011-10-04
6 |
7 | == Download ==
8 |
9 | Download Link:
10 |
11 | http://memcached.googlecode.com/files/memcached-1.4.8.tar.gz
12 |
13 |
14 | == Overview ==
15 |
16 | Feature and bugfix release. New Touch commands, counters, and a change to
17 | connection limit functionality.
18 |
19 | Included is an important bugfix for binary protocol users. The binary get
20 | command was not activating the LRU algorithm. Fetching an item would not
21 | prevent it from being evicted early.
22 |
23 | == Fixes ==
24 |
25 | * Fix to write correct pid from start-memcached
26 | * Fix to enable LRU when using binary protocol
27 | * Upgrade stats items counters to 64bit
28 | * Add new stats expired_unfetched, evicted_unfetched
29 | * Allow setting initial size of the hash table
30 | * Expose stats for the internal hash table
31 | * bug220: incr would sometimes return the previous item's CAS
32 | * Fixed bug on multi get processing
33 | * Experimental maxconns_fast option
34 | * Add an ASCII touch command
35 | * Add binary GATK/GATKQ
36 | * Backport binary TOUCH/GAT/GATQ commands
37 | * Issue 221: Increment treats leading spaces as 0
38 | * Fix compile error on OS X
39 |
40 | == New Features ==
41 |
42 | === Touch Commands ===
43 |
44 | Binary Touch/GAT commands were backported from 1.6. New GATK/GATKQ commands
45 | were added for completeness. Finally, an Ascii protocol `touch` command was
46 | also added.
47 |
48 | Touch commands are used to update the expiration time of an existing item
49 | without fetching it. Say you have a counter set to expire in five minutes, but
50 | you may want to push back the expiration time by five more minutes, or change
51 | it to 15 minutes. With touch, you can do that.
52 |
53 | The binary protocol also adds GAT commands (Get And Touch), which allow you to
54 | fetch an item and simultaneously update its expiration time.
55 |
56 | === Fast Connection Limit Handling ===
57 |
58 | A new option, `-o`, has appeared! With `-o`, new, experimental, or highly
59 | specific options are given full names. The first of these is `maxconns_fast`:
60 |
61 | {{{$ memcached -o maxconns_fast}}}
62 |
63 | This option changes the way the maximum connection limit is handled. By
64 | default, when memcached runs out of file descriptors, it stops listening for
65 | new connections. When this happens, connections will sit in the listen backlog
66 | (defaulting to 1024, and adjustable with the -b option). Once some connections
67 | close off, memcached will start accepting new connections again and they will
68 | be served.
69 |
70 | This is undesirable, as it can cause clients to stall or time out for a long
71 | period of time; long enough that it may be quicker to simply treat the request
72 | as a cache miss.
73 |
74 | When a client connects and memcached is configured with maxconns_fast, it
75 | writes an error to the client and immediately closes the connection. This is
76 | similar to how MySQL operates, whereas the default is similar to Apache.
77 |
78 | It is experimental as it is unknown how clients will handle this change.
79 | Please help test and report any issues to upstream client maintainers!
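
For example, a sketch of combining it with a raised connection limit (the `-c`
value here is arbitrary):

{{{$ memcached -c 4096 -o maxconns_fast}}}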
80 |
81 | === Internal Hash Table ===
82 |
83 | {{{
84 | STAT hash_power_level 16
85 | STAT hash_bytes 524288
86 | STAT hash_is_expanding 0
87 | }}}
88 |
89 | Now it's possible to see how much memory the hash table itself uses. This can
90 | be useful for deciding on RAM limits for very large instances.
91 |
92 | There is also a new option for setting the size of the hash table on startup:
93 |
94 | {{{$ memcached -o hashpower=20}}}
95 |
96 | If you run instances with many millions of items, and items are added very
97 | rapidly on a restart, it may be desirable to _presize_ the hash table.
98 | Normally memcached will dynamically grow the hash table as needed, and this
99 | operation is generally very low overhead. If you put decals on your '96 Mazda
100 | grapefruit shootermobile, you may like this option.
101 |
102 | Just examine the hash_power_level before restarting your instances, and adjust
103 | the startup command.
104 |
105 | === expired_unfetched, evicted_unfetched ===
106 |
107 | These two stats count items whose memory was reused after they expired, and
108 | valid items which were evicted, where in either case the item was never
109 | touched by get/incr/append/etc operations in the meantime.
110 |
111 | Useful for seeing how many wasted items are being set and then rolling out
112 | through the bottom of the LRUs.
113 |
114 | If these counters are high, you may consider auditing what is being put into
115 | the cache.
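
Both counters show up in the general `stats` output, e.g. (the values are only
illustrative):

{{{
STAT expired_unfetched 129851
STAT evicted_unfetched 4771
}}}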
116 |
117 | == Contributors ==
118 |
119 | The following people contributed to this release since 1.4.7.
120 |
121 | Note that this is based on who contributed changes, not how they were
122 | done. In many cases, a code snippet on the mailing list or a bug
123 | report ended up as a commit with your name on it.
124 |
125 | Note that this is just a summary of how many changes each person made
126 | which doesn't necessarily reflect how significant each change was.
127 | For details on what led up to a branch, either grab the git repo and
128 | look at the output of `git log 1.4.7..1.4.8` or use a web view.
129 |
130 | * Repo list: http://code.google.com/p/memcached/wiki/DevelopmentRepos
131 | * Web View: http://github.com/memcached/memcached/commits/1.4.8
132 |
133 | {{{
134 | 12 dormando
135 | 2 Daniel Pañeda
136 | 2 Trond Norbye
137 | 1 Dustin Sallings
138 | 1 Nate
139 | }}}
140 |
141 |
142 | == Control ==
143 |
--------------------------------------------------------------------------------
/NewConfiguringClient.wiki:
--------------------------------------------------------------------------------
1 | #summary You Can't Get There From Here
2 |
3 |
4 |
5 | = Common Client Configurables =
6 |
7 | Most clients are similar in some important ways. They may implement some ideas differently, but they contain many common concepts to twiddle and fiddle.
8 |
9 | == Hashing ==
10 |
11 | All clients support at least one method of "hashing" keys among servers. Keep in mind that most of these defaults are not compatible with each other. If you're using the perl Cache::Memcached and expect to resolve keys to servers the same way as a PHP client, you're in for trouble.
12 |
13 | There are exceptions to this, as clients based on [http://libmemcached.org libmemcached] should all have access to the same hashing algorithms.
14 |
15 | == Consistent Hashing ==
16 |
17 | Consistent Hashing is a model that allows for more stable distribution of keys given addition or removal of servers. In a normal hashing algorithm, changing the number of servers can cause many keys to be remapped to different servers, causing huge sets of cache misses. [http://en.wikipedia.org/wiki/Consistent_hashing Consistent Hashing] describes methods for mapping keys to a list of servers, where adding or removing servers causes a very minimal shift in where keys map to.
18 |
19 | So in short, with a normal hashing function, adding an eleventh server may cause 40%+ of your keys to suddenly point to different servers than normal.
20 |
21 | However, with a consistent hashing algorithm, adding an eleventh server should cause less than 10% of your keys to be reassigned. In practice this will vary, but it certainly helps.
22 |
23 | TODO: I know there's a better discussion of this that's linkable. help find it? I can never describe it well enough.
24 |
25 | == Configuring Servers *Consistently* ==
26 |
27 | When adding servers to your configuration, pay attention that the list of servers you supply to your clients is exactly the same across the board.
28 |
29 | If you have three webservers, and each webserver is also running a memcached instance, you may think it would be clever to address the "local" instance as "localhost". This will *not* work as expected, as the servers are now different between webservers. This means webserver 1 will map keys differently than server 2, causing mass hysteria among your users and business development staff.
30 |
31 | The ordering is also important. Some clients will sort the server list you supply to them, but others will not. If you have servers "A, B, C", list them as "A, B, C" everywhere.
32 |
33 | Use Puppet/Chef/rsync/whatever is necessary to ensure these files are in sync :)
34 |
35 | == "Weighting" ==
36 |
37 | Given an imperfect world, sometimes you may have one memcached instance that has more RAM available than others. Some clients will allow you to apply more "weight" to the larger server. Others will allow you to specify one server multiple times to give it more chances of being selected.
38 |
39 | Either way, you'd probably do well to verify that the "weighting" is doing what you expect it to do.
40 |
41 | == Failure, or Failover ==
42 |
43 | What will your client do when a server is unavailable or provides an invalid response?
44 |
45 | In the dark days of memcached, the default was to always "failover", by trying the next server in the list. That way if a server crashes, its keys will get reassigned to other instances and everything moves on happily.
46 |
47 | However, there are many ways to kill a machine. Sometimes they don't even like to stay dead. Given the scenario:
48 |
49 | * Sysadmin Bob walks by Server B and knocks the ethernet cable out of its port.
50 | * Server B's keys get "rerouted" to other instances.
51 | * Sysadmin Bob is an attentive (if portly) fellow and dutifully restores the ethernet cable from its parted port.
52 | * Server B's keys get "rerouted" back to itself.
53 |
54 | Now things get scary. Any updates you've made to your cache in the time it took Bob to realize his mistake have been lost, and old data is presented to the user. This gets even worse if:
55 |
56 | * Server B's ethernet clip was broken by Bob's folly and later falls out of its port unattended.
57 |
58 | Now your data has flipped back to yet another set. Annoying.
59 |
60 | Another problematic client feature is to actually amend the server list when a server goes out of commission, which ends up remapping far more keys than it should.
61 |
62 | Modern life encourages the use of "Failure", when possible. That is, if the server you intend to fetch or store a cache entry to is unavailable, simply proceed as though it was a cache miss. You might still flap between old and new data if you have a Server B situation, but the effects are reduced.
63 |
64 | == Compression ==
65 |
66 | Compressing large values is a great way to get more bang out of your memory buck. Compression can save a lot of memory for some values, and also potentially reduce latency as smaller values are quicker to fetch over the network.
67 |
68 | Most clients support enabling or disabling compression by threshold of item size, and some on a per-item basis. Smaller items won't necessarily benefit as much from having their data reduced, and would simply waste CPU.
69 |
70 | == Managing Connection Objects ==
71 |
72 | A common first-timer failure is that no matter what you do, you seem to run memcached flat out of connections. Your small server is allocating 50,000 connections to memcached and you have no idea what's going on.
73 |
74 | Be wary of how you manage your connection objects! If you are constantly initializing connection objects every time you wish to contact memcached, odds are good you're going to leak connections.
75 |
76 | Some clients (like PHP ones) have a less obvious approach to managing how many connections they will open. Continually calling 'addServer()' may just leak connections on you, even if you've already added a server. Read your clients' documentation to confirm what actions create connections and what will not.
77 |
--------------------------------------------------------------------------------
/NewConfiguringServer.wiki:
--------------------------------------------------------------------------------
1 | #summary Square Peg Into Round Hole
2 |
3 |
4 |
5 | = Commandline Arguments =
6 |
7 | Memcached comes equipped with basic documentation about its commandline arguments. View `memcached -h` or `man memcached` for up to date documentation. The service strives to have mostly sensible defaults.
8 |
9 | When setting up memcached for the first time, you will pay attention to `-m`, `-d`, and `-v`.
10 |
11 | `-m` tells memcached how much RAM to use for item storage (in megabytes). Note carefully that this isn't a global memory limit, so memcached will use a few % more memory than you tell it to. Set this to safe values. Setting it to less than 48 megabytes does not work properly in 1.4.x and earlier. It will still use the memory.
12 |
13 | `-d` tells memcached to daemonize. If you're running from an init script you may not be setting this. If you're using memcached for the first time, it might be educational to start the service *without* `-d` and watch it.
14 |
15 | `-v` controls verbosity to STDOUT/STDERR. Multiple `-v`'s increase verbosity. A single one prints extra startup information, and multiple will print increasingly verbose information about requests hitting memcached. If you're curious to see if a test script is doing what you expect it to, running memcached in the foreground with a few verbose switches is a good idea.
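
For example, a first run in the foreground might look something like this (the
memory limit is just an illustrative value):

{{{$ memcached -m 64 -vv}}}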
16 |
17 | Everything else comes with sensible defaults; you should alter these only if necessary.
18 |
19 | = Init Scripts =
20 |
21 | If you have installed memcached from your OS's package management system, odds are it already comes with an init script. They come with alternative methods to configure what startup options memcached receives, such as via an /etc/sysconfig/memcached file. Make sure you check these before you run off editing init scripts or writing your own.
22 |
23 | If you're building memcached yourself, the 'scripts/' directory in the source tarball contains several examples of init scripts.
24 |
25 | = Multiple Instances =
26 |
27 | Running multiple local instances of memcached is trivial. If you're maintaining a developer environment or a localhost test cluster, simply change the port it listens on, ie: `memcached -p 11212`.
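
For example, a sketch of two local test instances (the ports and memory limits
are arbitrary):

{{{
$ memcached -d -m 64 -p 11211
$ memcached -d -m 64 -p 11212
}}}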
28 |
29 | There is an unmerged (as of this writing) set of example init scripts for managing multiple instances over at [http://code.google.com/p/memcached/issues/detail?id=82 bug 82]. This will likely be merged (in some fashion) for 1.4.6.
30 |
31 | = Networking =
32 |
33 | By default memcached listens on TCP and UDP ports, both 11211. `-l` allows you to bind to specific interfaces or IP addresses. Memcached does not spend much, if any, effort in ensuring its defensibility from random internet connections. So you *must not* expose memcached directly to the internet, or otherwise any untrusted users. Using SASL authentication here helps, but should not be totally trusted.
34 |
35 | == TCP ==
36 |
37 | `-p` changes where it will listen for TCP connections. When changing the port via `-p`, the port for UDP will follow suit.
38 |
39 | == UDP ==
40 |
41 | `-U` modifies the UDP port, which is enabled by default. UDP is useful for fetching or setting small items, but not as useful for manipulating large items. Setting this to 0 will disable it, if you're worried.
42 |
43 | == Unix Sockets ==
44 |
45 | If you wish to restrict a daemon to be accessible by a single local user, or just don't wish to expose it via networking, a unix domain socket may be used. `-s` is the parameter you're after. If enabling this, TCP/UDP will be disabled.
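
A few sketches of the above (the address and socket path are only
illustrative): bind to localhost only, disable UDP, or use a unix domain
socket instead.

{{{
$ memcached -l 127.0.0.1
$ memcached -U 0
$ memcached -s /var/run/memcached.sock
}}}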
46 |
47 | = Connection Limit =
48 |
49 | By default the max number of concurrent connections is set to 1024. Configuring this correctly is important. Extra connections to memcached may hang while waiting for slots to free up. You may detect if your instance has been running out of connections by issuing a `stats` command and looking at "listen_disabled_num". That value should be zero or close to zero.
50 |
51 | Memcached can scale with a large number of connections very simply. The amount of memory overhead per connection is low (even lower if the connection is idle), so don't sweat setting it very high.
52 |
53 | Let's say you have 5 webservers, each running apache. Each apache is configured with a MaxClients setting of 12. This means that the maximum number of concurrent connections you may receive is 5 x 12 (60). Always leave a few extra slots open if you can, for administrative tasks, adding more webservers, crons/scripts/etc.
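
The limit itself is raised with `-c` (the number here is only an example):

{{{$ memcached -c 4096}}}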
54 |
55 | = Threading =
56 |
57 | Threading is used to scale memcached across CPUs. It uses a "worker thread" model, meaning that each thread makes itself available to process as much work as possible. Since using libevent allows good scalability with concurrent connections, each thread is able to handle many clients.
58 |
59 | This is different from some webservers, such as apache, which use one process or one thread per active client connection. Since memcached is highly efficient, low numbers of threads are fine. In webserver land, it means it's more like nginx than apache.
60 |
61 | By default 4 threads are allocated. Unless you are running memcached extremely hard, you should not set this number to be any higher. Setting it to very large values (80+) will not make it run any faster.
62 |
63 | = Inspecting Running Configuration =
64 |
65 | {{{
66 | $ echo "stats settings" | nc localhost 11211
67 | STAT maxbytes 67108864
68 | STAT maxconns 1024
69 | STAT tcpport 11211
70 | STAT udpport 11211
71 | STAT inter NULL
72 | STAT verbosity 0
73 | STAT oldest 0
74 | STAT evictions on
75 | STAT domain_socket NULL
76 | STAT umask 700
77 | STAT growth_factor 1.25
78 | STAT chunk_size 48
79 | STAT num_threads 4
80 | STAT stat_key_prefix :
81 | STAT detail_enabled no
82 | STAT reqs_per_event 20
83 | STAT cas_enabled yes
84 | STAT tcp_backlog 1024
85 | STAT binding_protocol auto-negotiate
86 | STAT auth_enabled_sasl no
87 | STAT item_size_max 1048576
88 | END
89 | }}}
90 |
91 | cool huh? Between 'stats' and 'stats settings', you can double check that what you're telling memcached to do is what it's actually trying to do.
92 |
--------------------------------------------------------------------------------
/NewOverview.wiki:
--------------------------------------------------------------------------------
1 | #summary what is this thing?
2 |
3 |
4 |
5 | = Memcached =
6 |
7 | *Free & open source, high-performance, distributed memory object caching system*, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.
8 |
9 | Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.
10 |
11 | *Memcached is simple yet powerful*. Its simple design promotes quick deployment, ease of development, and solves many problems facing large data caches. Its API is available for most popular languages.
12 |
13 | At heart it is a simple Key/Value store.
14 |
15 | = Memcached In Slightly More Words =
16 |
17 | See the [http://memcached.org/about memcached.org about page] for a brief overview.
18 |
19 | == I Turned It On and My App Isn't Faster!!! ==
20 |
21 | Memcached is a developer tool, not a "code accelerator", nor is it database middleware. If you're trying to set up an application you have downloaded or purchased to use memcached, your best bet is to head back their way and read your app's documentation on how to utilize memcached. Odds are these documents will not help you much.
22 |
23 | == What is it Made Up Of? ==
24 |
25 | * Client software, which is given a list of available memcached servers.
26 | * A client-based hashing algorithm, which chooses a server based on the "key" input.
27 | * Server software, which stores your values with their keys into an internal hash table.
28 | * Server algorithms, which determine when to throw out old data (if out of memory), or reuse memory.
29 |
30 | == What are the Design Philosophies? ==
31 |
32 | === Simple Key/Value Store ===
33 |
34 | The server does not care what your data looks like. Items are made up of a key, an expiration time, optional flags, and raw data. It does not understand data structures; you must upload data that is pre-serialized. Some commands (incr/decr) may operate on the underlying data, but the implementation is simplistic.
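
As a rough illustration in the ASCII protocol (the key, flags, expiration time
and byte count are all made up), a stored item is just a key plus opaque
bytes:

{{{
set greeting 0 300 5
hello
STORED
get greeting
VALUE greeting 0 5
hello
END
}}}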
35 |
36 | === Smarts Half in Client, Half in Server ===
37 |
38 | A "memcached implementation" is implemented partially in a client, and partially in a server. Clients understand how to send items to particular servers, what to do when it cannot contact a server, and how to fetch keys from the servers.
39 |
40 | The servers understand how to receive items, and how to expire them.
41 |
42 | === Servers are Disconnected From Each Other ===
43 |
44 | Memcached servers are generally unaware of each other. There is no crosstalk, no synchronization, no broadcasting. The lack of interconnections means adding more servers will usually add more capacity as you expect. There might be exceptions to this rule, but they are exceptions and carefully regarded.
45 |
46 | === O(1) Everything ===
47 |
48 | For everything it can, memcached commands are O(1). Each command takes roughly the same amount of time to process every time, and should not get noticeably slower anywhere. This goes back to the "Simple K/V Store" principle, as you don't want to be processing data in the cache service that your tens or hundreds or thousands of webservers may need to access at the same time.
49 |
50 | === Forgetting Data is a Feature ===
51 |
52 | Memcached is, by default, a Least Recently Used cache. It is designed to have items expire after a specified amount of time. Both of these are elegant solutions to many problems: expire items after a minute to limit stale data being returned, or flush unused data in an effort to retain frequently requested information.
53 |
54 | This further allows great simplification in how memcached works. There are no "pauses" waiting for a garbage collector, which ensures low latency, and free space is lazily reclaimed.
55 |
56 | === Cache Invalidation is a Hard Problem ===
57 |
58 | Given memcached's centralized-as-a-cluster nature, the job of invalidating a cache entry is trivial. Instead of broadcasting data to all available hosts, clients home in on the exact location of the data to be invalidated. You may further complicate matters to your needs, and there are caveats, but you sit on a strong baseline.
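
In the simplest case that is just a `delete` sent to the one server the key
hashes to (a sketch in the ASCII protocol; the key is made up):

{{{
delete user:1234:profile
DELETED
}}}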
59 |
60 | == How Is Memcached Evolving? ==
61 |
62 | Memcached has been evolving as a platform. Some of this is due to the slow development nature, and the many clients. Mostly it happens as people explore the possibilities of K/V stores. Learning to cache SQL queries and rendered templates used to keep developers occupied, but they thirst for more.
63 |
64 | === The Protocol ===
65 |
66 | Memcached is not the only implementation of the protocol. There are commercial entities, other OSS projects, and so on. Memcached clients are fairly common, and there are many uses for a memcached-like cluster layout. We will continue to see other projects "speak" memcached, which in turn influences memcached as a culture and as software itself.
67 |
68 | === Other Protocols ===
69 |
70 | The industry is experimenting with many different ways to communicate with networked services. Google protocol buffers, Facebook Thrift, Avro, etc. There might be better ways to do this, as the future will show.
71 |
72 | === Persistent Storage ===
73 |
74 | Many users want K/V stores that are able to persist values beyond a restart, or beyond available physical memory. In many cases memcached is not a great fit here; expensive flash memory is needed to keep data access performant. In most common scenarios, disaster can ensue if a server is unavailable and later comes up with old data. Users see stale data, features break, etc.
75 |
76 | However, with the price of SSDs dropping, this is likely an area to be expanded into.
77 |
78 | === Storage Engines ===
79 |
80 | Storage engines in general are an important part of memcached's future as a service. Aside from our venerable slabbing algorithm, there are other memory backends to experiment with: tcmalloc, jemalloc, CPU-local slabs, hierarchies, etc. Storage engines foster experimentation to improve speed and memory efficiency, as well as specialized services able to speak the memcached protocol.
81 |
82 |
--------------------------------------------------------------------------------
/TutorialCachingStory.wiki:
--------------------------------------------------------------------------------
1 | #summary This is a story of Caching
2 |
3 | _ed note: this is an overview of basic memcached use case, and how memcached clients work_
4 |
5 | Two plucky adventurers, Programmer and Sysadmin, set out on a journey. Together they make websites. Websites with webservers and databases. Users from all over the Internet talk to the webservers and ask them to make pages for them. The webservers ask the databases for junk they need to make the pages. Programmer codes, Sysadmin adds webservers and database servers.
6 |
7 | One day the Sysadmin realizes that their database is sick! It's spewing bile and red stuff all over! Sysadmin declares it has a fever, a load average of 20! Programmer asks Sysadmin, "well, what can we do?" Sysadmin says, "I heard about this great thing called memcached. It really helped livejournal!" "Okay, let's try it!" says the Programmer.
8 |
9 | Our plucky Sysadmin eyes his webservers, of which he has six. He decides to use three of them to run the 'memcached' server. Sysadmin adds a gigabyte of ram to each webserver, and starts up memcached with a limit of 1 gigabyte each. So he has three memcached instances, each can hold up to 1 gigabyte of data. So the Programmer and the Sysadmin step back and behold their glorious memcached!
10 |
11 | "So now what?" they say, "it's not DOING anything!" The memcacheds aren't talking to anything and they certainly don't have any data. And NOW their database has a load of 25!
12 |
13 | Our adventurous Programmer grabs the pecl/memcache client library manual, which the plucky Sysadmin has helpfully installed on all SIX webservers. "Never fear!" he says. "I've got an idea!" He takes the IP addresses and port numbers of the THREE memcacheds and adds them to an array in php.
14 |
15 | {{{
16 | $MEMCACHE_SERVERS = array(
17 | "10.1.1.1", //web1
18 | "10.1.1.2", //web2
19 | "10.1.1.3", //web3
20 | );
21 | }}}
22 |
23 | Then he makes an object, which he cleverly calls '$memcache'.
24 |
25 | {{{
26 | $memcache = new Memcache();
27 | foreach($MEMCACHE_SERVERS as $server){
28 | $memcache->addServer ( $server );
29 | }
30 | }}}
31 |
32 | Now Programmer thinks. He thinks and thinks and thinks. "I know!" he says. "There's this thing on the front page that runs `SELECT * FROM hugetable WHERE timestamp > lastweek ORDER BY timestamp ASC LIMIT 50000;` and it takes five seconds!" "Let's put it in memcached," he says. So he wraps his code for the SELECT and uses his $memcache object. His code asks:
33 |
34 | Are the results of this select in memcache?
35 | If not, run the query, take the results, and PUT it in memcache!
36 | Like so:
37 |
38 | {{{
39 | $huge_data_for_front_page = $memcache->get("huge_data_for_front_page");
40 | if($huge_data_for_front_page === false){
41 | $huge_data_for_front_page = array();
42 | $sql = "SELECT * FROM hugetable WHERE timestamp > lastweek ORDER BY timestamp ASC LIMIT 50000";
43 | $res = mysql_query($sql, $mysql_connection);
44 | while($rec = mysql_fetch_assoc($res)){
45 | $huge_data_for_front_page[] = $rec;
46 | }
47 | // cache for 10 minutes
48 | $memcache->set("huge_data_for_front_page", $huge_data_for_front_page, 0, 600);
49 | }
50 |
51 | // use $huge_data_for_front_page how you please
52 | }}}
53 |
54 | Programmer pushes code. Sysadmin sweats. BAM! DB load is down to 10! The website is pretty fast now. So now, the Sysadmin puzzles, "What the HELL just happened!?" "I put graphs on my memcacheds! I used cacti, and this is what I see! I see traffic to one memcached, but I made three :(." So, the Sysadmin quickly learns the ascii protocol and telnets to port 11211 on each memcached and asks it:
55 |
56 | Hey, 'get huge_data_for_front_page' are you there?
57 |
58 | The first memcached does not answer...
59 |
60 | The second memcached does not answer...
61 |
62 | The third memcached, however, spits back a huge glob of crap into his telnet session! There's the data! Only one memcached has the key that the Programmer cached!
63 |
64 | Puzzled, he asks on the mailing list. They all respond in unison, "It's a distributed cache! That's what it does!" But what does that mean? Still confused, and a little scared for his life, the Sysadmin asks the Programmer to cache a few more things. "Let's see what happens. We're curious folk. We can figure this one out," says the Sysadmin.
65 |
66 | "Well, there is another query that is not slow, but is run 100 times per second. Maybe that would help," says the Programmer. So he wraps that up like he did before. Sure enough, the server loads drops to 8!
67 |
68 | So the Programmer codes more, and more things get cached. He uses new techniques. "I found them on the list and the faq! What nice blokes," he says. The DB load drops; 7, 5, 3, 2, 1!
69 |
70 | "Okay," says the Sysadmin, "let's try again." Now he looks at the graphs. ALL of the memcacheds are running! All of them are getting requests! This is great! They're all used!
71 |
72 | So again, he takes keys that the Programmer uses and looks for them on his memcached servers. 'get this_key' 'get that_key' But each time he does this, he only finds each key on one memcached! Now WHY would you do this, he thinks? And he puzzles all night. That's silly! Don't you want the keys to be on all memcacheds?
73 |
74 | "But wait", he thinks "I gave each memcached 1 gigabyte of memory, and that means, in total, I can cache three gigabytes of my database, instead of just ONE! Oh man, this is great," he thinks. "This'll save me a ton of cash. Brad Fitzpatrick, I love your ass!"
75 |
76 | "But hmm, the next problem, and this one's a puzzler, this webserver right here, this one runing memcached it's old, it's sick and needs to be upgraded. But in order to do that I have to take it offline! What will happen to my poor memcache cluster? Eh, let's find out," he says, and he shuts down the box. Now he looks at his graphs. "Oh noes, the DB load, it's gone up in stride! The load isn't one, it's now two. Hmm, but still tolerable. All of the other memcacheds are still getting traffic. This ain't so bad. Just a few cache misses, and I'm almost done with my work. So he turns the machine back on, and puts memcached back to work. After a few minutes, the DB load drops again back down to 1, where it should always be.
77 |
78 | "The cache restored itself! I get it now. If it's not available it just means a few of my requests get missed. But it's not enough to kill me. That's pretty sweet."
79 |
80 | So, the Programmer and Sysadmin continue to build websites. They continue to cache. When they have questions, they ask the mailing list or read the faq again. They watch their graphs. And all live happily ever after.
81 |
82 | Author: Dormando via IRC. Edited by Brian Moon for fun. Further fun editing by Emufarmers.
83 |
84 | This story has been illustrated by the online comic [http://toblender.com/tag/memcached/ TOBlender.com].
85 |
86 | Chinese [http://code.google.com/p/memcached/wiki/TutorialCachingStory translation] by Wei Liu.
87 |
--------------------------------------------------------------------------------
/Timeouts.wiki:
--------------------------------------------------------------------------------
1 | #summary Where'd I go
2 |
3 |
4 |
5 | = Troubleshooting Timeouts =
6 |
7 | Client complaining about "timeout errors", but not sure how to track it down?
8 | Here's a simple utility for examining your situation.
9 |
10 | == First, check listen_disabled_num ==
11 |
12 | Before you go ahead with troubleshooting, you'll want to telnet to your
13 | memcached instance and run `stats`, then look for "listen_disabled_num". This
14 | is a poorly named counter which describes how many times you've reached
15 | maxconns. Each time memcached hits maxconns it will delay new connections,
16 | which means you'll possibly get timeouts.
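
A quick way to check it without a full telnet session (assuming `nc` is
available; substitute your own host):

{{{
$ echo "stats" | nc memcached-host 11211 | grep listen_disabled_num
STAT listen_disabled_num 0
}}}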
17 |
18 | Also, disable or tune any firewalls you may have in the way.
19 |
20 | == Then, carefully check the usual suspects ==
21 |
22 | Is the machine in swap? You will see random lag bubbles if your OS is swapping
23 | memcached to disk periodically.
24 |
25 | Is the machine overloaded? 0% CPU idle with a load of 400 and memcached
26 | probably isn't getting enough CPU time. You can try `nice` or `renice`, or
27 | just run less on the machine. If you're severely overloaded on CPU, you might
28 | notice the mc_conn_tester below reporting very high wait times for `set`
29 | commands.
30 |
31 | Is the memcached server 32bit? 32bit hosts have less memory available to the
32 | kernel for TCP sockets and friends. We've observed some odd behavior under
33 | large numbers of open sockets and high load with 32bit systems. Strongly
34 | consider going 64bit, as it may help some hard to trace problems go away,
35 | including segfaults due to the 2/4g memory limit.
36 |
37 | == Next, mc_conn_tester.pl ==
38 |
39 | Fetch this:
40 |
41 | http://www.memcached.org/files/mc_conn_tester.pl
42 |
43 | {{{
44 | $ ./mc_conn_tester.pl -s memcached-host -p 11211 -c 1000 --timeout 1
45 | Averages: (conn: 0.00081381) (set: 0.00001603) (get: 0.00040122)
46 | $ ./mc_conn_tester.pl --help
47 | Usage:
48 | mc_conn_tester.pl [options]
49 |
50 | Options:
51 | -s --server hostname
52 | Connect to an alternate hostname.
53 | [...etc...]
54 | }}}
55 |
56 | This is a minimal utility for testing a quick routine with a memcached
57 | instance. It will connect, attempt a couple sets, attempt a few gets, then loop and
58 | repeat.
59 |
60 | The utility does not use any memcached client and instead issues minimal, raw
61 | commands with the ASCII protocol, which helps rule out client bugs.
62 |
63 | If it reaches a timeout, you can see how far along in the cycle it was:
64 |
65 | {{{
66 | Fail: (timeout: 1) (elapsed: 1.00427794) (conn: 0.00000000) (set: 0.00000000) (get: 0.00000000)
67 | Fail: (timeout: 1) (elapsed: 1.00133896) (conn: 0.00000000) (set: 0.00000000) (get: 0.00000000)
68 | Fail: (timeout: 1) (elapsed: 1.00135303) (conn: 0.00000000) (set: 0.00000000) (get: 0.00000000)
69 | Fail: (timeout: 1) (elapsed: 1.00145602) (conn: 0.00000000) (set: 0.00000000) (get: 0.00000000)
70 | }}}
71 |
72 | Each line shows the total elapsed time of the test, followed by the times
73 | at which each sub-test succeeded. In the above scenario it wasn't able to
74 | connect to memcached, so all of the sub-tests failed.
75 |
76 | {{{
77 | Fail: (timeout: 1) (elapsed: 0.00121498) (conn: 0.00114512) (set: 1.00002694) (get: 0.00000000)
78 | Fail: (timeout: 1) (elapsed: 0.00368810) (conn: 0.00360799) (set: 1.00003314) (get: 0.00000000)
79 | Fail: (timeout: 1) (elapsed: 0.00128603) (conn: 0.00117397) (set: 1.00004005) (get: 0.00000000)
80 | Fail: (timeout: 1) (elapsed: 0.00115108) (conn: 0.00108099) (set: 1.00002789) (get: 0.00000000)
81 | }}}
82 |
83 | In this case, it failed waiting for "get" to complete.
84 |
85 | If you want to log all of the tests mc_conn_tester.pl runs, open the file and
86 | change the line:
87 |
88 | {{{
89 | my $debug = 0;
90 | }}}
91 |
92 | to
93 |
94 | {{{
95 | my $debug = 1;
96 | }}}
97 |
98 | You will then see normal lines begin with `loop:` and failed tests will start
99 | with `Fail:` as usual.
100 |
101 | == You're probably dropping packets. ==
102 |
103 | In most cases, where listen_disabled_num doesn't apply, you're likely dropping
104 | packets for some reason. Either a firewall is in the way and has run out of
105 | stateful tracking slots, or your network card or switch is dropping packets.
106 |
107 | You'll most likely see this manifest as:
108 |
109 | {{{
110 | Fail: (timeout: 1) (elapsed: 1.00145602) (conn: 0.00000000) (set: 0.00000000) (get: 0.00000000)
111 | }}}
112 |
113 | ... where `conn:` and the rest are all zero. So the test was not able to
114 | connect to memcached.
115 |
116 | On most systems the SYN retransmit timer is 3 seconds, which is awfully long. Losing
117 | a single SYN packet means a timeout with a typical 1s client timeout. This is easily proven:
118 |
119 | {{{
120 | $ ./mc_conn_tester.pl -s memcached-host -c 5000 --timeout 1 > log_one_second \
121 |   && ./mc_conn_tester.pl -s memcached-host -c 5000 --timeout 4 > log_four_seconds \
122 |   && ./mc_conn_tester.pl -s memcached-host -c 5000 --timeout 8 > log_eight_seconds
123 | }}}
124 |
125 | ... This runs 5000 tests each round (you can adjust this if you wish). The first
126 | run uses a timeout of 1s, which is often the client default. The next uses 4s,
127 | which allows one SYN packet to be lost while still passing the test. The final
128 | run uses 8s, which allows two SYN packets to be lost in a row and still
129 | succeed.
130 |
131 | If you see the number of `Fail:` lines in each log file *decrease*, then your
132 | network is likely dropping SYN packets.
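
A quick way to compare the three runs (assuming the log file names used above):

{{{
$ grep -c '^Fail:' log_one_second log_four_seconds log_eight_seconds
}}}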
133 |
134 | Fixing that, however, is beyond the scope of this document.
135 |
136 | == TIME_WAIT buckets or local port exhaustion ==
137 |
138 | If mc_conn_tester.pl is seeing connection timeouts (conn: is 0), you may be
139 | running out of local ports, firewall states, or TIME_WAIT buckets. This can
140 | happen if you are opening and closing connections quicker than the sockets can
141 | die off.
142 |
143 | Use netstat to see how many you have open and whether the number is high
144 | enough to be a problem: `netstat -n | grep -c TIME_WAIT`.
145 |
146 | Details of how to tune these variables are outside the scope of this document,
147 | but googling for "Linux TCP network tuning TIME_WAIT" (or the equivalent for
148 | your OS) will usually give good results. Look for the variables below and
149 | understand their meaning before tuning.
150 |
151 | {{{
152 | !THESE ARE EXAMPLES, NOT RECOMMENDED VALUES!
153 | net.ipv4.ip_local_port_range = 16384 65534
154 | net.ipv4.tcp_max_tw_buckets = 262144
155 | net.ipv4.tcp_fin_timeout = 30
156 | net.ipv4.tcp_tw_reuse = 1
157 | net.ipv4.tcp_tw_recycle = 1
158 | }}}
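
Before changing anything, check what your system currently uses (Linux sysctl example):

{{{
$ sysctl net.ipv4.ip_local_port_range net.ipv4.tcp_fin_timeout net.ipv4.tcp_max_tw_buckets
}}}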
159 |
160 | Also read up on iptables and look up information on managing conntrack states
161 | or conntrack buckets. If you find some links you love, e-mail us and we'll link
162 | them here.
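
If conntrack is in play, a quick way to see how close you are to the table limit (Linux; these paths require the nf_conntrack module to be loaded):

{{{
$ cat /proc/sys/net/netfilter/nf_conntrack_count
$ cat /proc/sys/net/netfilter/nf_conntrack_max
}}}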
163 |
164 | == But your utility never fails! ==
165 |
166 | Odds are good your client has a bug :( Try reaching out to the client author
167 | for help.
168 |
--------------------------------------------------------------------------------
/ReleaseNotes132.wiki:
--------------------------------------------------------------------------------
1 | #summary Memcached 1.3.2 (beta) Release Notes
2 |
3 | = Memcached 1.3 Beta 2 Release Notes =
4 |
5 | Date: 2009-03-11 Wed
6 |
7 |
8 | == New Features ==
9 |
10 | === Binary Protocol ===
11 |
12 | A new feature that brings new features. We now have goodness like
13 | CAS-everywhere (e.g. delete), silent but verifiable mutation
14 | commands, and many other wonders.
15 |
16 | Note that the original protocol is *not* deprecated. It will be
17 | supported indefinitely, although some new features may only be
18 | available in the binary protocol.
19 |
20 | ==== Client Availability ====
21 |
22 | Many clients for the binary protocol are available.
23 |
24 | * C
25 |
26 | libmemcached supports just about anything you can do with a memcached
27 | protocol and is the foundation for many clients in many different
28 | languages (which you can find linked from the project page).
29 |
30 | Project page: http://tangent.org/552/libmemcached.html
31 |
32 | * Java
33 |
34 | spymemcached has very good text and binary protocol support over IPv4
35 | and IPv6 with a quite comprehensive test suite.
36 |
37 | Project page: http://code.google.com/p/spymemcached/
38 |
39 | * Protocol Spec
40 |
41 | NIH problem? Go write your own client. :)
42 |
43 | http://cloud.github.com/downloads/dustin/memcached/protocol-binary.txt
44 |
45 |
46 | == Performance ==
47 |
48 | Lots of effort has gone into increasing performance.
49 |
50 | There is no longer a build-time distinction between a single-threaded
51 | and multi-threaded memcached. If you want a single-threaded
52 | memcached, ask for one thread (though there'll still be utility
53 | threads and other such things in the background). This change lets us
54 | focus on a future where multiple cores can be saturated for servicing
55 | requests.
56 |
57 | Facebook-inspired contention reduction via per-thread stat collection,
58 | along with Facebook's contributed connection dispatch and thread starvation
59 | prevention work, helped our scalability.
60 |
61 | Lock analysis also showed us that we had quite a bit of contention on
62 | hash table expansion, which has been moved into its own thread, greatly
63 | improving scalability on multicore hardware.
64 |
65 | A variety of smaller things also shook out of performance testing and
66 | analysis.
67 |
68 | There's also a memory optimization for users who don't actually make
69 | use of CAS. Running memcached with -C disables the use of CAS
70 | resulting in a savings of about eight bytes per item. If you have big
71 | caches, and don't use CAS, this can lead to a considerable savings.
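
For example, an invocation with CAS disabled might look like this (the other flags are shown only for illustration):

{{{
memcached -d -m 1024 -p 11211 -C
}}}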
72 |
73 | == Stats ==
74 |
75 | There are several new stats and some new ways to look at older stats.
76 |
77 | === New Stats ===
78 |
79 | * Delete
80 |
81 | The global stats now contain statistics on deletion.
82 |
83 | delete_hits is the number of deletion commands that resulted in a
84 | modification of the cache, while delete_misses is the number of deletion
85 | commands that had no effect because no matching key was found. (Illustrative
86 | `stats` output for the new counters appears after this list.)
87 |
88 | * Incr/Decr
89 |
90 | Incr and decr each have a pair of stats showing when a
91 | successful/unsuccessful incr occurred. incr_hits, incr_misses,
92 | decr_hits, and decr_misses show where such mutations worked and where
93 | they failed to find an existing object to mutate.
94 |
95 | * CAS
96 |
97 | CAS stats are tracked in three different ways:
98 |
99 | + cas_hits
100 |
101 | Number of attempts to CAS in a new value that worked.
102 |
103 | + cas_misses
104 |
105 | Number of attempts to CAS in a value where the key was not found.
106 |
107 | + cas_badval
108 |
109 | Number of attempts to CAS in a value where the CAS failed due to the
110 | object changing between the gets and the update.
111 |
112 | * slab class evicted time
113 |
114 | Per slab class, you can now see how recently the most recently evicted
115 | item had been accessed. This is a useful gauge of eviction velocity on a
116 | slab, so you can tell whether evictions are healthy or whether you've got
117 | a problem.
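
The new delete, incr/decr, and CAS counters all appear in the regular `stats` output; the values below are purely illustrative:

{{{
STAT delete_hits 1204
STAT delete_misses 38
STAT incr_hits 9021
STAT incr_misses 4
STAT decr_hits 88
STAT decr_misses 0
STAT cas_hits 983
STAT cas_misses 21
STAT cas_badval 7
}}}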
118 |
119 |
120 | === More Granular Stats ===
121 |
122 | Where possible, stats are now tracked individually by slab class. The
123 | following stats are available on a per-slab-class basis (via "stats slabs"):
124 |
125 | * get_hits
126 | * cmd_set
127 | * delete_hits
128 | * incr_hits
129 | * decr_hits
130 | * cas_hits
131 | * cas_badval
132 |
133 | (misses are not available per slab class, since a miss refers to an item that does not exist and therefore has no slab class)
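
A hypothetical `stats slabs` fragment for a single class, just to show the shape of the per-slab-class counters (values are made up):

{{{
stats slabs
STAT 1:chunk_size 96
STAT 1:get_hits 10932
STAT 1:cmd_set 4823
STAT 1:delete_hits 12
STAT 1:incr_hits 0
STAT 1:decr_hits 0
STAT 1:cas_hits 44
STAT 1:cas_badval 1
STAT active_slabs 1
END
}}}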
134 |
135 | === Removed stats ===
136 |
137 | "stats malloc" and "stats maps" have been removed.
138 |
139 | If you depended on these commands for anything, please let us know so
140 | we can bring them back in a more maintainable way.
141 |
142 | == Bug Fixes ==
143 |
144 | * Build fixes on ubuntu (gcc and icc) and FreeBSD
145 | * bad interaction with cas + incr (bug 15)
146 | * setuid failures are reported properly at daemonization time
147 | * decr overflow causing unnecessary truncation to 0 (bug 21)
148 | * failure to bind on Linux with no network (i.e. laptop dev)
149 | * some memcached-tool cleanup
150 |
151 | == Development Info ==
152 |
153 | We've added a bunch of tests and new code coverage reports.
154 |
155 | All included code in this release has been tested against the
156 | following platforms (using the in-tree test suite):
157 |
158 | * ubuntu 8.10 (64-bit, both gcc and icc)
159 | * ubuntu 8.04 (32-bit)
160 | * OS X 10.5 (ppc and intel)
161 | * OpenSolaris 5.11 x86 (with and without dtrace)
162 | * FreeBSD 7 x86
163 |
164 | == Feedback ==
165 |
166 | Please try this version. Make it suffer. Report feedback to the list
167 | or file bugs as you find them.
168 |
169 | * Mailing List: [http://groups.google.com/group/memcached]
170 | * Issue Tracker: [http://code.google.com/p/memcached/issues/list]
171 | * IRC: #memcached on freenode
172 |
173 | == Contributors ==
174 |
175 | The following people contributed to this release since 1.2.6.
176 |
177 | Note that this is based on who contributed changes, not how they were
178 | done. In many cases, a code snippet on the mailing list or a bug
179 | report ended up as a commit with your name on it.
180 |
181 | Note that this is just a summary of how many changes each person made
182 | which doesn't necessarily reflect how significant each change was.
183 | For details on what went into this release, either grab the git repo and
184 | look at the output of `git log 1.2.6..1.3.2` or use a web view.
185 |
186 | * Repo list: [http://code.google.com/p/memcached/wiki/DevelopmentRepos]
187 | * Web View: [http://github.com/dustin/memcached/commits/1.3.2]
188 |
189 | {{{
190 | 104 Dustin Sallings
191 | 49 Trond Norbye
192 | 32 Toru Maesaka
193 | 31 dormando
194 | 8 Steve Yen
195 | 7 hachi
196 | 6 Aaron Stone
197 | 6 Brian Aker
198 | 4 Victor Kirkebo
199 | 2 Ricky Zhou
200 | 1 Jonathan Bastien-Filiatrault
201 | 1 Evan Klitzke
202 | 1 Eric Lambert
203 | }}}
--------------------------------------------------------------------------------