├── ATTACHMENTS ├── Applications │ ├── Autograder │ │ ├── autograder-large.png │ │ └── autograder.png │ ├── Seobinggo │ │ └── Seobinggo.tar.gz │ └── Viewpoints │ │ └── viewpoints.zip ├── Architecture │ ├── nm_repy_arch.png │ └── nm_repy_arch.xml ├── Archive │ ├── MeasureTwiceCutOnce │ │ └── memTest.py │ ├── MobileCeNotes │ │ ├── status.txt │ │ └── test.output │ └── SleightOfHand │ │ ├── NAT%20Doc.pdf │ │ └── NAT_traversal.jpg ├── CollectedNodeData │ ├── ContactableVsAdvertising │ │ └── contactablevsadvertising.jpg │ ├── CumulativeTimeOnline │ │ └── cumulativetimeonline.jpg │ ├── CumulativeUptime │ │ └── cumulativeuptime.jpg │ ├── DelaySpace │ │ ├── allpairsping.png │ │ ├── conquer.png │ │ ├── controller.repy │ │ ├── first_latencymap.csv │ │ ├── first_measuremap.csv │ │ ├── latencymap.csv │ │ ├── measuremap.csv │ │ ├── measurements.png │ │ ├── node.repy │ │ ├── second_latencymap.csv │ │ └── third_latencymap.csv │ ├── MobileNodes │ │ └── mobilenodes.jpg │ ├── NodesOnline │ │ └── nodesonline.jpg │ └── UnstableNodes │ │ └── unstablenodes.jpg ├── ContainmentInSeattle │ └── ContainmentInSeattle.pdf ├── CustomInstallerBuilder │ ├── cib-update.png │ ├── cib.png │ └── customized_installer_builder.jpg ├── EducationalAssignments │ ├── LinkState │ │ └── build_connectivity_map.py │ ├── PermissionsPartOne │ │ └── repyv2_commit_3499642.zip │ ├── PrivateWritePartOne │ │ └── repyv2_commit_3499642.zip │ ├── SecureTuringCompleteSandboxAttack │ │ ├── CandidateSandboxes.txt │ │ ├── a-sandbox.py │ │ ├── easytocode.py │ │ └── potentiallyhackablesandbox.py │ ├── SecurityLayerPartOne │ │ └── repyv2_commit_3499642.zip │ ├── TakeHome │ │ └── Non-transitive%20connectivity%20image.jpg │ └── WebServer │ │ └── thewall.html ├── GEC10 │ ├── repy_www_sandbox.0.1.1.py │ └── restrictions.default ├── Libraries │ ├── ExperimentLibrary │ │ └── seattlegeni_advertisement.publickey │ └── Tcup │ │ ├── tcp.repy │ │ ├── tcp.tgz │ │ └── tcup.jpg ├── Lind │ ├── LindDesignDocument%20%281%29.png │ └── LindDesignDocument.png ├── Local │ ├── RepoAccess │ │ ├── branches_sm.png │ │ ├── fork.png │ │ ├── fork_sm.png │ │ ├── pull-request.png │ │ └── pull-request_sm.png │ └── VersionDeployment │ │ ├── blackbox-push-update-output.txt │ │ ├── blackbox-rebuild-base-installers-output.txt │ │ └── blackbox-version-push-svn-diff.txt ├── NatNodes │ └── NatNodes.jpg ├── NetworkApiSemantics │ └── socketstate.jpg ├── PerformanceIsolation │ ├── linux.zip │ ├── macos.zip │ └── windows.zip ├── ProgrammersPage │ └── repy.lang ├── ProtoGeniIntegration │ └── protogeni_integration.tgz ├── PythonTutorial │ └── restrictions.test ├── RepoConfigurationForWindows │ ├── Create%20key.png │ ├── WinSCP%20main%20dialog.png │ ├── putty%20-%20Accept%20key.png │ ├── puttygen%20-%20add%20comment.png │ ├── puttygen%20-%20no%20passphrase.png │ ├── winscp%20-%20Create%20Folder.png │ ├── winscp%20-%20connect%20dialog.png │ └── winscp%20-%20warning%20dialog.png ├── RepyMapReduce │ └── stage3.PNG ├── RepyTutorial │ ├── SetTimer.jpg │ ├── clearinghouseport.png │ ├── example.zip │ └── geniport.jpg ├── RepyV2Tutorial │ ├── clearinghouseport.png │ └── example.zip ├── SeattleBackend │ └── Node%20States.svg ├── SeattleGeniDesign │ ├── seattlegeni_backend.gif │ └── seattlegeni_website.gif ├── SeattleOnAndroid │ ├── seattle-on-android.jpg │ └── test_script.repy ├── TryRepy │ └── try-repy.png ├── UnderstandingSeattle │ └── SeattleComponents │ │ ├── SeattlePicture1.jpg │ │ └── componentdiagram.gif ├── UsingSensors │ └── SimpleXmlRpcServer.java └── huxiang │ ├── huxiang_webserver.tar.gz │ └── 
huxiang_webserver_debug.tar.gz ├── Applications ├── Autograder.md ├── Autograder │ └── Database.md ├── GEC10.md ├── GeoIpServer.md ├── RepyMapReduce.md ├── Seobinggo.md ├── TryRepy.md ├── Viewpoints.md └── huxiang.md ├── Archive ├── AutograderCodeSprint.md ├── BotMaster.md ├── ChainloadableModules.md ├── ClearinghouseInstallationWithDjango13.md ├── CollectedNodeData.md ├── CollectedNodeData │ ├── ContactableVsAdvertising.md │ ├── CumulativeTimeOnline.md │ ├── CumulativeUptime.md │ ├── DelaySpace.md │ ├── MobileNodes.md │ ├── NodesOnline.md │ └── UnstableNodes.md ├── CustominstallerBuilderDeploymentWithModPython.md ├── CustominstallerBuilderInstallation.md ├── CustominstallerBuilderTesting.md ├── EyeCandy.md ├── Indoctrination.md ├── InfectionAndRecurrence.md ├── Libraries │ ├── ExperimentLibrary.md │ ├── Overlord.md │ ├── StatisticsLibrary.md │ └── Tcup.md ├── LipstickOnAPig.md ├── LipstickOnAPigExceptionHierarchy.md ├── Lisping.md ├── Local │ ├── Backups.md │ ├── CentralizedAdvertiseService.md │ ├── ContinuousBuild.md │ ├── ContributorAccountManagement.md │ ├── ContributorContactInfo.md │ ├── MonitorProcessService.md │ ├── RepoAccess.md │ ├── RunningIntegrationTests.md │ ├── SshService.md │ ├── SslRenewal.md │ ├── SvnService.md │ ├── TracService.md │ ├── VersionDeployment.md │ └── WikiFormatting.md ├── ManagingSprints.md ├── MeasureTwiceCutOnce.md ├── MicroMachines.md ├── MobileCeNotes.md ├── PotentialSeattleLibs.md ├── ProjectNames.md ├── ProtoGeniIntegration.md ├── ResearchAdvice.md ├── SeattleGeniInstallation.md ├── SeattleGeniProductionHttp.md ├── SleightOfHand.md ├── Speciesism.md ├── Spring2009Tasks.md ├── StrikeForce.md ├── TOOLS │ ├── extract_wiki_contents.py │ └── trac_to_git.py ├── TopSecret.md ├── TwoPlusTwo.md └── Venues.md ├── Contributing ├── BuildInstructions.md ├── ContinuousIntegration.md ├── Contributors.md ├── ContributorsPage.md ├── IdeasList.md ├── README.md ├── SubmittingAPatch.md ├── UnitTestFramework.md ├── UnitTestFrameworkRunning.md └── WebCodingStyle.md ├── EducationalAssignments ├── ABStoragePartOne.md ├── ABStoragePartThree.md ├── ABStoragePartTwo.md ├── ChatServer.md ├── Chord.md ├── DefaultPartOne.md ├── DefaultPartThree.md ├── DefaultPartTwo.md ├── EducatorsPage.md ├── LatencyBandwidthBufferBloat.md ├── LeftPadPartOne.md ├── LeftPadPartThree.md ├── LeftPadPartTwo.md ├── LinkState.md ├── ParityPartOne.md ├── ParityPartThree.md ├── ParityPartTwo.md ├── PermissionsPartOne.md ├── PermissionsPartThree.md ├── PermissionsPartTwo.md ├── PrivateWritePartOne.md ├── PrivateWritePartTwo.md ├── ProtectFilePartOne.md ├── ProtectFilePartThree.md ├── ProtectFilePartTwo.md ├── SecureTuringCompleteSandboxAttack.md ├── SecureTuringCompleteSandboxChallengeBuild.md ├── SecurityLayerPartOne.md ├── SecurityLayerPartTwo.md ├── SetMaxFileSizePartOne.md ├── SetMaxFileSizePartThree.md ├── SetMaxFileSizePartTwo.md ├── SimpleMapReduce.md ├── SlidingWindow.md ├── StopAndWait.md ├── TakeHome.md ├── UndoPartOne.md ├── UndoPartThree.md ├── UndoPartTwo.md └── Webserver.md ├── Grants.md ├── LICENSE ├── Lind-fuse.md ├── Operating ├── BaseInstallers.md ├── BuildDemokit.md ├── Clearinghouse │ ├── DatabaseSetup.md │ ├── Design.md │ ├── DevelopersNotes.md │ ├── Installation.md │ ├── NodeStatesAndTransitions.md │ ├── Overview.md │ ├── SocialAuth.md │ ├── StartupScripts.md │ ├── StateTransitionsService.md │ ├── XMLRPCAPI.md │ ├── XMLRPCClientLibrary.md │ └── XMLRPCServer.md ├── CustomInstallerBuilder │ ├── API.md │ ├── Installation.md │ └── Usage.md ├── IntegrationTestFramework.md ├── 
NsisSystemSetup.md ├── SoftwareUpdaterSetup.md ├── Zenodotus.md └── zenodotus_server.md ├── Outdated ├── BundlingSeattle.md ├── CheckAPI.md ├── ContainmentInSeattle.md ├── FutureRepyExceptions.md ├── FutureRepyExceptions │ ├── AddressBindingError.md │ ├── CodeunsafeError.md │ ├── ConnectionRefusedError.md │ ├── FileError.md │ ├── FileInUseError.md │ ├── FileNotFoundError.md │ ├── InternetConnectivityError.md │ ├── LocalIPChanged.md │ ├── LockDoubleReleaseError.md │ ├── NetworkError.md │ ├── PortInUseError.md │ ├── PortRestrictedError.md │ ├── RepyArgumentError.md │ ├── RepyError.md │ ├── RestrictionError.md │ ├── SeekPastEndOfFileError.md │ ├── SocketClosedLocal.md │ ├── SocketClosedRemote.md │ ├── SocketWouldBlockError.md │ └── TimeoutError.md ├── GettingStartedWithAffix.md ├── Island.md ├── ManagingTracTickets.md ├── MobilityShim.md ├── NamingWithShims.md ├── NatIntegration.md ├── NatNodes.md ├── NodeStatusReporter.md ├── PerformanceIsolationBenchmarks.md ├── RemoteTestingService.md ├── RunningSecLayerBenchmarks.md ├── SeattleIrcBot.md ├── SeattleOnAndroid.md ├── SeattleOnNokia.md ├── SeattleOnOpenWrt.md ├── SeattleResources.md ├── ShimExceptionHierarchy.md ├── UpdaterUnitTests.md ├── UsingShims.md ├── WritingShims-RepyV2.md └── advertise_testing.md ├── Programming ├── CodingStyle.md ├── DynamicLinkingModules.md ├── FileApiSemantics.md ├── PortingPythonToRepy.md ├── ProgrammersPage.md ├── PythonTutorial.md ├── PythonVsRepy.md ├── PythonVsRepyV2.md ├── RepyApi.md ├── RepyHelper.md ├── RepyNetworkRestrictions.md ├── RepyTutorial.md ├── RepyV1vsRepyV2.md ├── RepyV2API.md ├── RepyV2SecurityLayers.md ├── RepyV2Tutorial.md ├── SeattleLib_v1 │ ├── AdvertiseObjects.repy.md │ ├── ConcurrencyAndParallelism.md │ ├── Cryptography.md │ ├── DORadvertise.repy.md │ ├── DataEncoding.md │ ├── DataRetrieval.md │ ├── NATLayer_rpc.repy.md │ ├── NAT_advertisement.repy.md │ ├── NetworkCommunication.md │ ├── NodeAdvertising.md │ ├── ProgrammerResources.md │ ├── SeattleLib.md │ ├── Time.md │ ├── UrlParsingAndXml.md │ ├── advertise.repy.md │ ├── argparse.repy.md │ ├── base64.repy.md │ ├── binascii.repy.md │ ├── bundle.repy.md │ ├── bundler.py.md │ ├── centralizedadvertise.repy.md │ ├── cv.repy.md │ ├── dnscommon.repy.md │ ├── domainnameinfo.repy.md │ ├── dylink.repy.md │ ├── geoip_client.repy.md │ ├── getvesselsresources.repy.md │ ├── httpretrieve.repy.md │ ├── httpserver.repy.md │ ├── listops.repy.md │ ├── math.repy.md │ ├── md5py.repy.md │ ├── nmclient.repy.md │ ├── ntp_time.repy.md │ ├── openDHTadvertise.repy.md │ ├── parallelize.repy.md │ ├── priority_queue.repy.md │ ├── pyDes.repy.md │ ├── pycryptorsa.repy.md │ ├── random.repy.md │ ├── repypp.py.md │ ├── repyunit.repy.md │ ├── rsa.repy.md │ ├── safe_eval.repy.md │ ├── semaphore.repy.md │ ├── serialize.repy.md │ ├── servicelookup.repy.md │ ├── session.repy.md │ ├── sha.repy.md │ ├── signeddata.repy.md │ ├── sockettimeout.repy.md │ ├── sshkey.repy.md │ ├── sshkey_paramiko.repy.md │ ├── strace.py.md │ ├── tcp_time.repy.md │ ├── textops.py.md │ ├── time.repy.md │ ├── time_interface.repy.md │ ├── uniqueid.repy.md │ ├── urllib.repy.md │ ├── urlparse.repy.md │ ├── vessellookup.repy.md │ ├── xmlparse.repy.md │ ├── xmlrpc_client.repy.md │ ├── xmlrpc_common.repy.md │ └── xmlrpc_server.repy.md └── SecurityLayers.md ├── README.md ├── Scripts ├── README.md └── auto_grader.py ├── SeattleTalks.md ├── UnderstandingSeattle ├── AcceptableUsePolicy.md ├── AdvertiseServiceDesign.md ├── Architecture.md ├── BenchmarkCustomInstallerInfo.md ├── CodeSafety.md ├── DemoVideo.md ├── 
DonatingResources.md ├── InstallerDocumentation.md ├── InstallerWorkflow.md ├── NetworkApiSemantics.md ├── NodeManagerDesign.md ├── Privacypolicy.md ├── README.md ├── SeashModules.md ├── SeattleComponents.md ├── SeattleInfrastructureArchitecture.md ├── SeattleShell.md ├── SeattleShellBackend.md └── VirtualNamespace.md └── tractomd.py /ATTACHMENTS/Applications/Autograder/autograder-large.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Applications/Autograder/autograder-large.png -------------------------------------------------------------------------------- /ATTACHMENTS/Applications/Autograder/autograder.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Applications/Autograder/autograder.png -------------------------------------------------------------------------------- /ATTACHMENTS/Applications/Seobinggo/Seobinggo.tar.gz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Applications/Seobinggo/Seobinggo.tar.gz -------------------------------------------------------------------------------- /ATTACHMENTS/Applications/Viewpoints/viewpoints.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Applications/Viewpoints/viewpoints.zip -------------------------------------------------------------------------------- /ATTACHMENTS/Architecture/nm_repy_arch.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Architecture/nm_repy_arch.png -------------------------------------------------------------------------------- /ATTACHMENTS/Archive/MeasureTwiceCutOnce/memTest.py: -------------------------------------------------------------------------------- 1 | """ 2 | Tests memory allocation speed 3 | """ 4 | 5 | import threading 6 | import subprocess 7 | import os 8 | import time 9 | 10 | # see if the process is over quota and if so terminate with extreme prejudice. 11 | def mem_use(pid): 12 | # issue this command to ps. This is likely to be non-portable and a source 13 | # of constant ire... 14 | memorycmd = 'ps -p '+str(pid)+' -o rss' 15 | p = subprocess.Popen(memorycmd, shell=True, stdout=subprocess.PIPE, 16 | stderr=subprocess.PIPE, close_fds=True) 17 | 18 | cmddata = p.stdout.read() 19 | p.stdout.close() 20 | errdata = p.stderr.read() 21 | p.stderr.close() 22 | junkstatus = os.waitpid(p.pid,0) 23 | 24 | # ensure the first line says RSS (i.e. it's normal output 25 | if 'RSS' == cmddata.split('\n')[0].strip(): 26 | 27 | # PlanetLab proc handling 28 | badproccount = 0 29 | 30 | # remove the first line 31 | memorydata = cmddata.split('\n',1)[1] 32 | 33 | # they must have died 34 | if not memorydata: 35 | return 36 | 37 | # the answer is in KB, so convert! 
38 | memoryused = int(memorydata) 39 | 40 | return memoryused 41 | else: 42 | raise Exception, "Cannot understand '"+memorycmd+"' output: '"+cmddata+"'" 43 | 44 | totalcount = 0 45 | totalspeed = 0 46 | 47 | class MemInfoThread(threading.Thread): 48 | frequency = 0.05 49 | 50 | def __init__(self): 51 | threading.Thread.__init__(self) 52 | 53 | def run(self): 54 | global totalspeed, totalcount 55 | pid = os.getpid() 56 | start = time.time() 57 | memlast = mem_use(pid) 58 | timelast = time.time() 59 | 60 | 61 | print "PID: ", str(pid) 62 | 63 | while True: 64 | mem = mem_use(pid) 65 | speed = (mem-memlast)/(time.time() - timelast) 66 | print str(time.time() - start), "Mem: ", str(mem), "Kb/s: ", str(speed) 67 | memlast = mem 68 | timelast = time.time() 69 | totalspeed = totalspeed + speed 70 | totalcount = totalcount + 1 71 | time.sleep(self.frequency) 72 | 73 | 74 | class MemUseThread(threading.Thread): 75 | def __init__(self): 76 | threading.Thread.__init__(self) 77 | 78 | def run(self): 79 | arr = [] 80 | while True: 81 | # Use lots of memory very fast 82 | arr.append(42) 83 | 84 | # Monitor mem usage 85 | thread = MemInfoThread() 86 | thread.start() 87 | 88 | # Go crazy! 89 | thread2 = MemUseThread() 90 | thread2.start() 91 | 92 | # Only allow running for a few seconds to keep system functional 93 | time.sleep(4) 94 | 95 | # Print the results 96 | print "Avg Kb/s: ", str(totalspeed/totalcount) 97 | 98 | # Force quit 99 | exit() 100 | -------------------------------------------------------------------------------- /ATTACHMENTS/Archive/SleightOfHand/NAT%20Doc.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Archive/SleightOfHand/NAT%20Doc.pdf -------------------------------------------------------------------------------- /ATTACHMENTS/Archive/SleightOfHand/NAT_traversal.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Archive/SleightOfHand/NAT_traversal.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/ContactableVsAdvertising/contactablevsadvertising.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/ContactableVsAdvertising/contactablevsadvertising.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/CumulativeTimeOnline/cumulativetimeonline.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/CumulativeTimeOnline/cumulativetimeonline.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/CumulativeUptime/cumulativeuptime.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/CumulativeUptime/cumulativeuptime.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/DelaySpace/allpairsping.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/DelaySpace/allpairsping.png -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/DelaySpace/conquer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/DelaySpace/conquer.png -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/DelaySpace/controller.repy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/DelaySpace/controller.repy -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/DelaySpace/measurements.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/DelaySpace/measurements.png -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/DelaySpace/node.repy: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/DelaySpace/node.repy -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/MobileNodes/mobilenodes.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/MobileNodes/mobilenodes.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/NodesOnline/nodesonline.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/NodesOnline/nodesonline.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/CollectedNodeData/UnstableNodes/unstablenodes.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CollectedNodeData/UnstableNodes/unstablenodes.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/ContainmentInSeattle/ContainmentInSeattle.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/ContainmentInSeattle/ContainmentInSeattle.pdf -------------------------------------------------------------------------------- /ATTACHMENTS/CustomInstallerBuilder/cib-update.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CustomInstallerBuilder/cib-update.png -------------------------------------------------------------------------------- /ATTACHMENTS/CustomInstallerBuilder/cib.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CustomInstallerBuilder/cib.png -------------------------------------------------------------------------------- /ATTACHMENTS/CustomInstallerBuilder/customized_installer_builder.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/CustomInstallerBuilder/customized_installer_builder.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/EducationalAssignments/LinkState/build_connectivity_map.py: -------------------------------------------------------------------------------- 1 | """ 2 | 3 | build_connectivity_map.py 4 | 5 | 6 | November 17, 2008 7 | 8 | 9 | ivan@cs.washington.edu 10 | Ivan Beschastnikh 11 | 12 | Rewrite by Justin Cappos on Nov 6th, 2010. 13 | 14 | 15 | Builds a simple, random connectivity mesh between a set of nodes. 16 | 17 | Takes a file with hostnames and outputs a list of lines that look like: 18 | srcip1 destip1 19 | srcip2 destip2 20 | ... 21 | 22 | The output graph is asymmetric, so if you see srcip1 destip1, you won't see 23 | destip1 srcip1. The reason for this is to make it easier for the person 24 | using the graph to decide which node should initiate the TCP connection. 25 | """ 26 | 27 | 28 | 29 | 30 | import sys 31 | import random 32 | 33 | 34 | 35 | def build_map(hosts): 36 | """ 37 | Takes a list of hosts and returns a list of tuples of hosts. The graph 38 | will be connected and have between n-1 and n*n-1 / 4 edges. 39 | """ 40 | conn_map = [] 41 | random.shuffle(hosts) 42 | numhosts = len(hosts) 43 | 44 | # create an initial chain of connected nodes so that our map is 45 | # guaranteed to be connected no matter what happens 46 | for hostpos in range(numhosts - 1): 47 | conn_map.append((hosts[hostpos], hosts[hostpos+1])) 48 | 49 | 50 | # This should mean that at most 1/2 of the edges will be in the graph. 51 | # We previously added numhosts -1 52 | for linkattemptcount in range(numhosts*(numhosts-1)/4 - numhosts -1): 53 | 54 | # choose hosts randomly... 55 | host1 = random.sample(hosts,1)[0] 56 | host2 = random.sample(hosts,1)[0] 57 | 58 | # no links to the same node... 59 | if host1 == host2: 60 | continue 61 | # the link exists. 
62 | if (host1,host2) in conn_map or (host2,host1) in conn_map: 63 | continue 64 | 65 | conn_map.append((host1,host2)) 66 | 67 | return conn_map 68 | 69 | if __name__ == "__main__": 70 | if len(sys.argv) != 2: 71 | print "usage: " + sys.argv[0] +" [filename]" 72 | sys.exit(1) 73 | 74 | # read the hosts file containing one hostname/ip per line 75 | f = open(sys.argv[1],"r") 76 | lines = f.readlines() 77 | f.close() 78 | hosts = [] 79 | for line in lines: 80 | hosts.append(line.strip()) 81 | 82 | # build the connectivity map 83 | conn_map = build_map(hosts) 84 | 85 | conn_map.sort() 86 | # output the connectivity map to stdout 87 | for src_host, dst_host in conn_map: 88 | print str(src_host) + " " + str(dst_host) 89 | -------------------------------------------------------------------------------- /ATTACHMENTS/EducationalAssignments/PermissionsPartOne/repyv2_commit_3499642.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/EducationalAssignments/PermissionsPartOne/repyv2_commit_3499642.zip -------------------------------------------------------------------------------- /ATTACHMENTS/EducationalAssignments/PrivateWritePartOne/repyv2_commit_3499642.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/EducationalAssignments/PrivateWritePartOne/repyv2_commit_3499642.zip -------------------------------------------------------------------------------- /ATTACHMENTS/EducationalAssignments/SecureTuringCompleteSandboxAttack/CandidateSandboxes.txt: -------------------------------------------------------------------------------- 1 | Group 1: 2 | https://github.com/WilsonLiCode/SecureTuringCompleteSandbox 3 | https://github.com/piyushbjadhav/Sandbox 4 | https://github.com/CallMeSteve/AppSec/tree/master/Assignment%201%20Sandbox 5 | https://github.com/abhinav1911/Assignment1 6 | https://github.com/Justinvalcarcel/CS9163 7 | https://github.com/aot221/SandboxEnvironment 8 | 9 | Group 2: 10 | https://github.com/ceinfo/ApplicationSecurity 11 | https://github.com/mramdass/Turing_Complete_Sandbox 12 | https://github.com/fjm266/appSec1 13 | https://github.com/kellender/Secure_Turing_Complete_Sandbox- 14 | https://github.com/crimsonBeard/App-Sec.git 15 | https://github.com/PankajMoolrajani/python-sandbox.git 16 | -------------------------------------------------------------------------------- /ATTACHMENTS/EducationalAssignments/SecureTuringCompleteSandboxAttack/a-sandbox.py: -------------------------------------------------------------------------------- 1 | # This sandbox will only execute programs that contain the following characters 2 | allowedchars= "a0123456789=-*/+!<>:() \n\"A" 3 | # When it finishes, it will print whatever is in the variable a 4 | 5 | import sys 6 | 7 | 8 | programtext = open('a').read() 9 | for char in programtext: 10 | if char not in allowedchars: 11 | print "Error, not an 'a' program!" 
12 | sys.exit(1) 13 | 14 | programtext = programtext.replace("A0", "while") 15 | 16 | a=None 17 | A = {'A':str} 18 | exec(programtext) in A 19 | print A['a'] 20 | 21 | -------------------------------------------------------------------------------- /ATTACHMENTS/EducationalAssignments/SecureTuringCompleteSandboxAttack/easytocode.py: -------------------------------------------------------------------------------- 1 | # Author: Justin Cappos 2 | # Purpose: silly little python secure VM (attempt). This is meant to be fast 3 | # for me to code. 4 | 5 | # I'm just going to read this in, replace the globals / locals so they are 6 | # (almost) blank and strip out anything I can think might be an issue 7 | 8 | import sys 9 | filedata = open('silly.input','rb').read() 10 | 11 | FORBIDDEN_STRINGS = ['import', '.', '_', 'class', '>>', 'exec', 'assert','@','lambda','<<','slice','yield','try','except','global'] 12 | 13 | for forbidden_string in FORBIDDEN_STRINGS: 14 | if forbidden_string in filedata: 15 | print 'Cannot have "'+forbidden_string+'" in program.' 16 | sys.exit(1) 17 | 18 | namespace = {} 19 | namespace['__builtins__'] = {'False':False, 'True':True, 'None':None} 20 | 21 | exec filedata in namespace 22 | -------------------------------------------------------------------------------- /ATTACHMENTS/EducationalAssignments/SecureTuringCompleteSandboxAttack/potentiallyhackablesandbox.py: -------------------------------------------------------------------------------- 1 | import sys 2 | 3 | filedata = open('simple.file','rb').read() 4 | 5 | 6 | def safecall(programdata): 7 | BAD_STRINGS = ['import', 'globals', 'class'] 8 | 9 | for badstring in BAD_STRINGS: 10 | if badstring in programdata: 11 | print 'Error, cannot have "'+badstring+'" in program.' 12 | sys.exit(1) 13 | 14 | 15 | newbuiltins = {'None':None, 'False':False, 'True':True, 'range':range, '__builtins__':None, 'safecall':safecall} 16 | 17 | exec programdata in newbuiltins 18 | 19 | safecall(filedata) 20 | -------------------------------------------------------------------------------- /ATTACHMENTS/EducationalAssignments/SecurityLayerPartOne/repyv2_commit_3499642.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/EducationalAssignments/SecurityLayerPartOne/repyv2_commit_3499642.zip -------------------------------------------------------------------------------- /ATTACHMENTS/EducationalAssignments/TakeHome/Non-transitive%20connectivity%20image.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/EducationalAssignments/TakeHome/Non-transitive%20connectivity%20image.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/EducationalAssignments/WebServer/thewall.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 19 |
Mike : Hello
Me : Hi

13 |
14 | Name:
15 | Comment:
16 | 17 |
18 |
20 | 21 | 22 | 23 | -------------------------------------------------------------------------------- /ATTACHMENTS/GEC10/restrictions.default: -------------------------------------------------------------------------------- 1 | resource cpu .10 2 | resource memory 15000000 # 15 Million bytes 3 | resource diskused 100000000 # 100 MB 4 | resource events 10 5 | resource filewrite 100000 6 | resource fileread 100000 7 | resource filesopened 5 8 | resource insockets 5 9 | resource outsockets 5 10 | resource netsend 10000 11 | resource netrecv 10000 12 | resource loopsend 1000000 13 | resource looprecv 1000000 14 | resource lograte 30000 15 | resource random 100 16 | resource messport 12345 17 | resource connport 12345 18 | 19 | 20 | 21 | 22 | call gethostbyname_ex allow 23 | call sendmess allow 24 | call stopcomm allow # it doesn't make sense to restrict 25 | call recvmess allow 26 | call openconn allow 27 | call waitforconn allow 28 | call socket.close allow # let's not restrict 29 | call socket.send allow # let's not restrict 30 | call socket.recv allow # let's not restrict 31 | # open and file.__init__ both have built in restrictions... 32 | call open arg 0 is junk_test.out allow # can write to junk_test.out 33 | call open arg 1 is r allow # allow an explicit read 34 | call open arg 1 is rb allow # allow an explicit read 35 | call open noargs is 1 allow # allow an implicit read 36 | call file.__init__ arg 0 is junk_test.out allow # can write to junk_test.out 37 | call file.__init__ arg 1 is r allow # allow an explicit read 38 | call file.__init__ arg 1 is rb allow # allow an explicit read 39 | call file.__init__ noargs is 1 allow # allow an implicit read 40 | call file.close allow # shouldn't restrict 41 | call file.flush allow # they are free to use 42 | call file.next allow # free to use as well... 
43 | call file.read allow # allow read 44 | call file.readline allow # shouldn't restrict 45 | call file.readlines allow # shouldn't restrict 46 | call file.seek allow # seek doesn't restrict 47 | call file.write allow # shouldn't restrict (open restricts) 48 | call file.writelines allow # shouldn't restrict (open restricts) 49 | call sleep allow # harmless 50 | call settimer allow # we can't really do anything smart 51 | call canceltimer allow # should be okay 52 | call exitall allow # should be harmless 53 | 54 | call log.write allow 55 | call log.writelines allow 56 | call getmyip allow # They can get the external IP address 57 | call listdir allow # They can list the files they created 58 | call removefile allow # They can remove the files they create 59 | call randomfloat allow # can get random numbers 60 | call getruntime allow # can get the elapsed time 61 | call getlock allow # can get a mutex 62 | call get_thread_name allow # Allow getting the thread name 63 | call VirtualNamespace allow # Allow using VirtualNamespace's 64 | 65 | -------------------------------------------------------------------------------- /ATTACHMENTS/Libraries/ExperimentLibrary/seattlegeni_advertisement.publickey: -------------------------------------------------------------------------------- 1 | 65537 104283973845452278473567059872058302181099306478946860695753925866960062455387034090984928649172368336895511957180608166358198358557811956058533160134655085887217281584650941950088412008071410745320003819243027473383767411456759901168591653498109515401427898370664550473756850087580169500147037740069933812133 -------------------------------------------------------------------------------- /ATTACHMENTS/Libraries/Tcup/tcp.tgz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Libraries/Tcup/tcp.tgz -------------------------------------------------------------------------------- /ATTACHMENTS/Libraries/Tcup/tcup.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Libraries/Tcup/tcup.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/Lind/LindDesignDocument%20%281%29.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Lind/LindDesignDocument%20%281%29.png -------------------------------------------------------------------------------- /ATTACHMENTS/Lind/LindDesignDocument.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Lind/LindDesignDocument.png -------------------------------------------------------------------------------- /ATTACHMENTS/Local/RepoAccess/branches_sm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Local/RepoAccess/branches_sm.png -------------------------------------------------------------------------------- /ATTACHMENTS/Local/RepoAccess/fork.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Local/RepoAccess/fork.png -------------------------------------------------------------------------------- /ATTACHMENTS/Local/RepoAccess/fork_sm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Local/RepoAccess/fork_sm.png -------------------------------------------------------------------------------- /ATTACHMENTS/Local/RepoAccess/pull-request.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Local/RepoAccess/pull-request.png -------------------------------------------------------------------------------- /ATTACHMENTS/Local/RepoAccess/pull-request_sm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/Local/RepoAccess/pull-request_sm.png -------------------------------------------------------------------------------- /ATTACHMENTS/Local/VersionDeployment/blackbox-push-update-output.txt: -------------------------------------------------------------------------------- 1 | jsamuel@blackbox:/home/release$ ./push_update_to_all_clients.sh 2 | [sudo] password for jsamuel: 3 | Backing up /var/www/updatesite to /var/www/updatesite.backups/1254849680 4 | Done. 5 | -------------------------------------------------------------------------------- /ATTACHMENTS/Local/VersionDeployment/blackbox-rebuild-base-installers-output.txt: -------------------------------------------------------------------------------- 1 | jsamuel@blackbox:/home/release$ ./rebuild_base_installers_for_seattlegeni.sh 0.1m 2 | Archiving old base installers to /var/www/dist/old_base_installers 3 | Warning: failure after this point may leave seattlegeni with no base installers! 4 | Building new base installers at /var/www/dist 5 | Creating installer(s) - this may take a few moments.... 6 | Preparing all general non-OS-specific files.... 7 | Complete. 8 | Customizing installer(s) for the specified operating system(s).... 9 | 10 | Finished. 11 | 12 | The following base installers have been placed in /var/www/dist: 13 | seattle0.1m_win.zip 14 | seattle0.1m_linux.tgz 15 | seattle0.1m_mac.tgz 16 | seattle0.1m_winmob.zip 17 | Changing base installer symlinks used by seattlegeni. 18 | /var/www/dist /home/release 19 | /home/release 20 | New base installers created and installed for seattlegeni. 
21 | -------------------------------------------------------------------------------- /ATTACHMENTS/NatNodes/NatNodes.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/NatNodes/NatNodes.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/NetworkApiSemantics/socketstate.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/NetworkApiSemantics/socketstate.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/PerformanceIsolation/linux.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/PerformanceIsolation/linux.zip -------------------------------------------------------------------------------- /ATTACHMENTS/PerformanceIsolation/macos.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/PerformanceIsolation/macos.zip -------------------------------------------------------------------------------- /ATTACHMENTS/PerformanceIsolation/windows.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/PerformanceIsolation/windows.zip -------------------------------------------------------------------------------- /ATTACHMENTS/ProtoGeniIntegration/protogeni_integration.tgz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/ProtoGeniIntegration/protogeni_integration.tgz -------------------------------------------------------------------------------- /ATTACHMENTS/PythonTutorial/restrictions.test: -------------------------------------------------------------------------------- 1 | resource cpu .50 2 | resource memory 20000000 # 20 Million bytes 3 | resource diskused 10000000 # 10 MB 4 | resource events 10 5 | resource filewrite 10000 6 | resource fileread 10000 7 | resource filesopened 5 8 | resource insockets 5 9 | resource outsockets 5 10 | resource netsend 10000 11 | resource netrecv 10000 12 | resource loopsend 1000000 13 | resource looprecv 1000000 14 | resource lograte 30000 15 | resource random 100 16 | resource messport 12345 # use for getting an NTP update 17 | resource messport 12346 18 | resource connport 12345 19 | 20 | call gethostbyname_ex allow 21 | call sendmess allow # the local port type 22 | call stopcomm allow # it doesn't make sense to restrict 23 | call recvmess allow # Allow listening on this port 24 | call openconn allow # allow connections to this port 25 | call waitforconn allow # allow listening on this port 26 | call socket.close allow # let's not restrict 27 | call socket.send allow # let's not restrict 28 | call socket.recv allow # let's not restrict 29 | # open and file.__init__ both have built in restrictions... 
30 | call open allow # can write to junk_test.out 31 | call file.__init__ allow # can write to junk_test.out 32 | call file.close allow # shouldn't restrict 33 | call file.flush allow # they are free to use 34 | call file.next allow # free to use as well... 35 | call file.read allow # allow read 36 | call file.readline allow # shouldn't restrict 37 | call file.readlines allow # shouldn't restrict 38 | call file.seek allow # seek doesn't restrict 39 | call file.write allow # shouldn't restrict (open restricts) 40 | call file.writelines allow # shouldn't restrict (open restricts) 41 | call sleep allow # harmless 42 | call settimer allow # we can't really do anything smart 43 | call canceltimer allow # should be okay 44 | call exitall allow # should be harmless 45 | 46 | call log.write allow 47 | call log.writelines allow 48 | call getmyip allow # They can get the external IP address 49 | call listdir allow # They can list the files they created 50 | call removefile allow # They can remove the files they create 51 | call randomfloat allow # can get random numbers 52 | call getruntime allow # can get the elapsed time 53 | call getlock allow # can get a mutex 54 | -------------------------------------------------------------------------------- /ATTACHMENTS/RepoConfigurationForWindows/Create%20key.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepoConfigurationForWindows/Create%20key.png -------------------------------------------------------------------------------- /ATTACHMENTS/RepoConfigurationForWindows/WinSCP%20main%20dialog.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepoConfigurationForWindows/WinSCP%20main%20dialog.png -------------------------------------------------------------------------------- /ATTACHMENTS/RepoConfigurationForWindows/putty%20-%20Accept%20key.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepoConfigurationForWindows/putty%20-%20Accept%20key.png -------------------------------------------------------------------------------- /ATTACHMENTS/RepoConfigurationForWindows/puttygen%20-%20add%20comment.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepoConfigurationForWindows/puttygen%20-%20add%20comment.png -------------------------------------------------------------------------------- /ATTACHMENTS/RepoConfigurationForWindows/puttygen%20-%20no%20passphrase.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepoConfigurationForWindows/puttygen%20-%20no%20passphrase.png -------------------------------------------------------------------------------- /ATTACHMENTS/RepoConfigurationForWindows/winscp%20-%20Create%20Folder.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepoConfigurationForWindows/winscp%20-%20Create%20Folder.png -------------------------------------------------------------------------------- /ATTACHMENTS/RepoConfigurationForWindows/winscp%20-%20connect%20dialog.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepoConfigurationForWindows/winscp%20-%20connect%20dialog.png -------------------------------------------------------------------------------- /ATTACHMENTS/RepoConfigurationForWindows/winscp%20-%20warning%20dialog.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepoConfigurationForWindows/winscp%20-%20warning%20dialog.png -------------------------------------------------------------------------------- /ATTACHMENTS/RepyMapReduce/stage3.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepyMapReduce/stage3.PNG -------------------------------------------------------------------------------- /ATTACHMENTS/RepyTutorial/SetTimer.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepyTutorial/SetTimer.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/RepyTutorial/clearinghouseport.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepyTutorial/clearinghouseport.png -------------------------------------------------------------------------------- /ATTACHMENTS/RepyTutorial/example.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepyTutorial/example.zip -------------------------------------------------------------------------------- /ATTACHMENTS/RepyTutorial/geniport.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepyTutorial/geniport.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/RepyV2Tutorial/clearinghouseport.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepyV2Tutorial/clearinghouseport.png -------------------------------------------------------------------------------- /ATTACHMENTS/RepyV2Tutorial/example.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/RepyV2Tutorial/example.zip -------------------------------------------------------------------------------- /ATTACHMENTS/SeattleGeniDesign/seattlegeni_backend.gif: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/SeattleGeniDesign/seattlegeni_backend.gif -------------------------------------------------------------------------------- /ATTACHMENTS/SeattleGeniDesign/seattlegeni_website.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/SeattleGeniDesign/seattlegeni_website.gif -------------------------------------------------------------------------------- /ATTACHMENTS/SeattleOnAndroid/seattle-on-android.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/SeattleOnAndroid/seattle-on-android.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/TryRepy/try-repy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/TryRepy/try-repy.png -------------------------------------------------------------------------------- /ATTACHMENTS/UnderstandingSeattle/SeattleComponents/SeattlePicture1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/UnderstandingSeattle/SeattleComponents/SeattlePicture1.jpg -------------------------------------------------------------------------------- /ATTACHMENTS/UnderstandingSeattle/SeattleComponents/componentdiagram.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/UnderstandingSeattle/SeattleComponents/componentdiagram.gif -------------------------------------------------------------------------------- /ATTACHMENTS/huxiang/huxiang_webserver.tar.gz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/huxiang/huxiang_webserver.tar.gz -------------------------------------------------------------------------------- /ATTACHMENTS/huxiang/huxiang_webserver_debug.tar.gz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/SeattleTestbed/docs/6bf1b8dbb89ce4fb8b9b4cd18fe1cc386f3236fd/ATTACHMENTS/huxiang/huxiang_webserver_debug.tar.gz -------------------------------------------------------------------------------- /Applications/Autograder/Database.md: -------------------------------------------------------------------------------- 1 | **Entities and actions** 2 | 3 | **Prof** 4 | ``` 5 | Create/Edit Assignment 6 | Delete Assignment 7 | Grade 8 | one student 9 | all students 10 | Re-grade 11 | Create / Manage students 12 | Download student submissions 13 | Log_in 14 | ``` 15 | 16 | 17 | **Student** 18 | ``` 19 | Submit / re-submit solution 20 | Log in 21 | See previous submissions 22 | ``` 23 | 24 | 25 | **Dev** 26 | ``` 27 | Create prof 28 | log in 29 | ``` 30 | 31 | 32 | 33 | **Grading Hierchy** 34 | 35 | ``` 36 | Class D 37 | Assign 1 38 | Student A (11/12) 39 | Solution A 
(11/12) 40 | TestSuite A (5/6) 41 | TestSuiteB (6/6) 42 | testcase1 check 43 | testcase2 X 44 | testcase3 not graded 45 | 46 | 47 | ``` 48 | 49 | 50 | 51 | 52 | **Database relations** 53 | 54 | 55 | Prof : email, pws, ''classes'', name 56 | 57 | Student: email pws, ''classes, solutions'' 58 | 59 | Class: desc, ''assignments, prof, students'' 60 | 61 | Assignment: Desc, deadline, ''testsuites'' 62 | 63 | Solution: date submitted, code, ''student, assignment'' 64 | 65 | Grade: date graded, grade, output, ''solution, testcase'' 66 | 67 | Test Suite: nsfile, ''assignment'' 68 | 69 | Test Case: desc ''nsfile'' 70 | 71 | TestCaseNSFileMap: node, code, student filename,'' test case.'' 72 | 73 | ToGrade : Solution, status 74 | ``` 75 | "submitted for grading" 76 | "grading..." 77 | "grading suite 2/5" 78 | ``` 79 | -------------------------------------------------------------------------------- /Applications/GeoIpServer.md: -------------------------------------------------------------------------------- 1 | # Running a GeoIP Server 2 | 3 | In order to allow geolocation lookups using the `geoip_client.repy` library, we have provided a GeoIP XML-RPC server for remote procedure call. While an instance of the server is running persistently at http://geoipserver.poly.edu:12679 and http://geoipserver2.poly.edu:12679, the server program is available at `geoip_server/geoip_server.py`. 4 | 5 | ## Usage 6 | ``` 7 | $ python geoip_server.py /path/to/GeoIP.dat PORT 8 | ``` 9 | 10 | `geoip_server.py` requires the Python library [pygeoip](https://pypi.python.org/pypi/pygeoip/) and a valid MaxMind geolocation database, such as [GeoLite City](http://www.maxmind.com/app/geolitecity). The command-line arguments to geoip_server.py are the path to the GeoIP.dat file and the port on which to host the server. 11 | 12 | ## To Start the server 13 | Log in to the appropriate GeoIP server machine and make sure you have the latest version of the pygeoip module installed. If not, install it: 14 | ``` 15 | $ sudo pip install pygeoip 16 | ``` 17 | Now, run the program from the geoipserver account as follows: 18 | ``` 19 | $ screen -S geoipserver -d -m sh /home/geoipserver/start_geoip_service.sh 20 | ``` 21 | 22 | Once the server is up and running, you can connect to it using the `geoip_client.repy` function `geoip_init_client()`, passing in the address (http://HOSTNAME:PORT) of the server. 23 | 24 | ## Documentation 25 | Please refer to [browser:seattle/trunk/geoip_server/geoip_server.py the file itself] for further information. -------------------------------------------------------------------------------- /Archive/AutograderCodeSprint.md: -------------------------------------------------------------------------------- 1 | # Autograder Code Sprint Strategy 2 | 3 | This page will hopefully get us more organized in our approach to adding automatic grading support for the Seattle release. Anyone is welcome to edit this page in a constructive manner in the spirit of conversation! 4 | 5 | ## Completed work 6 | * Initial demo version of the autograder is up and running. This was a good starting point and now we are ready to work on the production version! We have lots of bugs to fix, and features to design. 7 | 8 | ## Plan of Attack 9 | * Features required for this release are listed at https://seattle.poly.edu/milestone/Autograder%20v1 10 | * Design and implement a way for course staff to specify an NS file, and ensure it meets our criteria for launching experiments with the autograder.
11 | * Design and implement a way for course staff to specify test meta-data, specifically a mapping of repy-files to emulab nodes for each test case. This should integrate well with our system for specifying NS files 12 | * Re-design the data-management layer to use a database instead of files and directories 13 | * Improve output of grade functions 14 | * Fix bugs and add tests that ensure good integration between the autograder logic and nm_remote_api 15 | 16 | 17 | # Current Codemonkeys 18 | 19 | * Eric Kimbrel 20 | * Alper Sarikaya 21 | * Sal Bagaveyev 22 | * Jenn Hanson -------------------------------------------------------------------------------- /Archive/BotMaster.md: -------------------------------------------------------------------------------- 1 | # BotMaster 2 | * Reading remote logs 3 | * "service interface" on nodes 4 | * backup software update mechanism. 5 | 6 | # Overview 7 | 8 | This team focuses on interacting with systems our software is installed on. It's essential that the operations we perform are not considered "invasive" by the user. 9 | 10 | ## Reading Remote Logs 11 | 12 | Reading remote node manager / software updater logs. 13 | 14 | Coding Sprint: Jan 24th 15 | 16 | Jan 10th - Jan 17th 17 | 18 | Sean: Understand circular logging as used in Repy and try to change the software updater to use it. 19 | 20 | Brent: Look to add circular logging to the node manager and look at handling exceptions in a good way. 21 | 22 | Jan 17th - Jan 22nd 23 | 24 | Both: Ensure logging works for your application and test that files can be retrieved from our "service VM". 25 | 26 | -------------------------------------------------------------------------------- /Archive/CollectedNodeData.md: -------------------------------------------------------------------------------- 1 | # Graphs of Seattle's Nodes 2 | 3 | This page links to graphs showing various properties of Seattle nodes along with short explanations of the graphs. 4 | 5 | ---- 6 | 7 | ---- 8 | 9 | 10 | 11 | ## Node Monitoring Graphs 12 | ---- 13 | 14 | These graphs were compiled from data collected between February 24, 2011 and April 6, 2011. The data was collected using the [wiki:Libraries/ExperimentLibrary Experiment Library] to monitor the nodes advertising under the Seattle Clearinghouse node announcement public key. The monitoring script determined whether nodes were actually contactable by attempting to browse the node whenever it started or stopped advertising. 15 | 16 | * [wiki:CollectedNodeData/NodesOnline Number of Nodes Online Over Time] 17 | * [wiki:CollectedNodeData/CumulativeTimeOnline Cumulative Proportion of Time Online] 18 | * [wiki:CollectedNodeData/ContactableVsAdvertising Scatter Plot of Proportions of Time Contactable and Advertising] 19 | * [wiki:CollectedNodeData/MobileNodes Nodes Which Changed IP Address] 20 | * [wiki:CollectedNodeData/CumulativeUptime Cumulative Time Contactable of Unstable Nodes] 21 | * [wiki:CollectedNodeData/UnstableNodes Advertising and Contactability Patterns of Unstable Nodes] 22 | 23 | 24 | 25 | ## Other Graphs and Data 26 | ---- 27 | 28 | If you have wiki access, feel free to add to this section any interesting graphs or other data you have. To add your graph to the wiki, create a subpage of this page (i.e. something along the lines of CollectedNodeData/YourGraphName). The page should have a heading, the graph, a several-sentence description explaining the graph, and a line saying who collected the data and when it was collected.
For examples, see any of the graphs in the above section. Then, add a bullet point and a link with a descriptive title to this section. 29 | 30 | * [wiki:CollectedNodeData/DelaySpace Exploring the Internet Delay Space with Seattle (Includes an advanced allpairs-ping implementation)] -------------------------------------------------------------------------------- /Archive/CollectedNodeData/ContactableVsAdvertising.md: -------------------------------------------------------------------------------- 1 | # Scatter Plot of Proportions of Time Contactable and Advertising 2 | 3 | [[Image(contactablevsadvertising.jpg)]] 4 | 5 | This graph illustrates the high correlation between time spent advertising and time spent contactable. For the most part, the two values are the same. However, a number of nodes are contactable slightly more often than they are advertising, and some IP addresses spent a significant proportion of the time advertising without actually being contactable. 6 | 7 | Data collected by sportzer between 2/24/2011 and 4/6/2011 -------------------------------------------------------------------------------- /Archive/CollectedNodeData/CumulativeTimeOnline.md: -------------------------------------------------------------------------------- 1 | # Cumulative Proportion of Time Online 2 | 3 | [[Image(cumulativetimeonline.jpg)]] 4 | 5 | This graph shows the cumulative distribution of the proportion of time spent online. This is measured as a fraction of the time from when the node was first seen advertising to the last time recorded in the log, since otherwise large numbers of new nodes coming online would create artifacts in the graph. For reference, the total number of nodes referenced in the graph is 1030. Therefore, about 400 nodes were online almost the entire time, and around 200 nodes were online at most momentarily (87 correspond to IP addresses that were, in fact, never contactable and so aren't necessarily all distinct nodes). Another thing to note is that, for the most part, the distributions for time advertising and time contactable are approximately the same, but there were more nodes advertising for short proportions of the time than nodes contactable for the same proportion. 6 | 7 | Data collected by sportzer between 2/24/2011 and 4/6/2011 -------------------------------------------------------------------------------- /Archive/CollectedNodeData/CumulativeUptime.md: -------------------------------------------------------------------------------- 1 | # Cumulative Time Contactable of Unstable Nodes 2 | 3 | [[Image(cumulativeuptime.jpg)]] 4 | 5 | This graph shows cumulative time spent online for each unstable node (defined as a node that went offline more than 16 times during the monitoring period). Each line rises at the slope of the topmost lines while the node is contactable and is flat while the node is not contactable. Even the unstable nodes are clustered towards being online most of the time, but a few were rarely online or display odd slopes that indicate the node frequently switched between being contactable and not contactable.
6 | 7 | 8 | Data collected by sportzer between 2/24/2011 and 4/6/2011 -------------------------------------------------------------------------------- /Archive/CollectedNodeData/MobileNodes.md: -------------------------------------------------------------------------------- 1 | # Nodes which Changed IP Address 2 | 3 | [[Image(mobilenodes.jpg)]] 4 | 5 | This graph displays lines corresponding to the periods of time when the selected nodes were online, and uses different colors to give an idea of when nodes changed IP address. The graph shows that Seattle has some mobile nodes, although some of the nodes in the graph probably just have dynamically assigned IP addresses. Also visible in this graph are some nice diurnal cycles. One last point of interest is that while the graph clearly shows it isn't safe to assume that a node will always have the same IP address, less obvious is that a given advertising IP address won't necessarily correspond to the same node over time. The bottom two rows, for example, are nodes in the same network which switched IP addresses when they came back online. 6 | 7 | 8 | Data collected by sportzer between 2/24/2011 and 4/6/2011 -------------------------------------------------------------------------------- /Archive/CollectedNodeData/NodesOnline.md: -------------------------------------------------------------------------------- 1 | # Number of Nodes Online Over Time 2 | 3 | [[Image(nodesonline.jpg)]] 4 | 5 | This graph shows, for any given point in time, the number of Seattle production nodes advertising, the number contactable, and the number which are both advertising and contactable. As can be seen, these different methods of estimating the number of Seattle nodes online produce roughly the same result. While the number of active Seattle nodes fluctuates somewhat, overall it is fairly stable. The exceptions are the sudden jumps in numbers, which are caused by available PlanetLab nodes being restarted, new collections of nodes coming online, or other similar events. 6 | 7 | Data collected by sportzer between 2/24/2011 and 4/6/2011 -------------------------------------------------------------------------------- /Archive/CollectedNodeData/UnstableNodes.md: -------------------------------------------------------------------------------- 1 | # Advertising and Contactability Patterns of Unstable Nodes 2 | 3 | [[Image(unstablenodes.jpg)]] 4 | 5 | This graph shows the high correlation between advertising nodes and contactable nodes, in addition to demonstrating that many of the unstable nodes have trouble advertising. The lines are periods of time online, and the diamonds are endpoints of those periods. The fuzzy black areas are an artifact created by numerous diamonds overlapping each other. Diamonds that occur in the middle of lines are points where the node stopped advertising (and possibly also couldn't be contacted) for a relatively brief period of time. There are noticeably more such points for the advertising data than the contactability data, suggesting that the unstable nodes have some trouble advertising successfully.
6 | 7 | Data collected by sportzer between 2/24/2011 and 4/6/2011 -------------------------------------------------------------------------------- /Archive/CustominstallerBuilderTesting.md: -------------------------------------------------------------------------------- 1 | # Testing the Custom Installer Builder 2 | This page outlines how to deploy the [wiki:CustomInstallerBuilder Custom Installer Builder] using Django's built-in test server on a Linux system. For production-level deployment on a real web server, reference [wiki:Archive/CustomInstallerBuilderDeploymentWithModPython these instructions] instead. 3 | 4 | We'll assume that you have completed the previous steps to [wiki:Archive/CustomInstallerBuilderInstallation install], [wiki:CustomInstallerBuilderConfiguration configure], and [wiki:CustomInstallerBuilderCustomizationAndBuild customize] your local Custom Installer Builder already. 5 | 6 | ---- 7 | 8 | ---- 9 | ## Revisit Django settings 10 | 11 | Under the Custom Installer Builder's user account, edit `custominstallerbuilder/local/settings.py` to match your local configuration. Ensure that the following items are set up correctly: 12 | 13 | ``` 14 | SERVE_STATIC = True 15 | ... 16 | BASE_URL = 'http://your-actual-custominstallerbuilder-test-url:PORT/' # Note the trailing slash! 17 | ... 18 | PROJECT_URL = BASE_URL 19 | ``` 20 | 21 | If you are testing locally, you will use ```http://127.0.0.1:PORT/``` for the `BASE_URL`. You may use your public facing IP if you want it to be accessible via the Internet. ```PORT``` must be an open port number greater than 1024. For smaller port numbers, administrative privileges are required. 22 | 23 | 24 | 25 | ## Start Django test server 26 | Ensure that the environment variable `PYTHONPATH` includes the Repy runtime directory. Then, from the `~/custominstallerbuilder/` directory, run the Django test server: 27 | 28 | ```sh 29 | $ export PYTHONPATH=$PYTHONPATH:/home/cib/custominstallerbuilder/repy_runtime 30 | $ cd ~/custominstallerbuilder 31 | $ ./manage.py runserver 0.0.0.0:PORT # This will log some information to the prompt 32 | Validating models... 33 | 34 | 0 errors found. 35 | Django version 1.3.5, using settings 'local.settings' 36 | Development server is running at http://0.0.0.0:8080/ 37 | Quit the server with CONTROL-C. 38 | ``` 39 | 40 | You should now be able to access your Custom Installer Builder test server at the address specified for `BASE_URL` above. Don't worry if the media files (images, CSS, JavaScript) are missing for the time being. The [wiki:Archive/CustomInstallerBuilderDeploymentWithModPython production deployment guide] will add the missing bits and pieces of configuration to rectify that. 41 | -------------------------------------------------------------------------------- /Archive/EyeCandy.md: -------------------------------------------------------------------------------- 1 | # Eye candy 2 | 3 | * Seattle/GENI website 4 | * Web back end improvements. 5 | 6 | # Overview 7 | 8 | This force is concerned with the GENI web portal. This portal consists of html/css/javascript code that executes in the browser and web back end code that runs on the server and interacts with the database as well as Seattle nodes. There are numerous improvements that need to be implemented and various extensions that we are going to work on throughout the quarter. 9 | 10 | There is a production server that runs the GENI portal, and test server that is used for development work. 
We will periodically push updates to the production server once the new code has been extensively tested and has been verified to work. 11 | 12 | ## Meetings 13 | 14 | Wednesday 10:30-11:30pm in CSE 314. 15 | 16 | # Coding Sprints 17 | * 1/22 (Sprint) 7PM, and 1/24 (Integration) 2-5 PM 18 | * Parallelism addition to resource acquisition/release in django 19 | * New design of the GENI portal 20 | 21 | # Next Meeting 22 | * Set up a coding sprint date and deadlines 23 | * Current django back-end overview and tutorial 24 | * Review of new portal design 25 | 26 | # Future 27 | 28 | * Testing django functionality using django's testing suite 29 | * Portal Ajaxification 30 | 31 | # Ajax API 32 | 33 | * All Ajax calls are POSTs 34 | 35 | 36 | # MyGENI Ajax calls 37 | 38 | * ajax_getcredits : returns list of credits 39 | * args {} 40 | * return : [{'username' : 'sean', percent : 16} ...] 41 | 42 | * ajax_getshares : returns list of shares 43 | * args {} 44 | * return : [{'username' : 'sean', percent : 16} ...] 45 | 46 | * ajax_editshare : modifies existing share 47 | * args : {username: "sean", percent : "17"} 48 | * return : {"success" : True/False, "error" : "" on success or string with explanation of error} 49 | * note: if percent is 0 then the action is to remove the share 50 | 51 | * ajax_createshare : creates a new share with some user 52 | * args : {username : "sean", percent : "17"} 53 | * return : {"success", "error"} just like above 54 | * note: percent must be > 0 (local js check as well as remote django check) 55 | 56 | * ajax_getvessels : gets more vessels/VMs 57 | * args : {numvessels : 16, env : 'LAN' or 'WAN' or 'Random'} 58 | * return : {"success" : True/False, "error" as above, "mypercent" : 50, "vessels" : [{"vesselid" : vid, "status" : status, "expiresin" : expirein}...]} 59 | 60 | -------------------------------------------------------------------------------- /Archive/InfectionAndRecurrence.md: -------------------------------------------------------------------------------- 1 | # Infection and Recurrence 2 | 3 | # Overview 4 | 5 | Our mission is to make it as simple as possible for us and the users to install seattle on their system, and to make sure that we can keep our installations running and up to date for as long as possible. 6 | 7 | ## Coding Sprint 8 | 9 | * January 17th 10 | 11 | ## Jan 10th - Jan 15th 12 | 13 | * Sal: Bring Justin's Python "deployment" program up to the standards specified in the code style guidelines and add unit tests. Ensure that information is being logged from the program running on remote systems. 14 | 15 | * Brent: Check to see if the state of the files that are on the system is normal and if not log enough information that we can understand the issue. 16 | 17 | * Cosmin: Check the state of the running processes and if something is abnormal, log enough information that we understand the problem. 18 | 19 | * Carter: Build scripts that completely uninstall a failed install or otherwise clean up as needed. 20 | 21 | ## Jan 20th - Jan 27th 22 | 23 | * Brent: Go through the softwareupdater and its tests, checking for correctness, and documenting both update procedures and testing procedures. 24 | 25 | * Carter: Go through the process of creating the base installers and preparing the code for release to the software updaters. Document the procedure on the wiki. 26 | 27 | * Cosmin: Examine the current geni code and look for areas to improve. Determine which areas of the code need testing. 
28 | 29 | ## Interesting things to think about 30 | 31 | Many of the unit tests are timing based and so will fail on "slow" systems. Can we address this somehow? 32 | 33 | There are no repyportability / seattlelib unit tests. 34 | 35 | How do we prevent repy errors of the wrong type from having unit tests pass? 36 | 37 | Lots of extra files in the installer that don't need to be there. 38 | 39 | The installer builder / software updater builder has lots of unnecessary and distracting output 40 | 41 | A "config file" to get rid of repyconstants, hardcoding the version in nmmain, etc. 42 | 43 | Black box testing of installers / uninstallers 44 | 45 | Do we need a tarball that allows people to run Repy code without being a Seattle node? 46 | 47 | Clean up of installer 48 | 49 | Bug autopsies? 50 | -------------------------------------------------------------------------------- /Archive/Libraries/ExperimentLibrary.md: -------------------------------------------------------------------------------- 1 | # Seattle Experiment Library 2 | 3 | The Experiment Library provides a means of scripting interaction with individual nodes and VMs as well as with the Seattle Clearinghouse. Whereas [SeattleShell Seash] is for interactive use, the Experiment Library allows for writing Python scripts to acquire VMs, run experiments, download logs, etc. 4 | 5 | # Getting the Experiment Library 6 | 7 | You can obtain the experiment library by running the following commands: 8 | 9 | ``` 10 | mkdir experimentlibrary 11 | cd experimentlibrary 12 | svn export https://seattle.poly.edu/svn/seattle/trunk/experimentmanager 13 | svn export https://seattle.poly.edu/svn/seattle/trunk/seattlegeni/xmlrpc_clients/seattleclearinghouse_xmlrpc.py 14 | svn export https://seattle.poly.edu/svn/seattle/trunk/seattlelib 15 | svn export https://seattle.poly.edu/svn/seattle/trunk/portability 16 | svn export https://seattle.poly.edu/svn/seattle/trunk/repy 17 | rm -rf seattlelib/test seattlelib/tests portability/tests repy/apps repy/tests repy/winAPItests 18 | mv experimentmanager/* seattlelib/* portability/* repy/* ./ 19 | rmdir experimentmanager seattlelib portability repy 20 | touch servicelogger.py 21 | cd .. 22 | ``` 23 | 24 | Running the above commands will give you a directory named `experimentlibrary/` that contains the file `experimentlib.py` as well as all necessary supporting files. The file `experimentlib.py` is the Experiment Library that you will use. 25 | 26 | # Usage 27 | 28 | At the top of your own Python script add: 29 | 30 | ```python 31 | import sys 32 | sys.path.append("/path/to/experimentlibrary") 33 | import experimentlib 34 | ``` 35 | 36 | You can then use the Experiment Library in your script by calling the methods of the `experimentlib` module you imported. 37 | 38 | All of the public constants, variables, and functions in `experimentlib.py` are meant to be used by your script. The private identifiers (the ones beginning with an underscore) should not need to be used by your script. 39 | 40 | The comments at the top of `experimentlib.py` contain useful information about the data types and exceptions used by the Experiment Library. All of the public functions have comments to explain their usage. The examples/ directory that is in your experimentlibrary directory (if you performed the steps above) contains a handful of scripts that show different ways to use the Experiment Library. 
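To give a flavor of what such a script can look like, here is a minimal sketch that prints the status of every vessel a user controls. The function and exception names used below (`create_identity_from_key_files`, `lookup_node_locations_by_identity`, `browse_node`, `get_vessel_status`, and `SeattleExperimentError`) are assumptions based on the library's naming conventions; check the comments in `experimentlib.py` and the bundled examples for the authoritative interface.

```python
import sys
sys.path.append("/path/to/experimentlibrary")
import experimentlib

# Assumption: an identity is built from the user's public/private key files.
identity = experimentlib.create_identity_from_key_files("user.publickey",
                                                        "user.privatekey")

# Find nodes advertising under this identity's key, then browse each one
# and print the status of every vessel we control there.
for nodelocation in experimentlib.lookup_node_locations_by_identity(identity):
  try:
    for vesselhandle in experimentlib.browse_node(nodelocation, identity):
      print nodelocation, experimentlib.get_vessel_status(vesselhandle, identity)
  except experimentlib.SeattleExperimentError, e:
    print "Failed to browse " + str(nodelocation) + ": " + str(e)
```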
41 | 42 | If you run into problems, find bugs, or feel that some useful functionality is missing, send an email to `seattle-devel -at- cs -dot- washington -dot- edu.` 43 | 44 | # Documentation 45 | 46 | The best documentation is just to look at [browser:seattle/trunk/experimentmanager/experimentlib.py the comments in experimentlib.py]. 47 | 48 | Second to that is to look at [browser:seattle/trunk/experimentmanager/examples/ the examples]. -------------------------------------------------------------------------------- /Archive/Lisping.md: -------------------------------------------------------------------------------- 1 | # Lisping 2 | 3 | * Porting MapReduce 4 | * Hadoop integration 5 | * integration with end applications 6 | 7 | # Overview 8 | 9 | The [octopy project](http://code.google.com/p/octopy/) aims to produce a very rudimentary implementation of MapReduce in python. Here is a [blog post](http://ebiquity.umbc.edu/blogger/2009/01/02/octopy-quick-and-easy-mapreduce-for-python/) with some more details of the project. Lisping will concentrate on porting this code to Seattle as a user program. The port should be easy to do as octopy is written in python and is a single file of about 700 lines of code. The next steps for Lisping is to work on numerous improvements to this code to make it more fault tolerant and more useful. The ideal is to reproduce a minimal Hadoop-like service in Seattle. 10 | 11 | # Work Completed 12 | 13 | * Replica started! 14 | 15 | ## Meetings 16 | * Wednesdays 11:30 - 12:30 in CSE 314 17 | 18 | ## Timeline 19 | 20 | Let's make things arbitrary due on the Thursday of each week to help speed me along! 21 | 22 | ### February 5th 23 | * Fully complete a primary - replica - primary map-reduce pipeline 24 | * Start with simplest case (single job, single reducer) 25 | * Forget about partitioner 26 | * Come up with complete protocol 27 | 28 | ### February 12th 29 | 30 | * Fully complete primary - replica-replica-replica-replica - primary map-reduce pipeline 31 | * Implement partitioning into replicas, involves the following 32 | * Asynchronous receiving of map data from other replicas 33 | * Ability for users to define their own hash for partitioning 34 | * Implement replica/client list for both primary and replica to keep tabs on each node 35 | 36 | ### February 19th 37 | 38 | * Fault-tolerance work 39 | * Enable polling of replica/primary state 40 | * Shore up protocol work, what differentiates a status request from a data transfer? 41 | * Primary/replicas have heartbeat, primary keeps track of node state 42 | * Preliminary fault-mitigation 43 | * Restart a job on another replica (keep pipeline linear, it can become more efficient later) 44 | 45 | ### February 26th 46 | 47 | * Fault-tolerance work 48 | * Improve upon past tasks 49 | 50 | ### March 5th 51 | 52 | * Finish up map-reduce implementation 53 | * Start coding a sample map-reduce pipeline to complete a unique task 54 | * Start documentation on how to use mapred.repy 55 | 56 | ### March 12th (Due Date) 57 | 58 | * Release on Seattle wiki, explanation of code 59 | * Release sample application of Beraber map-reduce 60 | 61 | -------------------------------------------------------------------------------- /Archive/Local/CentralizedAdvertiseService.md: -------------------------------------------------------------------------------- 1 | # Centralized Advertise Service 2 | 3 | The Centralized Advertise daemon implements a hash table abstraction in a centralized manner. 
This service is used by Seattle nodes in coordination with DOR and OpenDHT. This service runs on seattle.cs, which is also known as satya.cs 4 | 5 | ---- 6 | 7 | ---- 8 | 9 | ## Configuration 10 | ---- 11 | 12 | * The advertise server files reside on the machine seattle.poly.edu and the current version of the central advertise server is located in the directory /home/geni/advertiseserver_deployed. 13 | 14 | 15 | 16 | ## Deployment 17 | ---- 18 | 19 | There is a deployment script which backs up the current files in the folder advertiseserver_deployed/ and replaces all the files in that directory with the latest files from the trunk. In order to have the latest files, do an svn update and run the script deploy_advertiseserver.sh 20 | ``` 21 | geni@satya:~$ ./deploy_advertiseserver.sh 22 | ``` 23 | This will update all the files, but will not start up the advertise server. 24 | 25 | 26 | 27 | ## Starting/Stopping the Service 28 | ---- 29 | ### Starting Central Advertise Server 30 | To start the advertise server you must first be logged into seattle.poly.edu as the user: geni. Note, that you have to log in as 'geni' rather then sudo into the account. We will deploy the central advertise server in 'SCREEN' mode, that way even if we close the connection to the machine, the central advertise server will keep running. If there was a previous version of SCREEN already running we want to re-open it and then start up the advertise server. Follow the commands below: 31 | ``` 32 | geni@satya:~$ screen -r 33 | 34 | [Once in SCREEN mode] 35 | geni@satya:~/advertiseserver_deployed$ python repy.py restrictions.advertiseserver advertiseserver.repy > log.stdout 2> log.stderr 36 | ``` 37 | 38 | 39 | ### Stopping Central Advertise Server 40 | To stop the service, perform a **kill $PID** where $PID is the process id of the command above, which you can find using the following command: 41 | 42 | ``` 43 | $ ps auwx | grep advertise | grep -v grep 44 | ``` 45 | 46 | An alternative way to kill the central advertise server would be to reopen the SCREEN mode under the geni account and send a kill signal (ctrl-c), to the terminal that is running the central advertise server. -------------------------------------------------------------------------------- /Archive/Local/ContributorContactInfo.md: -------------------------------------------------------------------------------- 1 | # Contributor Contact Information 2 | **If you are working on Seattle, please add your information to this table.** Please include as much information as you are willing to disclose, and keep the list sorted alphabetically by SVN username. 3 | 4 | You can often find other Seattle contributors on IRC in room ```#seattle``` on ```freenode.net```. 
5 | 6 | 7 | | **Name** | **Email** | **SVN Username** | **Trac Username** | **Phone** | 8 | | Justin Cappos | justinc at cs | justinc | justinc | | 9 | | Alex Hanson | alexjh at cs | alexjh | alexjh | | 10 | | Moshe Kaplan | mk.moshe.kaplan at gmail | mkaplan | mkaplan | | 11 | | Alan Loh | yaluen at uw | alanloh | alanloh | 425-273-5651 | 12 | | Sebastian Morgan | sebass63 at gmail | sebass63 | sebass63 | 206-498-5597 | 13 | | Monzur Muhammad | monzum at cs | monzum | monzum | | 14 | | Steven Portzer | sportzer at cs | sportzer | sportzer | | 15 | | Jeff Rasley | jeffra45 at cs | jeffra45 | jeffra45 | | 16 | | Sushant Bhadkamkar | sushant at nyu | sushant | sushant | | 17 | 18 | -------------------------------------------------------------------------------- /Archive/Local/SslRenewal.md: -------------------------------------------------------------------------------- 1 | # Renewing SSL Certificates 2 | 3 | 4 | 5 | ## Introduction 6 | This page describes some of the steps that need to be taken in order to renew the SSL certificates for the various machines and servers that we have running for the Seattle projects. We have acquired our certificates for our servers from [GoDaddy.com](http://www.godaddy.com/). We have two main servers running for the Seattle project, and their certificates will need to be renewed when their expiration dates approach: 7 | * [Seattle](http://seattle.poly.edu) 8 | * [Seattle Clearinghouse](http://seattleclearinghouse.poly.edu) 9 | 10 | To check the current expiration date for a server, go to the site and click on the SSL indicator right next to the address bar (it might be green or blue). Once you click on it, a little dialog should pop up. Click on 'More Information' and another menu should pop up. Under the 'Security' section, click on 'View Certificate' and it should show you the SSL certificate with the expiration date. 11 | 12 | ## Renewing SSL Certificate 13 | If you do not have access to the GoDaddy account, inform someone who does. If you can log into the account, follow this [Renewal Link](http://help.godaddy.com/article/864) and follow the directions. 14 | 15 | 16 | 17 | 18 | ## Installing the renewed SSL Certificate 19 | Once you have requested the renewal of the SSL certificate and it has been approved, you need to download the new certificates and install them. You can follow the instructions at the [Download Link](http://help.godaddy.com/article/4754) to download the certificates. This will download a zip file with two files: the certificate for the server and an intermediate certificate. Both of these are necessary. 20 | 21 | You can follow the instructions on the [Installation Page](http://help.godaddy.com/topic/752/article/5347) to install the new certificates. You should copy both the server certificate and the intermediate certificate to the /etc/apache2/ssl directory. **Before you copy over the files, make sure to back up the old certificates.** You shouldn't need to modify the Apache conf file if you reuse the old certificate file names, as it should already be configured properly. If you do need to modify it, the file resides in /etc/apache2/sites-available/default.
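For example, the backup-and-copy step might look like the following shell session. The certificate file names used here (`server.crt` and `gd_bundle.crt`) are placeholders rather than the actual names on the production machines; use whatever file names the existing Apache configuration references:

```
# Back up the currently installed certificates first (names are examples only).
sudo cp /etc/apache2/ssl/server.crt /etc/apache2/ssl/server.crt.bak
sudo cp /etc/apache2/ssl/gd_bundle.crt /etc/apache2/ssl/gd_bundle.crt.bak

# Copy the newly downloaded server and intermediate certificates into place.
sudo cp ~/new_certs/server.crt ~/new_certs/gd_bundle.crt /etc/apache2/ssl/
```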
22 | 23 | You should then gracefully restart the apache server by running the command: 24 | ``` 25 | /usr/sbin/apache2ctl graceful 26 | ``` -------------------------------------------------------------------------------- /Archive/Local/SvnService.md: -------------------------------------------------------------------------------- 1 | # Seattle SVN Repository 2 | 3 | Our project uses SVN for our version control. This page describes our configuration. 4 | 5 | ---- 6 | 7 | ---- 8 | 9 | ## Configuration 10 | ---- 11 | 12 | * Our SVN is installed in /var/local/svn/ 13 | * We do not use SVN built-in groups/users for authentication 14 | * Our SVN allows read-only anonymous access via HTTP (to everything in the SVN except for the /seattle/trunk/assignments directory) 15 | * Anyone who has SSH access to seattle.cs is automatically granted read-write access to the entire repository 16 | * We do ''not'' run the svnserve daemon (which speaks the svn:// protocol) 17 | 18 | 19 | 20 | ## SVN Hooks 21 | ---- 22 | 23 | SVN hooks allow one to run arbitrary scripts whenever some action is taken by a user. We have a single SVN hook: post-commit hook file which runs whenever a user successfully commit to the SVN. 24 | 25 | This hook is located here in our repository: /trunk/svn-hooks/post-commit 26 | 27 | The hook is deployed/installed here: /var/local/svn/hooks/post-commit 28 | 29 | This hook is used to run the unit-testing suite over the new version of the repository, and generates an email if any of the unit-tests fail for some reason. -------------------------------------------------------------------------------- /Archive/Local/WikiFormatting.md: -------------------------------------------------------------------------------- 1 | # Documentation Style Guidelines 2 | 3 | This document described how to edit wiki pages so that they have a uniform look and feel. This is especially important for longer pages. 4 | 5 | 6 | 7 | 8 | 9 | ## Article structure and contents 10 | ---- 11 | 12 | * Begin the page with a heading, enclosed with: ```= =``` 13 | * The immediate next paragraph of the document should contain a purpose summary for the document 14 | * Immediately following the purpose summary, insert the following snippet: 15 | ``` 16 | 17 | ``` 18 | * All section titles in the rest of the document must use nested heading tags: ```== ==```, ```=== ===```, and so on. 19 | * Each section title must be **preceded** by a line break ``` 20 | ``` and immediately followed by a horizontal line ```----```. Here's an example: 21 | ``` 22 | 23 | 24 | ## Sending pings between a group of computers 25 | ---- 26 | ``` 27 | 28 | * Python/Repy code and Seash terminal interaction examples must be enclosed with: ``````python `````` or ``````repy ``````. Here's an example: 29 | ``` 30 | ```python 31 | print "hello world" 32 | ``` 33 | ``` 34 | 35 | * For adding HTML code the ```#!html``` directive should be used 36 | 37 | * For CSS code, use the ```#C``` directive 38 | 39 | 40 | 41 | == Linking to other articles within the wiki == #links 42 | ---- 43 | 44 | To prevent broken links due to renamed articles, all links to other articles within the wiki should be made explicitly by using the "```wiki:```" prefix. 45 | 46 | For example, do: 47 | 48 | ...use one of these... 
49 | 50 | ``` 51 | [wiki:ProgrammersPage] 52 | [wiki:ProgrammersPage Programmer Portal] 53 | ``` 54 | 55 | **Do not do** 56 | 57 | ``` 58 | BadName 59 | [BadName] 60 | [BadName This is a bad example] 61 | ``` 62 | 63 | 64 | 65 | Because the name of an article may change unexpectedly, the wiki: style is preferred. -------------------------------------------------------------------------------- /Archive/ManagingSprints.md: -------------------------------------------------------------------------------- 1 | # Managing Sprints 2 | 3 | The [milestones.txt file](http://seattle.poly.edu/svn/seattle/trunk/milestones.txt) located in /seattle/trunk/milestones.txt maintains sprint information for Seattle strike forces. 4 | 5 | ## milestones.txt format 6 | 7 | The milestones.txt file is composed of sprint snippets that are separated by newlines and have the following format: 8 | ``` 9 | :sprint 10 | strikeforce 11 | date 12 | name1 task1 13 | name2 task2 14 | ``` 15 | 16 | Where: 17 | * strikeforce is the name of the strike force 18 | * date is in mm/dd/yyyy format 19 | * names are person names of those involved in the sprint 20 | * tasks are short descriptions of the task corresponding to each person 21 | 22 | You can add comments to the file, and possibly comment out old sprints by using the '#' symbol. **Note: right now the parser ignores all lines containing #, not just lines that start with #.** 23 | 24 | 25 | At the end of your development cycle for the sprint, your should pre-append an svn revision number that contains all your committed files to the line with your name. For example, change this line: 26 | ``` 27 | name1 task1 28 | ``` 29 | to this line (if your svn revision is rev1): 30 | ``` 31 | rev1 name1 task1 32 | ``` 33 | 34 | ## Example 35 | 36 | Here is an example of a completed sprint snippet: 37 | ``` 38 | :sprint 39 | 01/01/2009 40 | eye candy 41 | 807 alper 42 | 810 sean 43 | ``` -------------------------------------------------------------------------------- /Archive/MeasureTwiceCutOnce.md: -------------------------------------------------------------------------------- 1 | # Measure Twice, Cut Once 2 | * 3 | # Overview 4 | 5 | 6 | ## Actually Get 10% 7 | 8 | Note: Consider benchmarking the speed at which CPU usage can be sampled. This would enable more precise throttling on faster computers. 9 | 10 | Coding Sprint: Jan 31st 11 | 12 | Platforms Assignments: 13 | 14 | Windows Mobile -- Armon 15 | 16 | Windows Vista / XP -- Brent 17 | 18 | Linux -- Anthony 19 | 20 | Mac / BSD -- Carter 21 | 22 | 23 | Jan 10th - Jan 17th 24 | 25 | All: Correctly measure at least one type of resource on your platform 26 | 27 | 28 | Jan 17th - Jan 24nd 29 | 30 | All: Have tried all resources and have questions about any problems 31 | 32 | 33 | Jan 24th - Jan 29th 34 | 35 | All: Have all resources except NW bandwidth (and optionally one other) working 36 | 37 | Integrate with installer... -------------------------------------------------------------------------------- /Archive/MicroMachines.md: -------------------------------------------------------------------------------- 1 | # Micro-Machines 2 | * Portability to mobile devices 3 | * Location services 4 | * MobileASL project support 5 | 6 | ## Unit Tests on Windows Mobile 7 | 8 | * Unit tests use subprocess so I do a basic check if MobileCE and switched to using launchPythonScript from windows_api 9 | * run_tests.py imports repy_constants and uses that to run unit tests instead of looking in current directory. 
10 | 11 | ## Coding Sprints 12 | * January 31st at 1:00 13 | 14 | ## January 31st Milestone 15 | * Carter: Make sure the Windows installer is portable to Mobile -------------------------------------------------------------------------------- /Archive/PotentialSeattleLibs.md: -------------------------------------------------------------------------------- 1 | # Potential Seattle Services 2 | ---- 3 | 4 | What follows is a variety of cooperative P2P libraries and services that may be implemented on top of Seattle. These are lightweight enough to be supported by nodes running Seattle nodes, and which can provide non-trivial and useful functionality to developers of distributed systems. As long as the node runs Seattle, these services are available locally to the developer 5 | 6 | ## WAN P2P environment 7 | ---- 8 | * Detour routing 9 | A failure detection service that provides higher routing reliability during outages and routing mis-configurations. 10 | 11 | * Unique forms of file transfer 12 | BitTorrent swarming to provide efficient distribution of large files among multiple nodes. 13 | 14 | * Multi-flow transfers 15 | Stream data between hosts over multiple UDP channels to reduce loss rates (ala Skype) 16 | 17 | * Tor routing 18 | Anonymous routing via the Tor network. 19 | 20 | * NAT traversal 21 | UDP and TCP whole punching to access nodes behind middle boxes. 22 | 23 | * Structured Streams 24 | Lightweight TCP tunneling over UDP 25 | 26 | * Peer selection 27 | Select peers based on their network properties, such as latency and throughput using iPlane nano. 28 | 29 | * Hidden service advertisement 30 | Use Tor's hidden service feature to advertise a service without disclosing its location 31 | 32 | * Mobility / Tracking support 33 | Find a host as it changes IP addresses 34 | 35 | 36 | ## LAN/WAN environment 37 | ---- 38 | * Computation 39 | Facilitate large scale distributed computation tasks using MapReduce. 40 | 41 | * Consistency primitives 42 | Take advantage of Paxos, two and three phase commit distributed algorithms to organize nodes and synchronize distributed state machines. 43 | 44 | * Transactions support 45 | Use transactional semantics of BEGIN/ROLLBACK/COMMIT and some number of operations in between. With file logging for durability. 46 | 47 | * IPv6 support 48 | Transparently support IPv6 traffic 49 | 50 | -------------------------------------------------------------------------------- /Archive/ProjectNames.md: -------------------------------------------------------------------------------- 1 | Add suggestions for project names here: 2 | 3 | pebbles - Conveys a large collection of autonomous objects. Also conveys sturdiness/hardiness. Cute. --- Lots of other projects / software with this name 4 | 5 | global mesh 6 | 7 | NetHive - hive reminds me of autonomous ants, and Net conveys the scale of the project --- Google Scholar Name conflicts for Hive. 8 | 9 | Metropolis - the urban agglomeration nature of this type of "city" stands for the cooperation of many smaller entities/peers. 
a metropolis is also "an important hub for regional or international connections and communications" (Yih Sun) --- Google Scholar Name conflicts 10 | 11 | Wampum -- a string of creamy white colored shell beads fashioned from the North Atlantic channeled whelk (Busycotypus canaliculatus) shell, and is traditionally used by Indigenous Americans, First Nations peoples, Native Americans, hobbyists, business people, and traders, who regarded it as a sacred or trade representative of the value of the artist's work. (essentially, it's a sacred weaving of a large number of beads to form a beautiful and valuable object) --- Google Scholar Safe 12 | 13 | Tilikum -- Means friend in Chinook Jargon, usually spelled Tillicum and also meaning "people/tribe" or "kin". Emphasizes a community effort and cooperation. --- Google Scholar Safe 14 | 15 | KarmaLab -- You can earn good Karma through contributions to the Lab. --- Google Scholar Safe 16 | -------------------------------------------------------------------------------- /Archive/ResearchAdvice.md: -------------------------------------------------------------------------------- 1 | ## Advice About Research 2 | 3 | ''This is research advice given by Geremy Condra. I've placed it here to help others benefit from his experiences.'' 4 | 5 | 6 | Just submitted my first conference paper- yay!- and found out a lot of things 7 | along the way. Justin asked me to write down a few of the most important, 8 | and so without further ado, here they are: 9 | 10 | 1. Gathering data is hard- it takes more time than you think it will, and more 11 | will go wrong than you plan for. This felt very strange to me, because the 12 | things I was doing weren't individually hard, and as a result I felt the urge 13 | to automate them, leading me to conclude that... 14 | 15 | 2. Premature automation is the root of all evil- Einstein said "It wouldn't be 16 | called 'research' if we knew what we were doing". Trying to automate a process 17 | you don't fully understand- or which will change as you gain a better 18 | understanding- is a fool's errand. Save yourself the hassle, because... 19 | 20 | 3. Systems are not equations- understanding their behavior is about getting your 21 | hands on real data, not on perfectly understanding their theoretical properties. 22 | By definition you don't understand what impact the tottering pile of hardware 23 | and software underneath you is having on your research, which means that... 24 | 25 | 4. You have to interrogate your data- At the end of the process you and your 26 | data will be good friends. At the beginning of the process you should treat it 27 | like a murder suspect: you don't know what it was doing, why it was doing it, or 28 | if it's lying to you about all of the above, but you're pretty sure it's bad 29 | anyways. I knew this objectively going in, but in retrospect I didn't 30 | act on that 31 | knowledge as well as I needed to, at least in part because I didn't... 32 | 33 | 5. Get the questions right first - I didn't have a good feel for the 34 | questions I 35 | was trying to answer walking into the paper, and so trying to get answers for 36 | the right questions became very confusing (and very time consuming) in a big 37 | hurry. In retrospect, it would have saved everybody a lot of time if I 38 | had tried 39 | to more carefully understand what the reviewers wanted to know before I leaped 40 | into gathering data. 41 | 42 | So, I hope this helps somebody out there to avoid my mistakes. 
Good 43 | luck on your future research, everybody! -------------------------------------------------------------------------------- /Archive/SleightOfHand.md: -------------------------------------------------------------------------------- 1 | # Sleight Of Hand 2 | 3 | * NAT traversal 4 | * IPv6 5 | * DHCP support 6 | 7 | # Overview 8 | 9 | This team focuses on peer-to-peer communication with the server (data multiplexing/demultiplexing and nameserver). Its main goal is to implement the software on Windows Mobile for communicating the nameserver and the other phone as well as the software on the data server for NAT traversal (simply data forwarding in the server). There are two components to the software (server and client) running on the virtual machine available in the handheld device. (Please look at the first attachment.) 10 | 11 | The most challenging part is to make the protocol between phones, data server and nameserver. Implementation will be divided to client on the phone, server on the phone and data server. Below is the detail assignment to the person each. 12 | 13 | * Making the protocol and then its implementation on the phone and the data server 14 | * Data multiplexing/demultiplexing (simply data forwarding) over a channel on the data server 15 | * Find the data server and update the location on the phone 16 | 17 | [[Image(NAT_traversal.jpg)]] 18 | 19 | # Meetings 20 | 21 | Tuesdays 3:30-4:00pm. 22 | 23 | ## Milestone 1 24 | 25 | Working NAT layer. 26 | 27 | Dennis: look up / advertisement for forwarder and server. 28 | 29 | Armon: protocol for multiplexing connections. 30 | 31 | Eric: forwarder logic. 32 | 33 | 34 | 35 | ## Milestone 1 cleanup 36 | 37 | Dennis: Allow connections on different ports, intelligently choose from multiple forwarders, error conditions for current code. 38 | 39 | Armon: Separate out the connection multiplexing code (ask Richard), ??MAC addresses??, error messages should be the same as openconn / waitforconn. 40 | 41 | Eric: Forwarder with separate sockets, limits on the forwarder, sleeps go away. -------------------------------------------------------------------------------- /Archive/Speciesism.md: -------------------------------------------------------------------------------- 1 | # Speciesism 2 | 3 | 4 | 5 | ## Overview 6 | 7 | The primary goal of this work is to ensure that nodes can only send packets to opt-in machines. This is important because nodes could otherwise be used to generate SPAM, participate in BitTorrent swarms with illegal content, or participate in DDOS attacks. There are also several secondary goals: 8 | 9 | * It is important that there is no hidden delay in allowing or denying traffic as this may interfere with measurements or performance of applications on nodes. 10 | * The implementation must be secure against most forms of attack. For example, it should not be possible for a machine to opt-in another machine. 11 | * The system must scale to over one million nodes (the expected size of Seattle). 12 | 13 | Our solution is to use a set of trusted central servers to host a service to manage addresses of machines that have opted in to receiving Seattle traffic. We call this service the Seattle Node Directory Service. The opt-in machines are responsible for registering (and periodically re-registering) to continue receiving Seattle traffic. In order to prevent spoofing, when a machine wishes to register with the Seattle Node Directory Service, it must first do so through TCP. 
In response to a registration the Seattle Node Directory Service generates a random renewal key and sends it to the registering machine. The machine can then use this key to renew its registration via UDP. This is done to improve performance, as TCP is expensive and a TCP-only registration system will not scale adequately. 14 | 15 | Seattle nodes will make decisions about what traffic to allow or deny locally. To do this, nodes cache the relevant parts of the node directory. In most cases, two VMs will communicate only if they share one or more user or owner keys (i.e. if they host VMs that have an overlapping set of users). However, exceptions such as VMs that provide public services may also be handled. 16 | 17 | If a program attempts to send a packet to an address not in the node's cache, the node sends a packet to the directory service asking if the address is valid. If the directory service replies positively, the address is added to the node's cache. If the cache becomes full, the least recently used entry is evicted to make room for the new entry. 18 | 19 | To maintain the node's cache of addresses as current as possible, the Seattle Node Directory Service sends out update packets describing changes to the list of addresses registered for each user key. The update mechanism uses a gossip protocol, with each node forwarding the update packet to a random set of other nodes. For each user key the server will generate a separate update packet containing address changes relevant to just that user key. Update packets may contain multiple changes, and are sent at short intervals (less than 1 minute apart) to ensure that changes are disseminated quickly. 20 | 21 | 22 | ## Feedback 23 | [wiki:Outdated/NatIntegration Integration with NAT Nodes] 24 | 25 | 26 | -------------------------------------------------------------------------------- /Archive/StrikeForce.md: -------------------------------------------------------------------------------- 1 | # Strike Force 2 | 3 | **''strike force (n.) -- an armed force, usually small in size, that is equipped to deliver a strong offensive.**'' 4 | 5 | Rather than dividing tasks by individual, we will set a small team on each task and have them focus their efforts. Project members will belong to one or more teams and will be responsible for helping those teams to meet their goals. There will be weekly coding sprints (Saturdays) where Justin and Ivan will be available to try and push the teams to meet their deadlines. There would likely be ~2 teams that would sprint toward different goals on the same day and Justin and Ivan will work alternatively with them as needed. 6 | 7 | The requirements are: 8 | 1. All the team members must meet for at least 2 hours on whatever day the team decides on among themselves (likely Saturday). Its all or nothing I think. 9 | 10 | 2. We have to meet in the Seattle lab -- if we can't use it for this then its useless to us -- we have to actively improve any deficiencies it might have 11 | 12 | 3. The entire time in the meetings is spent on Seattle work 13 | 14 | 4. 
Teams must keep the MilestonesFile in the svn repository up to date 15 | 16 | 17 | 18 | # Winter'09 teams 19 | ---- 20 | 21 | |**Strike Force**|**Members**|**Purpose**| 22 | 23 | |[wiki:Archive/SleightOfHand Sleight of hand]|Armon, Dennis, Eric|NAT traversal, IPv6, and DHCP support| 24 | 25 | |[wiki:Archive/MicroMachines Micro-machines]|Armon, Carter, Mitchell|Portability to mobile devices, location services, and MobileASL project support| 26 | 27 | |[wiki:Archive/EyeCandy Eye candy]|Sean, Alper|Seattle/GENI website, and web back end improvements| 28 | 29 | |[wiki:Archive/InfectionAndRecurrence Infection and Recurrence]|Brent, Carter, Cosmin, Sal|Deployment onto nodes, black box testing / repeated testing, and software updates| 30 | 31 | |[wiki:Archive/Indoctrination]|Alper, Sal, Eric, Jenn|Educational support, auto-grading, and student support| 32 | 33 | |[wiki:Archive/TopSecret Top Secret]|Michael, Andreas, Richard|TCP over UDP flow control, anonymous donation, and other interactions with Tor project| 34 | 35 | |[wiki:Archive/BotMaster Bot Master]|Brent, Sean|Reading remote logs, "service interface" on nodes, backup software update mechanism| 36 | 37 | |[wiki:Archive/MeasureTwiceCutOnce Measure twice, cut once]|Brent, Armon, Carter, Anthony|Benchmarking service, actually getting 10%, and dynamic resource re-allocation| 38 | 39 | |[wiki:Archive/LipstickOnAPig Lipstick on a pig]|Andreas, Mitchell|Output improvements, exception handling, and seattle lib clean-up| 40 | 41 | |[wiki:2+2= 2 + 2 = ?]|Anthony *****|Better RSA implementation| 42 | 43 | |[wiki:Archive/Lisping]|Alper *****|Porting MapReduce, Hadoop integration, and possible integration with end applications| 44 | 45 | |[wiki:Archive/Speciesism]|Cosmin *****|Optional isolation of traffic to within Seattle nodes, possible replacement for OpenDHT, and simple service composition| 46 | 47 | ***** Not accepting new members -------------------------------------------------------------------------------- /Archive/TOOLS/extract_wiki_contents.py: -------------------------------------------------------------------------------- 1 | """ 2 | Extract wiki contents from a Trac database, and `pickle` parts 3 | of them for later usage. 4 | 5 | Thanks to https://gist.github.com/sgk/1286682 for the code! 6 | """ 7 | import sqlite3 8 | import pickle 9 | 10 | SQL = ''' 11 | select 12 | name, version, time, author, text, comment 13 | from 14 | wiki w 15 | ''' 16 | 17 | conn = sqlite3.connect('trac.db') 18 | result = conn.execute(SQL) 19 | outfile = open("trac_wiki_dump.pickle", "wb") 20 | outlist = [] 21 | for line in result: 22 | outlist.append(line) 23 | 24 | pickle.dump(outlist, outfile) 25 | outfile.close() 26 | 27 | -------------------------------------------------------------------------------- /Archive/TwoPlusTwo.md: -------------------------------------------------------------------------------- 1 | # 2 + 2 = ? 2 | * 3 | # Overview 4 | 5 | 6 | ## RSA implementation 7 | 8 | 9 | Coding Sprint: Feb 14th 10 | 11 | 12 | Jan 10th - Jan 17th 13 | 14 | Build working mod arithmetic for any numbers. 15 | 16 | Jan 17th - Jan 24th 17 | 18 | Learn how to "pack" / "unpack" binary data into numbers. (if we can use openSSH / openSSL keys then it's worth the time to code it!!!) 
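As a reference point for the "pack" / "unpack" task above, the core idea is converting between byte strings and arbitrarily large integers (what PKCS#1 calls OS2IP and I2OSP). A minimal sketch in Python 2 follows; this is only an illustration of the concept, not the interface the RSA port ended up using:

```python
def bytes_to_long(bytestring):
  # Interpret the byte string as one big-endian integer.
  number = 0L
  for char in bytestring:
    number = (number << 8) | ord(char)
  return number

def long_to_bytes(number):
  # Inverse of bytes_to_long() for positive integers.
  chars = []
  while number > 0:
    chars.append(chr(number & 0xff))
    number >>= 8
  chars.reverse()
  return ''.join(chars)

# Round-tripping a string should give the string back.
assert long_to_bytes(bytes_to_long("key material")) == "key material"
```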
19 | 20 | Jan 24th - Jan 31st 21 | 22 | Have primality testing working 23 | 24 | Jan 31st - Feb 7th 25 | 26 | Key generation working 27 | 28 | Feb 7th - Feb 12th 29 | 30 | Wrap up and programmer interface (key conversion) 31 | 32 | ### Mar 17th - Mar 24 33 | 34 | Port of pycrypto RSA to repy is nearly complete. Appropriate interface and documentation is still needed. 35 | 36 | Implement RSA key representation and storage. 37 | Evaluate and begin port of OSRNG and Fortuna PRNG. 38 | Consider PKCS 1 RSAES-OAEP encryption scheme. 39 | 40 | -------------------------------------------------------------------------------- /Archive/Venues.md: -------------------------------------------------------------------------------- 1 | This page lists potential conferences and workshops that might be relevant to the Beraber project. Maintained sorted by Deadline. 2 | 3 | |Genre|Venue|Full Name|Deadline|Location|Event Dates| 4 | 5 | | Python | [Northwest Python Day](http://seapig.org/NorthwestPythonDay) | Northwest Python Day | Jan 15, 2009 | UW | Jan 31st | 6 | | Linux, Open Source | [LinuxFest NW 2009](http://www.linuxfestnorthwest.org/) | LinuxFest Northwest | N/A | Bellingham, WA | Apr 25, 26 | 7 | | Systems | [SOSP](http://www.sigops.org/sosp/sosp09/) | Symposium on Operating Systems Principles | March 2, 2009 | Big Sky, MT | | 8 | | Virtualization | [SIGCOMM VISA Workshop](http://conferences.sigcomm.org/sigcomm/2009/workshops/visa/) | Virtualized Infastructure Systems and Architectures | March 6, 2009 | Barcelona | | 9 | | Mobile Systems | [SIGCOMM MobiHand Workshop](http://conferences.sigcomm.org/sigcomm/2009/workshops/mobihand/) | Networking, Systems, Applications on Mobile Handhelds | March 24, 2009 | Barcelona | | 10 | | Open Source | [Open Source Bridge](http://opensourcebridge.org/) | Open Source Bridge | March 31, 2009 | Portland, OR | June 17 – June 19 | 11 | | Python | [EuroPython 2009](http://www.europython.eu/) | EuroPython | April 5th, 2009 | Birmingham, UK | June 28th - 3rd July, 2009 | 12 | | Mobile Systems | [MobiCASE Conference](http://mobicase.org/) | Mobile Computing, Applications, and Services | May 1, 2009 | San Diego | 13 | | CS Education | [SIGCSE Conference](http://www.cs.arizona.edu/groups/sigcse09/) | Technical Symposium on Computer Science Education | | Chattanooga, TN | | 14 | | CS Education | [SIGCSE DISC Workshop](http://www.cloudera.com/sigcse-2009-disc-workshop) | Workshop on Data Intensive Scalable Computing | | Chattanooga, TN | 15 | | Systems, Databases | [NetDB'09 Workshop](http://netdb09.cis.upenn.edu/) | Fifth International Workshop on Networking Meets Databases | June 1, 2009 | Big Sky, MT with SOSP'09 | October 14, 2009 | 16 | | Systems, Programming Languages | [PLOS'09](http://plosworkshop.org/2009/cfp.shtml) | 5th Workshop on Programming Languages and Operating Systems | June 19, 2009 | Big Sky, MT | October 11, 2009 | -------------------------------------------------------------------------------- /Contributing/Contributors.md: -------------------------------------------------------------------------------- 1 | # Contributors 2 | 3 | 4 | ## Core Software 5 | Repy sandbox – Justin Cappos 6 | * Improved Windows API, CPU Throttling fix – Armon Dadger 7 | * Improved testing – Brent Couvrette 8 | 9 | Node Manager – Justin Cappos 10 | * Improved testing – Cosmin Barsan 11 | 12 | Seash – Justin Cappos 13 | 14 | Software updater – Justin Cappos, Brent Couvrette 15 | 16 | 17 | ## Web interfaces 18 | Seattle website and wiki – Ivan Beschastnikh 19 | 20 | SeattleGENI 
website 21 | * Backend – Ivan Beschastnikh, Justin Cappos 22 | * Frontend – Peter Lipay, Sean Ren, Ivan Beschastnikh 23 | 24 | Installers – Carter Butaud 25 | 26 | Custom Installer Builder 27 | * Version 1 28 | * Backend – Carter Butaud, Ivan Beschastnikh 29 | * Frontend – Sean Ren, Peter Lipay 30 | * Version 2 – Alex Hanson 31 | 32 | 33 | ## Documentation 34 | Repy tutorials – Kyungil Kim, Andreas Sekine, Justin Cappos 35 | 36 | Repy API – Justin Cappos, Jeff Flatten 37 | 38 | 39 | ## Useful add-ons 40 | Testing framework – Brent Couvrette 41 | 42 | 43 | ## External Contributors 44 | Patch to fix race conditions in installer – Jake Appelbaum 45 | 46 | VIM syntax coloring for Repy – João Moreno 47 | -------------------------------------------------------------------------------- /Contributing/README.md: -------------------------------------------------------------------------------- 1 | # Getting Started 2 | 3 | This page has useful information for prospective contributors to the Seattle 4 | Testbed project. 5 | 6 | # Welcome 7 | Thank you for your interest in joining the Seattle Testbed project. To bring 8 | you quickly up to speed to being an active contributor, we ask that you do 9 | the following: 10 | 11 | 1. Start by reading the [Seattle docs](../README.md). This will give you a 12 | better sense of what Seattle is and how its structured, and contains pointers 13 | to the programming documentation as well. The [Contributor's Page](ContributorsPage.md) 14 | has even more reading material and links. 15 | 16 | 2. Join our [mailing list](https://groups.google.com/forum/?hl=en#!forum/seattle-devel)! 17 | This is where you can find help and look for interesting news relating to 18 | Seattle Testbed. 19 | 20 | 3. Pick a few issues from [our repositories](https://github.com/SeattleTestbed) 21 | to work on. You should look for issues with the "Newcomer" label. Before you 22 | start working on an issue, reassign it to your GitHub account so that other 23 | newcomers don't end up duplicating your work. Comment on any findings or 24 | insights that you may have, and also post any concerns or questions that arise. 25 | 26 | 4. When you are done with your issue, make sure your code adheres to our 27 | [code style guidelines](WebCodingStyle.md). Have someone else on 28 | the team review your code. When they give you the OK, send a pull request. 29 | If your code is good, someone will merge in your changes. 30 | 31 | 5. After completing the above, contact the mailing list so that you can join 32 | some of the regular weekly meetings. You should talk to some of the other 33 | contributors to help you find the projects that interest you the most. 34 | 35 | 36 | == Stuck? == 37 | We are willing to provide you with assistance, but you must first demonstrate 38 | to us that you have at least attempted to solve the problem. Questions should 39 | be concise and to-the-point. Take a look at Eric Raymond's 40 | ["How To Ask Questions The Smart Way"](http://www.catb.org/esr/faqs/smart-questions.html). 41 | Read this before asking questions. 42 | 43 | If you are still stuck, try asking on our Google group. Feel free to post 44 | a new topic if your question isn't already addressed in an existing topic 45 | (but follow the rules above about asking questions the smart way). You can 46 | also stop by Justin's lab at RH 221 to interact with other students working on 47 | Seattle and its related projects. 
48 | -------------------------------------------------------------------------------- /EducationalAssignments/ChatServer.md: -------------------------------------------------------------------------------- 1 | # Chat Server Assignment 2 | ---- 3 | This assignment focuses on designing and implementing a chat service called Seattlechat. Seattlechat has three main components: a central Seattlechat server whose focus is to relay messages, a collection of Seattlechat translators that change messages into different formats for display, and a Seattlechat client which will use a standard web browser for communicating with a user. Each Seattlechat client has its own translator, but many translators can connect to the same Seattlechat server. 4 | 5 | ---- 6 | 7 | ---- 8 | 9 | 10 | 11 | ## Step 1 : Seattlechat Server 12 | ---- 13 | 14 | Your Seattlechat server should accept connections on your GENI port using the waitforconn() call. Seattlechat is a multiway text conferencing system, and so the server must be able to accept and manage multiple connections simultaneously. Once two or more sources are connected, all bytes sent to the server are "relayed" to all other computers listening to the service. The server should separate text by source by sending it line by line (lines end with '\n'), 15 | labeling each line with the name of the client who sourced that particular message. 16 | 17 | 18 | 19 | ## Step 2 : Seattlechat Translator 20 | ---- 21 | 22 | The Seattlechat translator connects to a Seattlechat server and translates messages for the client. In all cases, the Seattlechat translator provides a web page that has the chat output and accepts incoming chat messages (in the form of HTTP POST messages). There are three translators you will need to implement: 23 | 24 | * normal: This translator does not change incoming or outgoing data. 25 | 26 | * reverse: This translator reverses the order of all incoming and outgoing data streams (hint: data[::-1] is the reverse of data). Names and other data should not be reversed. Note that input typed by a user behind the reverse translator will be backwards! 27 | 28 | * Pig Latin: This translator changes words to and from [Pig Latin](http://en.wikipedia.org/wiki/Pig_Latin). Your Pig Latin translator should use the hyphenated form and be able to reverse-translate data from the client which is in hyphenated form. 29 | 30 | 31 | 32 | 33 | ## Step 3 : Chat web-browser client 34 | ---- 35 | In this step, you will build a chat client which will use a web browser to communicate with the user. The client must interact with a translator instead of contacting the server directly. 36 | 37 | 38 | 39 | ## Strategy 40 | ---- 41 | 42 | To test your code, we suggest that you first build the server and test it using a simple program that opens connections and sends strings. Following this, build the normal translator using the webserver from the previous assignment to pass messages and display output. After this works, add the reverse and Pig Latin translators. -------------------------------------------------------------------------------- /EducationalAssignments/SecureTuringCompleteSandboxAttack.md: -------------------------------------------------------------------------------- 1 | 2 | # Secure Turing Complete Sandbox Challenge -- Attacking Sandboxes 3 | 4 | The second part of the Secure Turing Complete Sandbox Challenge is to try to break the security of candidate sandboxes.
While not all bugs may be exploitable, it is worth documenting any problems that may arise in case another attacker can use them to find a way to escape sandbox containment. 5 | 6 | 7 | 8 | 9 | 10 | 11 | ## Acquiring Sandboxes 12 | ---- 13 | 14 | There should be a series of publicly available sandboxes including the ones attached to this wiki page. Other links will be provided here to those sandboxes. 15 | 16 | 17 | 18 | 19 | ## Hint: Look For Corner Cases 20 | ---- 21 | 22 | Many flaws deal with incorrect handling of corner cases. Try to find any situations where the behavior of the sandbox is not well specified. Any bugs you find (whether exploitable or not) should be noted. 23 | 24 | 25 | 26 | 27 | ## What To Turn In 28 | ---- 29 | 30 | You must turn in the URL and SHA1 hash of the head of your GitHub project. The project must contain the following: 31 | 32 | 1. For each sandbox you were asked to analyze, explain at a high level what type of technique they used. This should be 1-2 sentences. For example: The easytocode.py sandbox is written in Python. It filters input programs (written in a subset of Python) for disallowed strings, replaces the module / built-ins namespace, and execs the code. Please provide this in PDF format. 33 | 1. Any test files that crash the sandbox in unexpected ways. The test program must have documentation explaining how the crash is generated. Please turn in a separate file per unique class of bug found. (In other words, if a program crashes when given negative numbers or floating point numbers, turn in a test case with -1 and one with 1.0. Don't turn in separate cases for -1, -2, -3, ...) If the source code format does not allow for comments, please add these in a separate file called README.testname. 34 | 1. Any test files that allow escape of the sandbox, including performing actions that were not allowed. As before, please turn in separate files per unique class of bug and also explain how exploits are performed. 35 | 1. A summary file (called summary.pdf) that contains an overview of the contents of the submission. This should say which tests go with each sandbox and also whether the tests demonstrate a crash or sandbox escape. 36 | 37 | -------------------------------------------------------------------------------- /Grants.md: -------------------------------------------------------------------------------- 1 | # Grants and Endowment 2 | 3 | This material is based upon work supported by the National Science Foundation 4 | under Grant No.s (0834243, 1205415, and 1223588). Any opinions, findings, and 5 | conclusions or recommendations expressed in this material are those of the 6 | author(s) and do not necessarily reflect the views of GPO Technologies, Corp., 7 | the GENI Project Office, or the National Science Foundation. 
8 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 NYU Tandon School of Engineering 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /Operating/BaseInstallers.md: -------------------------------------------------------------------------------- 1 | # Building Base Installers 2 | 3 | The base installer building and packaging information for RepyV2 is available 4 | from the [`installer-packaging` repo's README](https://github.com/SeattleTestbed/installer-packaging/blob/master/README.md) 5 | and the [`custominstallerbuilder` setup docs](CustomInstallerBuilder/Installation.md#building-base-installers). 6 | -------------------------------------------------------------------------------- /Operating/BuildDemokit.md: -------------------------------------------------------------------------------- 1 | The demokit packages up a set of tools for Seattle users that allow you to 2 | * Access VMs on remote machines, for example such assigned to you by the [Seattle Clearinghouse](https://seattleclearinghouse.poly.edu) 3 | * Run RepyV2 code on your local machine, using a supplied restrictions file, and thusly 4 | * Develop and debug Repy programs locally. 5 | 6 | 7 | # Building the Demokit 8 | 9 | Building the demokit is as simple as 10 | * Cloning the [demokit repo on GitHub](https://github.com/SeattleTestbed/demokit), 11 | * Running `scripts/initialize.py` inside of it, and lastly 12 | * Creating an empty target directory, and lastly 13 | * Changing directory into `scripts/dist` in order to run `python build.py` with the full path to the target dir as an argument. 14 | 15 | See the wiki:BuildInstructions for details about using Seattle's build scripts. 16 | 17 | ----- 18 | 19 | Note for developers: The current demokit build scripts do not yet 20 | * Copy over `seash` modules to the target directory, nor 21 | * Create a tarball out of the target dir. Neither will it 22 | * Supply a copy of demo apps to run (as these are currently being ported to RepyV2). 
23 | -------------------------------------------------------------------------------- /Operating/Clearinghouse/DatabaseSetup.md: -------------------------------------------------------------------------------- 1 | # Seattle Clearinghouse Database Service 2 | 3 | Seattle Clearinghouse uses the MySQL database, and InnoDB table types. 4 | 5 | * For info on installing/configuring the database, creating the necessary tables, etc, see [wiki:Archive/SeattleGeniInstallation]. 6 | 7 | ---- 8 | 9 | ---- 10 | 11 | ## Configuration 12 | ---- 13 | 14 | The configuration file for MySQL is /etc/mysql/my.cnf 15 | 16 | For example, this file defines the directory where MySQL tables live on disk: /var/lib/mysql 17 | 18 | 19 | 20 | ## Starting/Stopping/Restarting MySQL 21 | ---- 22 | 23 | Log into seattleclearinghouse.poly.edu as root or run the following commands as sudo to start/stop/restart mysql: 24 | 25 | ``` 26 | $ /etc/init.d/mysql start 27 | $ /etc/init.d/mysql stop 28 | $ /etc/init.d/mysql restart 29 | ``` 30 | 31 | 32 | 33 | ## Making sure that MySQL is running 34 | ---- 35 | 36 | To test if MySQL is running, run the following command and make sure that you can see /usr/sbin/mysqld in the process list: 37 | 38 | {{{ 39 | $ ps auwx | grep mysqld | grep -v grep -------------------------------------------------------------------------------- /Operating/Clearinghouse/Overview.md: -------------------------------------------------------------------------------- 1 | # How SeattleGENI Works 2 | 3 | This page describes how the clearinghouse (SeattleGENI) is constructed using the node manager API. It heavily utilizes the [wiki:UnderstandingSeattle/NodeManagerDesign#NodeManagerInterface Node Manager API], so it is important to understand this document before reading further. 4 | 5 | We first describe how the specific tasks are performed by SeattleGENI and then describe the key use overall. 6 | 7 | ## Acquiring and releasing resources 8 | 9 | The basic idea behind SeattleGENI controlling resources is that it possesses the owner private key for all donated resources. When it provides resources to a user, it adds their user key to the VM. When a user releases resources, their key is removed from the user list. A ResetVessel operation is also performed on the VM to clear any state. 10 | 11 | ## Finding donations 12 | 13 | The SeattleGENI server needs to be able to find new donations. To do this, the installer that SeattleGENI provides on behalf of the user has a special user key inside that indicates the node is in the 'donation state'. This user key has no corresponding private key and is only used by SeattleGENI to look up new donations (i.e. nodes in the 'donation state'). 14 | 15 | ## Attributing donations to the correct user 16 | 17 | Each donation's VM has a donation key that is unique to the donating user. When the SeattleGENI site configures the VM, it creates a new, per node owner key and creates a database entry linking the node ID and donating user. 18 | 19 | ## Finding nodes 20 | 21 | After a node is correctly configured a VM on the node is given a special user key that corresponds to the 'ready state'. SeattleGENI looks up this key and then in a local database marks any nodes that it can contact as ready (which means the VMs are eligible to be acquired by interested users). 
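To make the preceding sections more concrete, here is a rough sketch of how a clearinghouse-style script might drive these steps through the nmclient.repy library. The call names follow the node manager client library as used by seash, but signatures, key formats, and error handling are simplified, and the helpers (acquire_vessel, release_vessel, ownerkeys) are placeholders rather than the actual clearinghouse code.

```
# Sketch only: placeholder helpers and simplified key handling; see the
# Node Manager API documentation for the authoritative interface.

include nmclient.repy

def acquire_vessel(nodeip, nodeport, vesselname, ownerkeys, userpubkeystring):
  # Open a handle to the node manager on the target node.
  nmhandle = nmclient_createhandle(nodeip, nodeport)

  # The clearinghouse authenticates with the per-node owner key it generated
  # when the donation was processed.
  handleinfo = nmclient_get_handle_info(nmhandle)
  handleinfo['publickey'] = ownerkeys['publickey']
  handleinfo['privatekey'] = ownerkeys['privatekey']
  nmclient_set_handle_info(nmhandle, handleinfo)

  # Granting access simply adds the requesting user's public key to the VM.
  nmclient_signedsay(nmhandle, "ChangeUsers", vesselname, userpubkeystring)
  return nmhandle

def release_vessel(nmhandle, vesselname):
  # Releasing removes the user keys and clears any state left in the VM.
  nmclient_signedsay(nmhandle, "ChangeUsers", vesselname, "")
  nmclient_signedsay(nmhandle, "ResetVessel", vesselname)
```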
22 | 23 | 24 | ## Overall 25 | 26 | When an installer is downloaded from SeattleGENI, the owner key is a private key that is specific to the donating user (but is different than the user's private key they use to access resources). The user key is the special 'donation state' key. The SeattleGENI server runs a script that looks for nodes in the 'donation state' and then changes the VM owner to be a newly generated, per-node key, while linking the node ID and donating user in a local database. SeattleGENI then splits the VM into pieces of the correct size and changes the user key on the leftover resources to be the 'ready state' key. Another script finds nodes in the 'ready state' and then marks them as available in a local database (incrementing the number of credits for the donating user). When a user clicks to acquire resources on a node, SeattleGENI looks up the VMs owner's private key, and then changes the user of the VM to include the requesting user. When a user releases a vessel (or it expires), the user's key is removed from the vessel and the VM is reset. -------------------------------------------------------------------------------- /Operating/Clearinghouse/SocialAuth.md: -------------------------------------------------------------------------------- 1 | # Using Social Auth for Clearinghouse 2 | In addition to its own user management, Clearinghouse can optionally handle OpenID and Oauth. This page describes how to set that up. 3 | 4 | ---- 5 | 6 | ## Setup OpenID and OAuth 7 | For OpenID and OAuth, Clearinghouse requires [Django Social Auth](https://github.com/omab/django-social-auth). This application depends on: 8 | * [python-openid](http://pypi.python.org/pypi/python-openid/) 9 | * [python-oauth2](https://github.com/simplegeo/python-oauth2/) 10 | 11 | Using something like `easy_install` will install these for you. 12 | ```sh 13 | $ easy_install django-social-auth 14 | ``` 15 | 16 | 17 | By default Gmail and Yahoo login are enabled. If desired Windows Live, Github and Facebook login can be enabled with some additional steps. 
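Independent of the per-provider keys described below, the social auth backends themselves are listed in settings.py. The following is only a sketch: the exact backend module paths depend on the django-social-auth version you installed, so verify them against its documentation before copying.

```python
# Illustrative settings.py fragment; the backend module paths are assumptions
# and may differ between django-social-auth releases.
INSTALLED_APPS += (
    'social_auth',
)

AUTHENTICATION_BACKENDS = (
    'social_auth.backends.google.GoogleBackend',           # Gmail (OpenID), on by default
    'social_auth.backends.yahoo.YahooBackend',             # Yahoo (OpenID), on by default
    'social_auth.backends.facebook.FacebookBackend',       # optional, see below
    'social_auth.backends.contrib.github.GithubBackend',   # optional, see below
    'social_auth.backends.contrib.live.LiveBackend',       # optional, see below
    'django.contrib.auth.backends.ModelBackend',           # keep normal Clearinghouse logins
)
```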
18 | 19 | **Facebook** 20 | * Register a new application at [Facebook App Creation](http://developers.facebook.com/setup/) 21 | * set App Domains in Facebook edit App page 22 | ``` 23 | yoursite.com 24 | ``` 25 | * click the Website with Facebook Login checkmark and set site URL 26 | ``` 27 | https://yoursite.com 28 | ``` 29 | * Uncomment and fill out ```App ID``` and ```App Secret``` values in settings.py 30 | ```python 31 | FACEBOOK_APP_ID = ' your appid' 32 | FACEBOOK_API_SECRET = ' your api secret key' 33 | ``` 34 | 35 | 36 | **Windows Live** 37 | * Register a new application at [Live Connect Developer Center](https://manage.dev.live.com/Applications/Index) 38 | * Set redirect domain 39 | ``` 40 | https://yoursite.com 41 | ``` 42 | * Uncomment and fill out ```LIVE_CLIENT_ID``` and ```LIVE_CLIENT_SECRET``` values in settings.py 43 | ```python 44 | LIVE_CLIENT_ID = ' your appid' 45 | LIVE_CLIENT_SECRET = ' your api secret key' 46 | ``` 47 | 48 | 49 | 50 | **Github** 51 | * Register a new application at [Live GitHub Developers](https://github.com/settings/applications/new) 52 | * Set URL and callback URL 53 | ``` 54 | https://yoursite.com 55 | ``` 56 | * Uncomment and fill out ```GITHUB_APP_ID``` and ```GITHUB_API_SECRET``` values in settings.py 57 | ```python 58 | GITHUB_APP_ID = ' your appid' 59 | GITHUB_API_SECRET = ' your api secret key' 60 | ``` 61 | 62 | 63 | 64 | ## Updating an existing Clearinghouse 65 | 66 | If you already have a working copy of the Clearinghouse and you are updating to allow OpenID and OAuth support you will need to add the Django Social Auth db tables. This is done automatically by: 67 | ```sh 68 | $ python website/manage.py syncdb 69 | ``` 70 | -------------------------------------------------------------------------------- /Operating/Clearinghouse/StartupScripts.md: -------------------------------------------------------------------------------- 1 | # Seattle Clearinghouse Start up Scripts 2 | This page describes which scripts need to automatically start on system boot and how to set them up. 3 | 4 | 5 | 6 | 7 | ## Overview 8 | These scripts must be running **at all times** on both the production and beta Clearinghouses. They should be started as user root in a **screen** by running 9 | seattlegeni 10 | deploymentscripts 11 | start_seattlegeni_components.sh which will automatically start the needed components. 12 | * lockserver_daemon.py 13 | * backend_daemon.py 14 | * check_active_db_nodes.py 15 | * transition_donation_to_canonical 16 | * transition_canonical_to_twopercent 17 | * transition_twopercent_to_twopercent 18 | * transition_onepercentmanyevents_to_canonical 19 | 20 | Currently we handle automatically starting these scripts on system boot with a cron job under root. The following is the cron job for the beta Clearinghouse. 21 | ```@reboot screen -S betaseattleclearinghouse -d -m /home/geni/start_seattlegeni_components.sh``` 22 | This can be set by 23 | ```$ sudo crontab -e``` 24 | and entering the following line in the text editor. Be sure to enter the proper values for ```clearinghouse_username``` and ```/path/to/start_seattlegeni_components.sh```. 
25 | ```@reboot screen -S clearinghouse_username -d -m /path/to/start_seattlegeni_components.sh``` 26 | -------------------------------------------------------------------------------- /Operating/Clearinghouse/XMLRPCServer.md: -------------------------------------------------------------------------------- 1 | # Production Seattle Clearinghouse XMLRPC Service 2 | 3 | The Seattle Clearinghouse XMLRPC service is available at https://seattleclearinghouse.poly.edu/xmlrpc 4 | 5 | ## Setup Notes 6 | 7 | The plain-http server runs locally on port 9001. 8 | 9 | It is made available through the secure url with the addition of the following to SSL VirtualHost in /etc/apache2/mods-enabled/000-default 10 | 11 | ``` 12 | 13 | RewriteEngine on 14 | RewriteRule .* http://localhost:9001/ [P,L] 15 | 16 | ``` 17 | 18 | Note that for the proxy to work, the apache modules ```proxy``` and ```proxy_http``` had to be enabled, which on this debian-based system was done like this: 19 | 20 | ``` 21 | sudo a2enmod proxy 22 | sudo a2enmod proxy_http 23 | ``` 24 | -------------------------------------------------------------------------------- /Operating/CustomInstallerBuilder/Usage.md: -------------------------------------------------------------------------------- 1 | # Using the Custom Installer Builder 2 | 3 | Normally, installers created through Seattle Clearinghouse subdivide a donor's resources into two VMs. As always, there is a small VM (20%) reserved for Seattle itself. The main VM (80%) is owned by the experiment planner, and a special key is entered for the VM user. This special key allows anybody to download a customized installer (provided through Seattle Clearinghouse) and donate their computing resources to the planner's experiment. 4 | 5 | If the experiment planner wants to further subdivide the VMs, he can use the Custom Installer Builder (e.g. [Seattle's](https://custombuilder.poly.edu/custom_install/) or [Sensibility Testbed's](https://sensibilityclearinghouse.poly.edu/custominstallerbuilder/)). This tool allows the planner to create several smaller VMs in place of the larger main VM. Each VM can be assigned a particular owner and a number of users. Because working with cryptographic keys is awkward, the Custom Installer Builder can accept public keys from files, or generate new key pairs entirely. 6 | 7 | On the backend, there is also an [XML-RPC interface](API.md) to create customized installers. (In fact, this is how Seattle Clearinghouse generates the default installers it provides.) 8 | 9 | 10 | 11 | # A Tour of the Web Interface 12 | ![cib.png](../../ATTACHMENTS/CustomInstallerBuilder/cib-update.png) 13 | 14 | Start by opening [Seattle's](https://custombuilder.poly.edu/custom_install/) or [Sensibility Testbed's](https://sensibilityclearinghouse.poly.edu/custominstallerbuilder/) Custom Installer Builder in your web browser. 15 | 16 | ## Step 1 : Build Installers 17 | First, **create users** in the top half of the page. You can upload a public key for existing users, or allow the Custom Installer Builder to generate cryptographic keys for you. 18 | 19 | Next, **create your VMs** by splitting the main VM as you see fit. VMs can be split by clicking one of the "+" icons along the top edge. You can delete a created VM by clicking the "×" icon in its upper-right corner. 20 | 21 | Finally, **configure your VMs** by dragging users from the top half of the page into the VMs created in the previous step. Each VM must have a single owner, but may have any number of users. 
When you are ready, **press the "Build" button** to move to the next step. 22 | 23 | ## Step 2 : Download keys 24 | Before you can download your installers, you must download the cryptographic keys for the users created in the previous step. These keys are not stored on our server after your browsing session expires. 25 | 26 | ## Step 3: Download installers 27 | After you have downloaded the cryptographic keys, you are ready to download your installers! A link is provided to share these installers with others. 28 | -------------------------------------------------------------------------------- /Outdated/BundlingSeattle.md: -------------------------------------------------------------------------------- 1 | Suppose that an application that you are developing requires some calculation intensive task that you'd like to run on the Seattle network. In this case, you can bundle (i.e. include) Seattle alongside your application and deploy it on user machines. This is trivial to do on Windows, OSX and Linux machines. 2 | 3 | 4 | 5 | == Requirements == 6 | ---- 7 | In order for Seattle to function, you need to make sure that Python 2.5 or Python 2.6 is installed on your target system. The Windows installer has Python 2.5 bundled alongside with it, so manual installation of Python is not needed. 8 | 9 | 10 | 11 | == Bundling Seattle == 12 | ---- 13 | 14 | Download the [Seattle installer](https://seattleclearinghouse.poly.edu/html/getdonations) for your target operating system under your Clearinghouse account. This will provide your Clearinghouse account with donation credits for each installation. If you do not use installers from your account, then your Clearinghouse account will not be credited with these donations. 15 | 16 | You then take the installer from the previous step and include it alongside your application's installer. This step depends largely on your application's installer, refer to your installer's manual for instructions. From this point onwards, you follow a standard Seattle installation procedure. Seattle will automatically start itself when the user's machine reboots. 17 | 18 | To monitor if Seattle is running, you can check for ```python nmmain.py``` and ```python softwareupdater.py``` via ```ps``` on Unix-like machines. nmmain.py is the node manager that interacts with Seattle and also manages the VMs that will run on your target system. softwareupdater.py will periodically check for the latest version of Seattle, and update your target systems as necessary. 19 | -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions.md: -------------------------------------------------------------------------------- 1 | # Future Repy Exception Hierarchy 2 | This is the planned exception hierarchy for RepyV2, currently in development. 
3 | 4 | * [wiki:FutureRepyExceptions/InternalRepyError InternalRepyError] 5 | * [wiki:FutureRepyExceptions/RepyError RepyError] 6 | * [wiki:FutureRepyExceptions/NetworkError NetworkError] 7 | * [wiki:FutureRepyExceptions/PortInUseError PortInUseError] 8 | * [wiki:FutureRepyExceptions/InternetConnectivityError InternetConnectivityError] 9 | * [wiki:FutureRepyExceptions/AddressBindingError AddressBindingError] 10 | * [wiki:FutureRepyExceptions/ConnectionRefusedError ConnectionRefusedError] 11 | * [wiki:FutureRepyExceptions/LocalIPChanged LocalIPChanged] 12 | * [wiki:FutureRepyExceptions/SocketClosedLocal SocketClosedLocal] 13 | * [wiki:FutureRepyExceptions/SocketClosedRemote SocketClosedRemote] 14 | * [wiki:FutureRepyExceptions/SocketWouldBlockError SocketWouldBlockError] 15 | * [wiki:FutureRepyExceptions/RestrictionError RestrictionError] 16 | * [wiki:FutureRepyExceptions/PortRestrictedError PortRestrictedError] 17 | * [wiki:FutureRepyExceptions/CodeUnsafeError CodeUnsafeError] 18 | * [wiki:FutureRepyExceptions/ContextUnsafeError ContextUnsafeError] 19 | * [wiki:FutureRepyExceptions/TimeoutError TimeoutError] 20 | * [wiki:FutureRepyExceptions/FileError FileError] 21 | * [wiki:FutureRepyExceptions/FileNotFoundError FileNotFoundError] 22 | * [wiki:FutureRepyExceptions/SeekPastEndOfFileError SeekPastEndOfFileError] 23 | * [wiki:FutureRepyExceptions/FileInUseError FileInUseError] 24 | * [wiki:FutureRepyExceptions/LockDoubleReleaseError LockDoubleReleaseError] 25 | * [wiki:FutureRepyExceptions/RepyArgumentError RepyArgumentError] 26 | * [wiki:FutureRepyExceptions/ResourceUsageError ResourceUsageError] 27 | * [wiki:FutureRepyExceptions/ResourceExhaustedError ResourceExhaustedError] 28 | * [wiki:FutureRepyExceptions/ResourceForbiddenError ResourceForbiddenError] -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/AddressBindingError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, AddressBindingError is an error that occurs when a low level bind() call fails. It is distinguished from PortInUseException in that the cause isn't something else using the service (perhaps a low-numbered (reserved) port?). 2 | 3 | Conrad: I'm not sure this distinction is valuable; perhaps removing PortInUseException is a good idea. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/CodeunsafeError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, CodeUnsafeError is raised when `createvirtualnamespace()` is called and the code is deemed unsafe after static analysis. It is also raised if any of the run-time safety checks fail during `virtualnamespace.evaluate()`. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/ConnectionRefusedError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, ConnectionRefusedError is thrown when the remote host rejects a TCP connection. 2 | 3 | Conrad: I believe UDP has something similar (icmp-reject-something), and if so, it's included in this exception. 
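As an illustration, a RepyV2-style caller might distinguish this case from a general connectivity failure like so (a sketch using the planned exception names from this hierarchy, not final API documentation):

```
def try_to_reach(destip, destport, localip, localport):
  try:
    return openconnection(destip, destport, localip, localport, 10)
  except ConnectionRefusedError:
    # The host was reachable, but nothing is listening on destport.
    log("connection refused by " + destip + ":" + str(destport) + "\n")
  except InternetConnectivityError:
    # No route to the host at all (e.g. the local machine lost its uplink).
    log("no route to " + destip + "\n")
  return None
```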
-------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/FileError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, FileError encompasses the collection of errors related to files in repy. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/FileInUseError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, FileInUseError is raised when a repy program attempts to call `removefile()` or `openfile()` on a file that is already open. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/FileNotFoundError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, FileNotFoundError is raised when a repy program attempts to call `openfile()` with `create` set to `False`, and the file does not exist. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/InternetConnectivityError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, InternetConnectivityError is an exception raised when the repy program tries to connect to a remote tcp ip/port or send a datagram to a remote udp ip/port, and there is no route to that ip (for example, if the user's laptop gets disconnected from an access point because they are on a moving bus). -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/LocalIPChanged.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, [wiki:FutureRepyExceptions/LocalIPChanged] occurs when the repy program calls `getconnection()` or `getmessage()`. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/LockDoubleReleaseError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, LockDoubleReleaseError is raised when `.release()` is called on a lock object that is already unlocked. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/NetworkError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, NetworkError is a category of exceptions raised when problems related to internet connectivity failures: when a remote host is no longer available, when a socket is closed (on either end), when the local host is disconnected from the internet, or changes IPs. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/PortInUseError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, PortInUseError is a network exception raised when another program is using a port that a repy program wants to bind to. It is distinguished from AddressBindingError in that it is expected to happen and user programs should handle it. 2 | 3 | Conrad: I'm not sure if AddressBindingError should be a subclass of this, or vice versa, or if they should remain distinct. 
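Since this error is expected and user programs should handle it, a typical pattern is to fall back to another allowed port. A sketch, using the planned exception name from this page with a RepyV2-style listen call:

```
def listen_on_first_free_port(localip, candidate_ports):
  for port in candidate_ports:
    try:
      return listenforconnection(localip, port)
    except PortInUseError:
      # Something else (perhaps another repy program) already owns this port.
      continue
  raise Exception("all candidate ports are in use")
```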
-------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/PortRestrictedError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, PortRestrictedError occurs when repy programs attempt to bind to ports that are restricted. This can be because the local port is invalid when connecting to a remote host; because the remote port is invalid when connecting to a remote host; or because the local port is invalid when listening for incoming connections. (In the previous, "connecting" can be replaced with "sending messages" and "connection" with "messages" for UDP datagrams.) 2 | 3 | Conrad: We don't care about the remote port when they are connecting to us, right? -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/RepyArgumentError.md: -------------------------------------------------------------------------------- 1 | RepyArgumentError is an exception meaning to convey similar meaning to Python's TypeError / ValueError. That is, there is something wrong with how the calling code is using this function or method, and it is not a fault in the function or method itself. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/RepyError.md: -------------------------------------------------------------------------------- 1 | RepyError is a parent exception for all exceptions raised by the API. It should **not** be invoked directly, but instead subclassed first. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/RestrictionError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, RestrictionError occurs when a repy program calls a function or method that is either restricted, or has restricted arguments. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/SeekPastEndOfFileError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, SeekPastEndOfFileError is raised when the `offset` parameter to `readat()` or `writeat()` is beyond the end of the file. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/SocketClosedLocal.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, SocketClosedLocal occurs when a socket is closed, and a user tries calling: 2 | 3 | * `socket.recv()` when there is no more data to be read 4 | * or `socket.send()` -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/SocketClosedRemote.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, SocketClosedRemote is identical to SocketClosedLocal, except that it was the remote end who closed the socket. 
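For example, a receive loop might treat the two close cases differently (again a sketch built on the planned exception names from these pages):

```
def read_until_remote_close(sock):
  data = ""
  while True:
    try:
      data = data + sock.recv(4096)
    except SocketWouldBlockError:
      sleep(0.1)        # nothing to read yet; poll again shortly
    except SocketClosedRemote:
      return data       # the peer is done sending; hand back what we received
    except SocketClosedLocal:
      # our own code closed the socket; treat this as a programming error
      raise
```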
-------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/SocketWouldBlockError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, SocketWouldBlockError is raised when a socket operation -- namely, `recv()`, `acceptconnection()`, or the UDP equivalent of `recv()` -- would block. This signals to the repy program that no data is available for reading at this time; instead the program should continue to poll until data is available, or do something else in the meantime. -------------------------------------------------------------------------------- /Outdated/FutureRepyExceptions/TimeoutError.md: -------------------------------------------------------------------------------- 1 | In the FutureRepyExceptionHierarchy, there are many functions and methods that support (optionally) the ability to terminate themselves when continuing would mean exceeding some specified maximum run-time. In these cases, TimeoutError is raised. 2 | 3 | Some of these include: 4 | * `openDHTadvertise_announce()` 5 | * `openDHTadvertise_lookup()` 6 | * `DORadvertise_announce()` 7 | * `DORadvertise_lookup()` 8 | * `centralized_announce()` 9 | * `centralized_lookup()` 10 | * many others... -------------------------------------------------------------------------------- /Outdated/GettingStartedWithAffix.md: -------------------------------------------------------------------------------- 1 | AR: My vision of how a Getting Started guide for Affix should look like. Improvements welcome (it's a wiki)! 2 | ---- 3 | Here's a rough outline: 4 | * Application developer's perspective 5 | * Using a specific Affix stack, e.g. for testing 6 | * Using Coordination, the preferred way 7 | * Affix developer's perspective 8 | * Writing a bare-minimum Affix, with examples for both UDP and TCP 9 | * Writing a full-fledged advertising etc. Affix 10 | * Bag of tricks: keeping/sharing state between flows, selective inheritance from lower and to higher stack layers, ... -------------------------------------------------------------------------------- /Outdated/RunningSecLayerBenchmarks.md: -------------------------------------------------------------------------------- 1 | # Running the Security Layer Benchmarks 2 | 3 | ## Setup 4 | First it is necessary to setup a folder to run the benchmarks. This can be done like so: 5 | 6 | ``` 7 | cd $SEATTLE/branches/repy_v2/ 8 | mkdir bench/ 9 | python preparetest.py bench 10 | cp benchmarking-support/* bench/ 11 | ``` 12 | 13 | ## Running the Benchmarks 14 | Each type of benchmark has it's own script to initiate the benchmarks. 15 | * To run the basic overhead tests, invoke ./benchmark.sh 16 | * To run the allpairsping test, invoke ./benchmark-allpairs.sh 17 | * To run the richards test, invoke ./benchmark-richards.sh 18 | * To run the webserver tests, run ./benchmark-webserver.sh and then ./benchmark-webserver-meg.sh 19 | * To run the blocking storage server tests, run ./benchmark-blockstore.py 20 | 21 | Each benchmark file has some configurable settings that can be edited. For example, the number and type of security layers to benchmark with may be changed. This is done by changing the constants in the bash files. 22 | 23 | All benchmarks can be found at: [browser:seattle/branches/repy_v2/benchmarking-support/ benchmarking-support]. 
24 | 25 | ## Instructions for benchmark-blockstore.py 26 | Change the arguments of blockstore.py to the prefix of your public/private keys and a valid port number (e.g, 12345). 27 | For example, if your keys are my_name.publickey and my_name.privatekey, the first argument would be my_name. 28 | -------------------------------------------------------------------------------- /Outdated/ShimExceptionHierarchy.md: -------------------------------------------------------------------------------- 1 | # Shim Exception Hierarchy 2 | 3 | ## Description 4 | ---- 5 | This page displays the shim exception hierarchy. It lists all the expected shim exceptions that may be raised and a description of what they are. 6 | [[br]] 7 | 8 | ## Exception Hierarchy 9 | ---- 10 | Exception hierarchy: 11 | * ShimException 12 | * ShimStackError 13 | * ShimConfigError 14 | * ShimArgumentError 15 | * ShimNotFoundError 16 | * ShimInternalError 17 | [[br]] 18 | 19 | ## Exception Description 20 | ---- 21 | class **ShimException** (Exception): 22 | The base shim exception. All other exceptions 23 | are derived from here. 24 | 25 | 26 | class **ShimStackError** (ShimStackError): 27 | This error will usually be raised if we are unable 28 | to manipulate the shim stack. For example if we 29 | are trying to do a pop() or peek() on an empty stack. 30 | 31 | 32 | class **ShimConfigError** (ShimException): 33 | This error means that the shim library could not 34 | properly configure the shims. 35 | 36 | 37 | class **ShimArgumentError** (ShimException): 38 | This indicates that an argument was provided 39 | that does not match the expected argument for 40 | a function. 41 | 42 | 43 | class **ShimNotFoundError** (ShimException): 44 | This error would be raised if a shim name 45 | is provided that is not found. 46 | 47 | 48 | class **ShimInternalError** (ShimException): 49 | This error is raised if an error occurs while 50 | configuring the shims. 51 | -------------------------------------------------------------------------------- /Outdated/UpdaterUnitTests.md: -------------------------------------------------------------------------------- 1 | # Running Software Updater Unit Tests 2 | 3 | The softwareupdater tests start a local webserver and serve update files from there. 4 | 5 | ---- 6 | 7 | ---- 8 | 9 | 10 | 11 | ## Running the tests 12 | * Linux/Mac/BSD: 13 | * cd to the same directory preparetest.py is in (that is, trunk) 14 | * run `./softwareupdater/test/run_local_tests.sh name_of_directory_to_put_tests_in` 15 | * For example, create a temp directory and pass that as the only argument to the script. 16 | * Windows: 17 | * Run preparetest.py to a folder of your choosing. 18 | * Copy over the files from trunk/softwareupdater/test/ to that same folder. 19 | * Go to that directory and run `python utf.py -m softwareupdaters` 20 | * When running these tests on Windows, ps cannot be used to check process status. You will have to do this yourself in the task manager. 21 | 22 | ## Notes 23 | * There cannot be another instance of softwareupdater.py running, or the restart tests will fail. 24 | 25 | ## Output 26 | 27 | Output if the test passed is one line indicating whether the test passed or failed (this test will take a while). 28 | 29 | If the test fails, output will be produced, telling you what went wrong in which part of the test. 30 | 31 | If everything is successful, there will be an instance of softwareupdater.py and nmmain.py running when the script completes. 
It is non-trivial to clean these up automatically, because we do not directly start these processes. 32 | 33 | Note: Actually, it's not that hard to clean them up (it can be done with process groups). The current scripts don't do it, though, so you'll end up with extra nmmain and software updater process running at the end. -------------------------------------------------------------------------------- /Programming/PortingPythonToRepy.md: -------------------------------------------------------------------------------- 1 | # Porting Guide 2 | 3 | Porting existing code into REPY. 4 | 5 | 6 | 7 | ### Removing imports 8 | ---- 9 | 10 | Since importing is not permitted in REPY it will be necessary to eventually remove any ```import``` statements and use an **include** statement instead. Since for large files this may be difficult to attempt all at once, it can be approached in several stages. 11 | 12 | #### Stage 1 13 | Given a simple example: 14 | 15 | ``` 16 | from foo import * 17 | 18 | squid(x) 19 | ``` 20 | 21 | Or the more specific import: 22 | 23 | ``` 24 | from foo import squid 25 | 26 | squid(x) 27 | ``` 28 | 29 | 30 | The specified names from foo will be imported (except for those starting with an underscore when using ```import *```). The first stage in transitioning to using an include statement will consist of importing the module name into the importing module's symbol table. 31 | 32 | ``` 33 | import foo 34 | 35 | foo.squid(x) 36 | ``` 37 | 38 | Now we use the module name to access the desired functions. 39 | 40 | #### Stage 2 41 | 42 | To complete our transition, we will ```include``` the module instead of importing it. The [/wiki/PythonVsRepy#Importstatements Import statements guide] has more information on the functionality of the includes feature. To avoid collisions in the name space, an appropriate naming convention will be needed since the contents of the foo module will be inlined into the module using the ```include``` statement. 43 | 44 | ``` 45 | include foo.repy 46 | 47 | foo_squid(x) 48 | ``` 49 | 50 | In this example we have used foo and an underscore as a convention to help avoid a collision. 51 | 52 | -------------------------------------------------------------------------------- /Programming/RepyNetworkRestrictions.md: -------------------------------------------------------------------------------- 1 | # Repy Network Restrictions (IP's and Interfaces) 2 | 3 | ---- 4 | 5 | ---- 6 | 7 | 8 | 9 | ## Default Behavior 10 | ---- 11 | Repy's behavior can be broken down into the semantics of getmyip, and the other network calls (recvmess/sendmess/openconn/waitforconn). By default getmyip will attempt to connect to an external address, and report the local IP that was binded to the socket. This generally means that getmyip will return the OS default IP. The other repy network calls will allow any localip to be specified, but they may fail to bind if the IP does not exist. 12 | 13 | 14 | 15 | ## The nootherips flag 16 | ---- 17 | Repy's default behavior with respect to allowing any local ip can be controlled with the use of the "--nootherips" flag. This flag disables repy's acceptance of 'implicit' IP's, and enforces a strict white list policy for the localip specified to networking calls. If the IP passed to the calls is not allowed through the IP flag, or does not belong to a interface which is allowed, then an exception will be raised to inform the developer. If no local IP is specified, the network calls will fall back onto getmyip to get a localip to bind to. 
It should be noted that the --nootherips flag always allows the loopback address 127.0.0.1. This IP cannot be denied. 18 | 19 | 20 | 21 | ## The IP flag 22 | ---- 23 | Repy supports specifying preferred / allowed IP's through the use of this flag. IP's will be considered 'preferred' in the order they are specified. If an IP or interface is specified, its associated IP will be returned by getmyip, overriding the default behavior. 24 | 25 | 26 | 27 | ## The Iface flag 28 | ---- 29 | Repy supports specifying preferred / allowed interfaces through the use of this flag. Interfaces will be considered 'preferred' in the order they are specified. If an IP or interface is specified, its associated IP will be returned by getmyip, overriding the default behavior. Interfaces that experience frequent IP changes may be problematic, since the allowed IP cache is updated only on calls to getmyip(). This is a known bug, but is usually a non-issue since DHCP tends to renew existing IP's rather than assign new ones. 30 | 31 | 32 | 33 | ## Unit Tests 34 | ---- 35 | See [UnitTests this page] for more information. Essentially there are unit tests but they are not part of the standard ones run by run_test.py. They must be run separately through the use of the "-network" flag. 36 | 37 | 38 | 39 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/AdvertiseObjects.repy.md: -------------------------------------------------------------------------------- 1 | # AdvertiseObjects.repy 2 | 3 | Provides two objects which makes advertising more efficient. Can be used as an alternative to [wiki:Advertise.repy]. 4 | 5 | LookupCache caches lookup results to increase performance in programs which calls lookup services frequently. 6 | 7 | AdvertisePipe stores a list of (key, value) tuples and advertises them concurrently in a single thread. 8 | 9 | # Functions 10 | 11 | LookupCache 12 | ``` 13 | lookup(self, key, maxvals=100, lookuptype=['central','opendht','DOR'], concurrentevents=2, graceperiod=10, timeout=60) 14 | ``` 15 | Exactly the same as [advertise.repy](advertise.repy.md). 16 | Note: 17 | 18 | * self refers to its own cache. 19 | 20 | AdvertisePipe 21 | ``` 22 | add(self, key, value) 23 | ``` 24 | Adds a (key, value) pair to the advertise pipe. 25 | Note: 26 | 27 | * self refers to its own cache. 28 | * Each value added is given an unique handle, which is used in remove. 29 | 30 | ``` 31 | remove(self, handle) 32 | ``` 33 | Removes a (key, value) pair from the advertise pipe. 34 | Note: 35 | 36 | * handle is an unique identifier for each value in the advertise pipe. 37 | * self refers to its own cache 38 | 39 | # Usage 40 | 41 | no examples? 42 | 43 | # Includes 44 | [advertise.repy](advertise.repy.md) 45 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/ConcurrencyAndParallelism.md: -------------------------------------------------------------------------------- 1 | ### Description 2 | 3 | The parallelism services offered by Seattle are purely for the client's needs and not required in any way. Running parallel processes are more efficient and may be of interest to some users, but adds a certain amount of overhead since locking must be implemented to prevent crashes and errors in critical sections of code. For this reason, libraries like [wiki:SeattleLib/semaphore.repy] and [wiki:SeattleLib/cv.repy] to help the client implement locks. 4 | 5 | Many modules here also have Python equivalents. 
These are linked appropriately. 6 | 7 | [Back to SeattleLibWiki](../) 8 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/Cryptography.md: -------------------------------------------------------------------------------- 1 | ### Description 2 | 3 | Seattle supports various cryptography choices. Since Seattle includes a lot of network communication, encrypting data is very important. This prevents 3rd party interference. 4 | 5 | Many of the modules in this category are ported directly from Python equivalents, so it may be useful to view the documentation for those pages instead. All of those will be appropriately linked in their own sections. 6 | 7 | Note also a few modules are very experimental in nature and may not have full functionality. All of these discrepancies will be noted as well per section. 8 | 9 | [Back to SeattleLibWiki](../) 10 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/DORadvertise.repy.md: -------------------------------------------------------------------------------- 1 | # DORadvertise.repy 2 | Another method of advertising nodes in the Seattle library. This module advertises to the Digital Object Registry run by CNRI. 3 | ### Functions 4 | ``` 5 | DORadvertise_announce(key, value, ttlval, timeout=None) 6 | ``` 7 | Adds a (key, value) pair into the DOR. 8 | Notes: 9 | 10 | * ttlval describes the length of time in seconds the tuple exists in the DOR. 11 | * Exceptions are raised if there are errors within the XML-RPC client. 12 | * timeout is the number of seconds spent before the process quits. 13 | 14 | ``` 15 | DORadvertise_lookup(key, maxvals=100, timeout=None) 16 | ``` 17 | Looks up a stored value under the key in the DOR. 18 | Notes: 19 | 20 | * maxvals is the maximum number of values returned. 21 | * timeout is the number of seconds spent before the process quits. 22 | 23 | ### Usage 24 | 25 | ???couldn't find any? 26 | 27 | ### Include 28 | [SeattleLib/sockettimeout.repy](sockettimeout.repy.md) 29 | 30 | [SeattleLib/httpretrieve.repy](httpretrieve.repy.md) 31 | 32 | [SeattleLib/xmlparse.repy](xmlparse.repy.md) 33 | 34 | [Back to NodeAdvertising](NodeAdvertising.md) 35 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/DataEncoding.md: -------------------------------------------------------------------------------- 1 | ### Description 2 | 3 | These services are used to protect transmission of data within the network. Encryption is also necessary in order to allow for serialization of the data, which is essential for data transmission as well. Programmers should take care to encrypt all their packets. 4 | 5 | The Seattle Standard Library supports two different types of encoding methods: binary to ascii ([binascii.repy](binascii.repy.md)) and base64 ([base64.repy](base64.repy.md)). The serialization module ([serialize.repy](serialize.repy.md)) allows for serialization and deserialization of various data types. 6 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/DataRetrieval.md: -------------------------------------------------------------------------------- 1 | ### Description 2 | 3 | All the modules in category deal with data retrieval of some form. Since Seattle is a P2P based client, its often necessary to know about a specific node: where it is, what is its IP address, etc. 
In addition to [domainnameinfo.repy](domainnameinfo.repy.md), which simply returns the country of origin, Seattle also provides the [geoip_client.repy](geoip_client.repy.md), which returns specific location information about the client, etc. 4 | 5 | Seattle also provides a basic abstraction to the HTTP protocol, see [httpserver.repy](httpserver.repy.md). 6 | 7 | [Back to SeattleLibWiki](../) 8 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/NAT_advertisement.repy.md: -------------------------------------------------------------------------------- 1 | # NAT_advertisement.repy 2 | 3 | Abstracts the task of looking up and advertising servers and forwarders. Allows for those using NAT_layer to advertise. Utilizes [advertise.repy](advertise.repy.md) to achieve this. 4 | 5 | ### Functions 6 | 7 | ``` 8 | nat_forwarder_advertise(ip, serverport, clientport) 9 | ``` 10 | Registers the forwarder. 11 | 12 | ``` 13 | nat_server_advertise(key, forwarderIP, forwarderCltPort) 14 | ``` 15 | Advertises the server. 16 | 17 | ``` 18 | nat_stop_server_advertise(key) 19 | ``` 20 | Stops advertising the server key. 21 | 22 | ``` 23 | nat_forwarder_list_lookup(): 24 | ``` 25 | Returns a list of OK NAT forwarders. 26 | 27 | ``` 28 | nat_server_list_lookup(key) 29 | ``` 30 | Returns a list of OK NAT servers. 31 | 32 | ``` 33 | nat_toggle_advertisement(enabled, threadRun=True) 34 | ``` 35 | Toggles the state of the advertisement. 36 | Notes: 37 | 38 | * threadRun controls the advertisement thread. 39 | ### Usage 40 | 41 | ??? 42 | 43 | ### Includes 44 | [advertise.repy](advertise.repy.md) 45 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/NetworkCommunication.md: -------------------------------------------------------------------------------- 1 | ### Description 2 | 3 | This section describes the all important tools necessary for dealing with sockets. For instance, we can use the modules in the section to find out critical information about VMs, which are elements that computers whose resources you can access ([getvesselsresources.repy](getvesselsresources.repy.md)). We also have basic control over socket connections such as forcing hanging connections to quit ([sockettimeout.repy](sockettimeout.repy.md)). 4 | 5 | [nmclient.repy](nmclient.repy.md) is a also included here. It contains the backend to what could be an very useful external node manager, but by itself is not fully functional. 6 | 7 | [Back to SeattleLibWiki](../) 8 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/NodeAdvertising.md: -------------------------------------------------------------------------------- 1 | ### Description 2 | 3 | Seattle uses a node based service where available resources are stored by value and key pairs. All available resources (otherwise known as nodes) are thus hashed to a global store (otherwise known as advertising). The Seattle Standard Library provides three different methods of advertising nodes: [SeattleLibcentralizedadvertise.repy](centralizedadvertise.repy.md), [openDHTadvertise.repy](openDHTadvertise.repy.md), or [DORadvertise.repy](DORadvertise.repy.md). 4 | 5 | [SeattleLibcentralizedadvertise.repy](centralizedadvertise.repy.md) uses a centralized hash table to store all the values, which runs on the main Seattle server. This may be desirable to users who do not want to depend on the OpenDHT client in case of failure, etc. 
6 | 7 | [openDHTadvertise.repy](openDHTadvertise.repy.md) uses the OpenDHT client to store key value pairs. 8 | 9 | [DORadvertise.repy](DORadvertise.repy.md). uses the CNRI service. 10 | 11 | One of these services may be chosen for exclusive use, but [advertise.repy](advertise.repy.md) is the most common choice, as it combines all three services and allows the user to pick a specific implementation of node advertising. 12 | 13 | [Back to SeattleLibWiki](../) 14 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/ProgrammerResources.md: -------------------------------------------------------------------------------- 1 | ### Description 2 | 3 | This section contains all the extraneous data structures that Seattle supports, in addition to all the various backend modules that makes seash work. There are several utilities which can help with basic programming tasks such as [math.repy](math.repy.md), [random.repy](random.repy.md), and etc. Notably modules include: 4 | 5 | [safe_eval.repy](safe_eval.repy.md) allows one to safely evaluate strings, free from the context of whatever it is in. 6 | 7 | [argparse.repy](argparse.repy.md) checks command line arguments and separates in a way that is usable. This utility can be used within the context of whatever program it's in, making this a very useful module. 8 | 9 | [urlparse.repy](urlparse.repy.md) parses urls for network communication purposes. This is primarily used in the XML parsing section. 10 | 11 | Please note that the The backend modules that are located here are mostly ones that the average user would not have to consider. 12 | 13 | [Back to SeattleLibWiki](../) 14 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/Time.md: -------------------------------------------------------------------------------- 1 | ### Description 2 | 3 | Like many services, Seattle contains its own time module, which provides various time related functions like getting the time and updating the time. 4 | 5 | Seattle provides both TCP and NTP time services. TCP, otherwise known as Transmission Control Protocol, utilizes timestamps to keep track of time. NTP, or Network Time Protocol, synchronizes clocks on different machines by using various jitter buffers. 6 | 7 | In any case, all programmers should include [time.repy](time.repy.md), which ties both services into one. 8 | 9 | [Back to SeattleLibWiki](../) 10 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/UrlParsingAndXml.md: -------------------------------------------------------------------------------- 1 | ### Description 2 | 3 | This class of modules supported by the Seattle Standard Library deal with URLs and XML. In most cases, Seattle utilizes the XML-RPC protocol to communicate between computers. The remote procedure calls (RPC) are achieved by using HTTP requests. In XML-RPC, the parameter for the the HTTP requests can be nested, in this case with XML. 4 | 5 | In any case that the XML-RPC service is desirable, [xmlrpc_server.repy](xmlrpc_server.repy.md) should be used. There rest of the modules described here which are mostly helper modules to [xmlrpc_server.repy](xmlrpc_server.repy.md). 6 | 7 | Note also there are many parallels between these modules and equivalent Python modules. Please email [shurui@cs.washington.edu] if I have failed to link any. 
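For orientation, the snippet below shows what an XML-RPC call looks like from ordinary Python using the standard xmlrpclib module; the Repy modules in this section provide the same request/response pattern inside the sandbox. The URL and method name are placeholders, not a real Seattle endpoint.

```python
# Plain-Python illustration of the XML-RPC pattern (placeholder URL/method);
# inside Repy you would use xmlrpc_client.repy / xmlrpc_server.repy instead.
import xmlrpclib

proxy = xmlrpclib.ServerProxy("https://example.org/xmlrpc")
# Arguments are serialized into nested XML inside an HTTP POST request,
# and the return value is parsed back out of the XML response.
result = proxy.some_method("an argument", 42)
print result
```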
8 | 9 | [Back to SeattleLibWiki](../) 10 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/advertise.repy.md: -------------------------------------------------------------------------------- 1 | # advertise.repy 2 | 3 | This module allows for different node to be announced via different services (central advertise service, OpenDHT, or both). 4 | 5 | ### Functions 6 | ``` 7 | advertise_announce(key, value, ttlval, concurrentevents=2, graceperiod=10, timeout=60) 8 | ``` 9 | Adds a value to the OpenDHT, central advertise service, or both. 10 | 11 | Notes: 12 | * ttlval must be a positive integer that describes the amount of time until the value expires. 13 | * concurrentevents is how many services to announce in parallel. 14 | * graceperiod and timeout are both optional parameters. 15 | 16 | ### Example Usage 17 | 18 | ``` 19 | advertise_lookup(key, maxvals=100, lookuptype=None, concurrentevents=2, graceperiod=10, timeout=60) 20 | ``` 21 | Lookup an value stored at the given key in OpenDHT, central advertise service, or both. 22 | 23 | Notes: 24 | * lookuptype defaults to look in all types. 25 | * lookuptype, concurrentevents, graceperiod, and timeout are all optional. 26 | 27 | ### Usage 28 | 29 | ``` 30 | #retrieve node list from advertise_lookup 31 | node_list = advertise_lookup(node_state_pubkey, maxvals = 10*1024*1024, lookuptype=[server_lookup_type]) 32 | ``` 33 | 34 | ``` 35 | #within node manager 36 | advertise_announce(advertisekey, str(my_name), adTTL) 37 | ``` 38 | 39 | ### Includes 40 | [SeattleLib/listops.repy](listops.repy.md) 41 | 42 | [SeattleLib/openDHTadvertise.repy](openDHTadvertise.repy.md) 43 | 44 | [SeattleLibcentralizedadvertise.repy](centralizedadvertise.repy.md) 45 | 46 | [SeattleLib/DORadvertise.repy](DORadvertise.repy.md) 47 | 48 | [SeattleLib/parallelize.repy](parallelize.repy.md) 49 | 50 | [Back to NodeAdvertising](NodeAdvertising.md) 51 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/base64.repy.md: -------------------------------------------------------------------------------- 1 | # base64.repy 2 | 3 | Provides a service for encoding data as specified in RFC 3548 (a subset of the python base64 service). See http://docs.python.org/library/base64.html and http://en.wikipedia.org/wiki/Base64 for more information. 4 | 5 | ### Functions 6 | 7 | ``` 8 | base64_b64encode(s, altchars=None) 9 | ``` 10 | Returns an encoded string s using Base64. 11 | 12 | Note: 13 | 14 | * altchars can be used to describe additional characters into the alphabet. 15 | * Default altchars uses the standard Base64 alphabet. 16 | * altchars must be at least 2 characters if not None. 17 | 18 | ``` 19 | base64_b64decode(s, altchars=None) 20 | ``` 21 | Returns after decoding a previously encrypted string s. 22 | 23 | Note: 24 | 25 | * TypeError exception is raised if an error occurs during encoding. 26 | * Ignored characters not in the standard Base64 alphabet. 27 | * See the encoding function for altchars parameters. 28 | 29 | ``` 30 | base64_standard_b64encode(s) 31 | ``` 32 | Like the above encoding function, but this only uses the standard Base64 alphabet. 33 | 34 | ``` 35 | base64_standard_b64decode(s) 36 | ``` 37 | Like the above decoding function, but this only uses the standard Base64 alphabet. 38 | 39 | Note: 40 | * TypeError exception is raised if an decoding error occurs. 
41 | 42 | ``` 43 | base64_urlsafe_b64decode(s) 44 | ``` 45 | Decode a Base64-encoded string using a URL-safe alphabet, which substitutes - instead of + and _ instead of / in the standard Base64 alphabet. 46 | 47 | Note: 48 | * TypeError exception is raised if a decoding error occurs. 49 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/binascii.repy.md: -------------------------------------------------------------------------------- 1 | # binascii.repy 2 | 3 | This module contains methods for converting from binary to ASCII representations and vice versa. 4 | 5 | ### Functions 6 | 7 | ``` 8 | binascii_a2b_hex(hexstr) 9 | ``` 10 | Returns the binary representation of hexstr. 11 | Note: 12 | 13 | * TypeError exception is thrown if hexstr has an odd length. 14 | 15 | ``` 16 | def binascii_b2a_hex(binary_data) 17 | ``` 18 | Returns the ASCII (hexadecimal) representation of binary_data. 19 | 20 | ### Usage 21 | 22 | ``` 23 | name = "shurui" 24 | hex_value = binascii_b2a_hex(name)        # '736875727569' 25 | binary_value = binascii_a2b_hex(hex_value)  # back to "shurui" 26 | ``` 27 | 28 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/centralizedadvertise.repy.md: -------------------------------------------------------------------------------- 1 | # centralizedadvertise.repy 2 | 3 | This module provides a hash table service for nodes. It adds and removes entries in a centralized hash table. This service runs on seattle.cs, which is also known as satya.cs. See CentralizedAdvertiseService for more details. 4 | 5 | ### Functions 6 | 7 | 8 | 9 | ``` 10 | centralizedadvertise_announce(key, value, ttlval) 11 | ``` 12 | Announce a key / value pair into the CHT. 13 | 14 | Notes: 15 | * ttlval must be a positive integer that describes the amount of time until the value expires. 16 | * Network / Timeout exceptions are raised if there are connection errors. 17 | 18 | ``` 19 | centralizedadvertise_lookup(key, maxvals=100) 20 | ``` 21 | Returns the valid values stored under the key, as a list. 22 | 23 | Notes: 24 | * maxvals must be a positive integer that describes how many values to return. 25 | * Network / Timeout exceptions are raised if there are connection errors. 26 | 27 | ### Example Usage 28 | 29 | ``` 30 | # advertise that the current client has started 31 | my_advertisement_info = getmyip() + ":" + str(mycontext['myport']) + ":" + str(keyinfo) 32 | centralizedadvertise_announce(mycontext['experiment_name'], my_advertisement_info, ADVERTISE_PERSIST) 33 | ``` 34 | 35 | ### Includes 36 | [sockettimeout.repy](sockettimeout.repy.md) 37 | 38 | [serialize.repy](serialize.repy.md) 39 | 40 | 41 | 42 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/cv.repy.md: -------------------------------------------------------------------------------- 1 | # cv.repy 2 | 3 | Much like how [wiki:SeattleLib/semaphore.repy] provides a level of abstraction over locks, this module provides an even higher level of abstraction in the form of condition variables. A programmer could use this to implement locking within a program. For instance, the [wiki:SeattleLib/parallelize.repy] module requires locking to prevent errors. 4 | 5 | ### Functions 6 | 7 | ``` 8 | def cv_create(): 9 | ``` 10 | Create a new condition variable and return it to the user. Returns the condition variable handle. 11 | 12 | 13 | ``` 14 | def cv_destroy(handle): 15 | ``` 16 | Destroy the condition variable. 17 | 18 | Notes: 19 | 20 | * handle is the condition variable handle.
21 | * All threads waiting on this condition variable have been notified by a call to notify_one or notify_all. No other function calls in this module should be called concurrently or after. The fact that some other function call in this module might raise an exception while the condition variable is getting destroyed implies a design error in client's code. 22 | * Raises ValueError if the condition variable handle is invalid. 23 | 24 | ``` 25 | def cv_wait(handle): 26 | ``` 27 | Wait for a condition. 28 | 29 | Notes: 30 | 31 | * handle is the condition variable handle. 32 | * Raises ValueError if the condition variable handle is invalid. 33 | 34 | 35 | ``` 36 | def cv_notify_one(handle): 37 | ``` 38 | Notify the next thread in line that the condition was met. 39 | 40 | Notes: 41 | 42 | * handle is the condition variable handle. 43 | * Raises ValueError if the condition variable handle is invalid. 44 | 45 | 46 | ``` 47 | def cv_notify_all(handle): 48 | ``` 49 | Notify all waiting threads that the condition was met. 50 | 51 | Notes: 52 | 53 | * handle is the condition variable handle. 54 | * Raises ValueError if the condition variable handle is invalid. 55 | 56 | ### Includes 57 | 58 | [wiki:SeattleLib/semaphore.repy] -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/domainnameinfo.repy.md: -------------------------------------------------------------------------------- 1 | # domainnameinfo.repy 2 | 3 | This module provides one function - the country which the given hostname is from. The hostname is a string which describes a Seattle user. See below for more details. 4 | 5 | 6 | ### Functions 7 | 8 | ``` 9 | def domainnameinfo_gethostlocation(hostname): 10 | ``` 11 | Given a hostname, returns a string that contains the country the hostname is from. 12 | 13 | Notes: 14 | 15 | * hostname is a hostname string that we want information about. For example: 'planetlab-2.di.fc.ul.pt' 16 | * Raises UnknownHostLocationError: if we don't know the location of the hostname. Only countries with standard abbreviations are recongnized by this module. 17 | * Raises TypeError if hostname isn't a string. 18 | * Returns a string which has the country. 19 | 20 | 21 | ### Usage 22 | 23 | ``` 24 | assert(domainnameinfo_gethostlocation('amazon.uk') == 'United Kingdom') 25 | assert(domainnameinfo_gethostlocation('microsoft.us') == 'United States') 26 | ``` 27 | 28 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/geoip_client.repy.md: -------------------------------------------------------------------------------- 1 | # geoip_client.repy 2 | 3 | XMl-RPC client for remote GeoIP server. Given an IP:port of a GeoIP XML-RPC server, allows location lookup of hostnames and IP addresses. 4 | 5 | This module is an addition to the basic implementation found in [wiki:SeattleLib/xmlrpc_client.repy]. A GeoIP client essentially allows for on the go location lookup of various nodes around the world. This module is optional and not required, as clients can freely choose to use the more tested [wiki:SeattleLib/xmlrpc_client.repy] instead. 6 | 7 | 8 | ### Functions 9 | 10 | ``` 11 | def geoip_init_client(url="http://geoip.poly.edu:12679"): 12 | ``` 13 | Creates a new GeoIP XML-RPC client object. 14 | 15 | Notes: 16 | 17 | * url is a URL (protocol://ip:port) of GeoIP XML-RPC server. 
18 | * url defaults to http://geoip.poly.edu:12679 19 | 20 | 21 | ``` 22 | def geoip_record_by_addr(addr): 23 | ``` 24 | Request location data for the provided IP address from the GeoIP XML-RPC server. 25 | 26 | Notes: 27 | 28 | * addr is the IP address whose location should be looked up. 29 | * Returns a dictionary of location data for the provided IP address. 30 | 31 | 32 | ``` 33 | def geoip_record_by_name(name): 34 | ``` 35 | Request location data for the provided hostname from the GeoIP XML-RPC server. 36 | 37 | Notes: 38 | 39 | * name is the hostname whose location should be looked up. 40 | * Returns a dictionary of location data for the provided hostname. 41 | 42 | 43 | ``` 44 | def geoip_location_str(location_dict): 45 | ``` 46 | Pretty-prints a location specified by location_dict as a comma-separated list. Prints location info as specifically as it can, according to the 47 | 48 | format 'CITY, STATE/PROVINCE, COUNTRY'. 49 | location_dict['city'], location_dict['region_name'], and 50 | location_dict['country_name'] are added if defined, and 51 | location_dict['region_name'] is added if the location is in the US or Canada. 52 | 53 | Notes: 54 | 55 | * location_dict is the dictionary of location information, as returned by a call to geoip_record_by_addr or geoip_record_by_name. 56 | * Returns a string representation of a location. 57 | 58 | 59 | ### Usage 60 | 61 | ``` 62 | client = geoip_init_client(server_address) 63 | # Where server_address is the URL of a remote GeoIP XML-RPC server. 64 | ``` 65 | 66 | Example: 67 | ``` 68 | geoip_init_client(["http://geoipserver.poly.edu:12679"]) 69 | # with no parameters, geoip_init_client() uses 70 | # ["http://geoipserver.poly.edu:12679", "http://geoipserver2.poly.edu:12679"] by default 71 | # then it assigns the returned client to geoip_clientlist 72 | 73 | client = geoip_clientlist[0] 74 | client.send_request("record_by_addr", ("173.194.33.16",), 100) 75 | ``` 76 | 77 | ### Includes 78 | 79 | [wiki:SeattleLib/xmlrpc_client.repy] -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/getvesselsresources.repy.md: -------------------------------------------------------------------------------- 1 | # getvesselsresources.repy 2 | 3 | Given a vessel name, the functions in this module find the correct vessel and report all the resources available to it. 4 | 5 | A client may find this module useful when it is unclear how many resources a vessel is using, or when it is unclear how many resources one should allocate to a particular vessel. 6 | 7 | 8 | ### Functions 9 | 10 | 11 | ``` 12 | def getvesselresources_portnum(portnum, ipaddr=getmyip(), portval=1224): 13 | ``` 14 | Finds all the vessels that have the requested port number and returns a dictionary with all of those vessels and the resources available to them. 15 | 16 | Notes: 17 | 18 | * portnum (int) is the port number that the user is looking for. 19 | * ipaddr is the IP address for which to get the vessels. By default it is the local IP address. 20 | * portval is the port number for which to find vessels. By default it is 1224. 21 | * Throws a ValueError exception if the resource file is not formatted well. 22 | * Returns a dictionary where the keys are the vessel names and the values are lists of resources. Resources can be accessed by returnresult[vessel_name][resource_name] 23 | * If the port value or ip address provided is not associated with the resource port number, then an empty dictionary is returned.
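A short usage sketch for the port-based lookup just described; this is a hedged illustration in which the port number is an arbitrary example and the program is assumed to be able to contact the local node manager.

```
# Find every vessel on this machine that has port 63100 assigned (arbitrary example port).
vessel_resources = getvesselresources_portnum(63100)

for vesselname in vessel_resources:
  # Each value describes the resources available to that vessel.
  print vesselname, vessel_resources[vesselname]
```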
24 | 25 | 26 | ``` 27 | def getvesselresources_vesselname(vesselname, ipaddr=getmyip(), portval=1224): 28 | ``` 29 | Given a vessel name (and maybe ip address and the port value), find all the resources available for that vessel and return a dictionary of the resources for that vessel. 30 | 31 | Notes: 32 | 33 | * vesselname(string) is the vessel that the user wants the resource for 34 | * ipaddr is the ip address for which to get the vessels for. By default it is local ip address. 35 | * portval is the port number for which to find vessels. By defaults it is 1224. 36 | * Throws ValueError exception on an invalide vessel name input. 37 | * Throws a ValueError exception if the resource file is not formatted well. 38 | * Returns a dictionary where the keys are the vessel name and the values are a list of resources. Resources can be accessed by returnresult[vessel_name][resource_name] 39 | * If the port value or ip address provided is not associated with the resource port number, then an empty dictionary is returned. 40 | 41 | 42 | ### Includes 43 | 44 | 45 | [wiki:SeattleLib/nmclient.repy], 46 | [wiki:SeattleLib/rsa.repy] -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/httpserver.repy.md: -------------------------------------------------------------------------------- 1 | # httpserver.repy 2 | 3 | This library abstracts away the details of the HTTP protocol, providing an alternative to calling a user-supplied function on each request. The return value of the user-supplied function determines the response that is sent to the HTTP client. This allows the end user to not have to worry about the semantics of communicating with an HTTP server. 4 | 5 | In many ways, this module is analogous to the http.server module in Python documentation. See http://docs.python.org/py3k/library/http.server.html. 6 | 7 | 8 | ### Classes & Functions 9 | 10 | ``` 11 | def httpserver_registercallback(addresstuple, cbfunc): 12 | ``` 13 | Registers a callback function on the (host, port). 14 | 15 | Notes: 16 | 17 | * addresstuple is an address 2-tuple to bind to: ('host', port). 18 | * cbfunc is the callback function to process requests. It takes one argument, which is a dictionary describing the HTTP request. It looks like this (just an example): 19 | { 20 | 'verb': 'HEAD', 21 | 'path': '/', 22 | 'querystr': 'foo=bar&baz', 23 | 'querydict': { 'foo': 'bar', 'baz': None } 24 | 'version': '0.9', 25 | 'datastream': object with a file-like read() method, 26 | 'headers': { 'Content-Type': 'application/x-xmlrpc-data'}, 27 | 'httpdid': 17, 28 | 'remoteipstr': '10.0.0.4', 29 | 'remoteportnum': 54001 30 | } 31 | ('datastream' is a stream of any HTTP message body data sent by the 32 | client.) 33 | 34 | It is expected that this callback function returns a dictionary of: 35 | { 36 | 'version': '0.9' or '1.0' or '1.1', 37 | 'statuscode': any integer from 100 to 599, 38 | 'statusmsg' (optional): an arbitrary string without newlines, 39 | 'headers': { 'X-Header-Foo': 'Bar' }, 40 | 'message': arbitrary string 41 | } 42 | * Exceptions (TypeError, ValueError, KeyError, IndexError) are raised if arguments to this function are malformed. 43 | * The (hostname, port) tuple cannot be restricted, already taken, etc. 44 | * Returns a handle for the listener (an httpdid). This can be used to stop the server. 45 | 46 | 47 | ``` 48 | def httpserver_stopcallback(callbackid): 49 | ``` 50 | Removes an existing callback function, i.e. stopping the server. 
51 | 52 | Notes: 53 | 54 | * callbackid is the id returned by httpserver_registercallback(). 55 | * Raises IndexError or KeyError if the id is invalid or has already been deleted. 56 | 57 | 58 | ### Includes 59 | 60 | [wiki:SeattleLib/urllib.repy](urllib.repy.md) 61 | 62 | [wiki:SeattleLib/urlparse.repy](urlparse.repy.md) 63 | 64 | [wiki:SeattleLib/uniqueid.repy](uniqueid.repy.md) 65 | 66 | [wiki:SeattleLib/sockettimeout.repy](sockettimeout.repy.md) 67 | 68 | [wiki:SeattleLib/httpretrieve.repy](httpretrieve.repy.md) 69 | 70 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/listops.repy.md: -------------------------------------------------------------------------------- 1 | # listops.repy 2 | 3 | A simple library of list commands that allow the programmer to do list composition operations. These lists can be used as data structures for whatever purposes. 4 | 5 | 6 | ### Functions 7 | 8 | ``` 9 | def listops_difference(list_a,list_b): 10 | ``` 11 | Return a list that has all of the items in list_a that are not in list_b. Duplicates are removed from the output list 12 | 13 | Notes: 14 | 15 | * Raises TypeError if list_a or list_b is not a list. 16 | * Returns a list containing list_a - list_b 17 | 18 | 19 | ``` 20 | def listops_union(list_a,list_b): 21 | ``` 22 | Return a list that has all of the items in list_a or in list_b. Duplicates are removed from the output list 23 | 24 | Notes: 25 | 26 | * Raises TypeError if list_a or list_b is not a list. 27 | * Returns a list containing list_a ∪ list_b 28 | 29 | 30 | ``` 31 | def listops_intersect(list_a,list_b): 32 | ``` 33 | Return a list that has all of the items in both list_a and list_b. Duplicates are removed from the output list 34 | 35 | Notes: 36 | 37 | * Raises TypeError if list_a or list_b is not a list. 38 | * Returns a list containing list_a ∩ list_b 39 | 40 | 41 | ``` 42 | def listops_uniq(list_a): 43 | ``` 44 | Return a list that has no duplicate items 45 | 46 | Notes: 47 | 48 | * Raises TypeError if list_a is not a list. 49 | * Returns a list containing list_a which contains only unique values. -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/math.repy.md: -------------------------------------------------------------------------------- 1 | # math.repy 2 | 3 | An incomplete math library. Provides a few functions. 4 | 5 | 6 | ### Functions 7 | 8 | ``` 9 | def math_ceil(x): 10 | ``` 11 | Returns the rounded value of x. 12 | 13 | 14 | ``` 15 | def math_floor(x): 16 | ``` 17 | Returns the floored value of x. 18 | 19 | 20 | ``` 21 | def math_log(X, base=math_e, epsilon=1e-16): 22 | ``` 23 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/md5py.repy.md: -------------------------------------------------------------------------------- 1 | # md5py.repy 2 | 3 | Encrypts a string of arbitrary length into a 128-bit "fingerprint", creating what is essentially a digital signature. Based on the python version. See http://docs.python.org/library/md5.html and http://en.wikipedia.org/wiki/MD5 for more details. 4 | 5 | ### Functions 6 | ``` 7 | update(self, inBuf) 8 | ``` 9 | Updates the md5 object with the string inbuf. Repeated calls result in the concatenation of the arguments. 10 | 11 | 12 | Notes: 13 | * self describes the md5 object. 14 | 15 | ``` 16 | digest(self) 17 | ``` 18 | Return the digest of the strings passed to the update(self, inBuf) so far. 
This is a 16-byte string which may contain non-ASCII characters, including null bytes. 19 | 20 | ``` 21 | hexdigest(self) 22 | ``` 23 | Similar to digest(self) expect this returns the hexadecimal form of the digest. 24 | 25 | ``` 26 | copy(self) 27 | ``` 28 | Return a copy ('clone') of the md5 object. This can be used to efficiently compute the digests of strings that share a common initial substring. 29 | 30 | ``` 31 | md5py_new(arg=None) 32 | ``` 33 | Returns a new md5py object. 34 | 35 | ``` 36 | md5py_md5(arg=None) 37 | ``` 38 | Same as md5py_new(arg) - necessary for backward compatibility reasons. 39 | 40 | ### Usage 41 | 42 | ``` 43 | testdigest = binascii_b2a_hex(md5py_new("").digest()) 44 | assert(testdigest == 'd41d8cd98f00b204e9800998ecf8427e') 45 | 46 | testdigest = binascii_b2a_hex(md5py_new("a").digest()) 47 | assert(testdigest == '0cc175b9c0f1b6a831c399e269772661') 48 | ``` -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/ntp_time.repy.md: -------------------------------------------------------------------------------- 1 | # ntp_time.repy 2 | 3 | An implementation of [wiki:timeinterface.repy] 4 | 5 | This module handles getting the time from an external source, via UDP. It gets the remote time once and then uses the offset from the local clock from then on to return the current time. 6 | 7 | To use this module: 8 | 9 | 1. Make a call to time_updatetime(localport) with a local UDP port that you have permission to send/recv on. This will contact some random subset of NTP servers to get and store the local time. 10 | 2. Call time_gettime() which will return the current time (in seconds). 11 | 3. time_gettime() can be called at any point after having called time_updatetime(localport) since time_gettime() simply calculates how much time has elapsed since the local time was originally acquired from one of the NTP servers. 12 | 13 | ### Functions 14 | ``` 15 | ntp_time_updatetime(localport) 16 | ``` 17 | Obtains and stores the local time from a subset of NTP servers. 18 | 19 | 20 | Notes: 21 | * localport is the local port used when connecting to NTP servers. 22 | * TimeError is raised if anything (NTP server, getip()) times out. 23 | * If time_gettime() fails, then time_updatetime(localport) can be called again to sample time from another random set of NTP servers. 24 | 25 | ### Usage 26 | 27 | ? 28 | 29 | ### Includes 30 | 31 | [wiki:SeattleLib/random.repy] -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/openDHTadvertise.repy.md: -------------------------------------------------------------------------------- 1 | # openDHTadvertise.repy 2 | Utilizes OpenDHT to advertise availability of nodes. 3 | 4 | ### Functions 5 | ``` 6 | openDHTadvertise_announce(key, value, ttlval, concurrentevents=5, proxiestocheck=5, timeout=None) 7 | ``` 8 | Announces a (key, value) pair to OpenDHT. 9 | 10 | Notes: 11 | * ttlval must be a positive integer that describes the amount of time until the value expires. 12 | * Exception will be raised if XML-RPC connections fail. 13 | 14 | ``` 15 | openDHTadvertise_lookup(key, maxvals=100, concurrentevents=5, proxiestocheck=5, timeout=None): 16 | ``` 17 | Looks up the (key, value) pair from the OpenDHT. 18 | 19 | Notes: 20 | * maxvals must be a positive integer that describes how many values to return. 21 | 22 | ``` 23 | openDHTadvertise_checkserver(servername) 24 | ``` 25 | Check to see if a server is ready for OpenDHT. 
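The original example usage for this module is missing, so here is a hedged sketch of the announce / lookup pair described above; the key, value, and TTL are arbitrary choices for illustration.

```
# Advertise this node's IP under an arbitrary key for 120 seconds.
openDHTadvertise_announce("demo-service-key", getmyip(), 120)

# Later (possibly on another node), retrieve whatever is advertised under that key.
values = openDHTadvertise_lookup("demo-service-key", maxvals=10)
```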
26 | 27 | ``` 28 | openDHTadvertise_get_proxy_list(maxnumberofattempts=5, concurrentevents=5) 29 | ``` 30 | Retrieves a list of active OpenDHT proxies. 31 | 32 | ### Example Usage 33 | ...? only located in .svn directories? 34 | 35 | ### Includes 36 | [include random.repy](random.repy.md) 37 | 38 | [include sha.repy](sha.repy.md) 39 | 40 | [include xmlrpc_client.repy](xmlrpc_client.repy.md) 41 | 42 | [include parallelize.repy](parallelize.repy.md) 43 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/priority_queue.repy.md: -------------------------------------------------------------------------------- 1 | # priority_queue.repy 2 | 3 | This module implements a Priority Queue class using a heap. Expected runtime: getMinimum: O(1), deleteMinimum: O(log n), and insert: O(log n). 4 | 5 | 6 | ### Functions 7 | 8 | ``` 9 | def getMinimum(self): 10 | ``` 11 | Gets the element with the minimum priority. Returns a tuple (priority, value). None if there are no nodes. 12 | 13 | 14 | ``` 15 | def insert(self, priority, value): 16 | ``` 17 | Inserts a new node into the Priority Queue. 18 | 19 | Notes: 20 | * priority is the the priority for this new node. 21 | * value is the value to associate with this priority. 22 | 23 | 24 | ``` 25 | def deleteMinimum(self): 26 | ``` 27 | Deletes and returns the element with the minimum priority. Returns a tuple (priority, value). None if there are no nodes. 28 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/repypp.py.md: -------------------------------------------------------------------------------- 1 | # repypp.py 2 | 3 | As repy does not allow the use of import statements, the repy preprocessor provides programmers with a means of including code from other modules along with their programs. After passing through the preprocessor, the resulting file will contain the contents of the included modules, inlined alongside your program code. From a programming perspective, this is very similar to **from module import ***. 4 | 5 | Files needed preprocessing should have a ***.mix** extension, and processed files should have a ***.repy** extension. Note "include" must be the first character on the line. (no indentation allowed!). 6 | 7 | Source mycode.mix: 8 | ```repy 9 | include serialize.repy 10 | include base64.repy 11 | 12 | # Your code here... 13 | def my_function(): 14 | """ 15 | ... 16 | """ 17 | ``` 18 | 19 | Then in seattle_repy do: 20 | 21 | ```python repypp.py mycode.mix mycode.repy``` 22 | 23 | After this, mycode.repy will be automatically generated for you. And you can now run this in repy. Every time you make any changes to mycode.mix, do the above command again to regenerate updated mycode.repy file. Result mycode.repy looks like this: 24 | 25 | ```repy 26 | #begin include serialize.repy 27 | 28 | #end include serialize.repy 29 | #begin include base64.repy 30 | 31 | #end include base64.repy 32 | 33 | # Your code here... 34 | def my_function(): 35 | """ 36 | ... 37 | """ 38 | ``` 39 | 40 | ### Functions 41 | 42 | ``` 43 | def processfiledata(stringlist): 44 | ``` 45 | Scans lines with "include" and retrieves the correct module to be included. 46 | 47 | 48 | ``` 49 | def processfile(filename): 50 | ``` 51 | Builds the correct file into a dictionary and returns the file's data. 
52 | 53 | 54 | ``` 55 | def recursive_build_outdata(filename, includedfiles,filedict): 56 | ``` 57 | Like above, but recursively builds and returns the data from file to be imported. 58 | 59 | 60 | 61 | 62 | ### Usage 63 | 64 | To use this module via the command line, simply pass the source file and the output file names: 65 | ```sh 66 | $ python repypp.py sourcefilefn.mix outputfilefn.repy 67 | ``` -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/repyunit.repy.md: -------------------------------------------------------------------------------- 1 | # repyunit.repy 2 | 3 | A unit testing suite for repy. Based on JUnit in java and the Python unittest suite. See http://docs.python.org/library/unittest.html for more information. 4 | 5 | 6 | ### Classes & Functions 7 | 8 | ``` 9 | class repyunit_TestResult(object): 10 | ``` 11 | Hold test result statistics and process test outcomes. Subclasses may extend this class to provide additional functionality, like printing output. 12 | 13 | The test_count instance member holds the number of tests executed; success_count holds the number of tests executed that were successful, failure_count holds the number of tests executed that have failed, error_count holds the number of tests executed that resulted in an error. 14 | 15 | 16 | 17 | 18 | ``` 19 | class repyunit_TestCase(object): 20 | ``` 21 | Encapsulate a test case. This class is the workhorse of the module. It defines the methods required to run tests by extending the class. 22 | 23 | Under regular Python, the class name would be extracted automatically, but this is not possible in Repy. Thus, subclasses must override the get_class_name method. For the same reasons, subclasses must also override the get_test_method_names method if run_test is not overwritten or when additional tests are defined. 24 | 25 | 26 | 27 | 28 | ``` 29 | class repyunit_TestSuite(object): 30 | ``` 31 | Encapsulate a collection of unit tests. This class is used to group together repyunit_TestCase instances for execution. 32 | 33 | 34 | 35 | 36 | ``` 37 | def repyunit_load_tests_from_test_case(test_case): 38 | ``` 39 | Populate a repyunit_TestSuite with all tests from a repyunit_TestCase to run all the tests automatically. 40 | 41 | Notes: 42 | 43 | * test_case is the repyunit_TestCase from which to populate the repyunit_TestSuite. 44 | * Returns a repyunit_TestSuite loaded with all the tests from the given test case. 45 | 46 | 47 | 48 | 49 | ``` 50 | def repyunit_text_test_run(test_case): 51 | ``` 52 | Run all tests given in test and print statistical information in textual format. 53 | 54 | Notes: 55 | 56 | * test_case is a repyunit_TestCase or repyunit_TestSuite to run the test(s). 57 | * Prints in the following form: 58 | ``` Ran %d tests: %d successes, %d failures, %d errors ``` 59 | 60 | 61 | 62 | ### Usage 63 | 64 | To get the testing results. 65 | ``` 66 | test = FoobarTestSuite() 67 | test.run(repyunit_TestResult()) 68 | ``` 69 | 70 | 71 | 72 | To run testing cases. 73 | ``` 74 | class FooTestCase(repyunit_TestCase): 75 | def get_class_name(self): 76 | return "FooTestCase" 77 | def run_test(self): 78 | self.assert_true(True) 79 | 80 | test = FooTestCase() 81 | test.run() 82 | ``` 83 | 84 | 85 | 86 | To run a testing suite. 87 | ``` 88 | suite = repyunit_TestSuite(). 
89 | suite.add_test(FoobarTest()) 90 | suite.add_test(FoobarTest("test_baz")) 91 | suite.run(CustomTestResult()) 92 | ``` -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/safe_eval.repy.md: -------------------------------------------------------------------------------- 1 | # safe_eval.repy 2 | 3 | Provides a method to safely evaluate a string. See usage for more information. 4 | 5 | 6 | ### Functions 7 | 8 | 9 | ``` 10 | def safe_eval(code, context=None): 11 | ``` 12 | Allows code to be safely evaluated. 13 | 14 | Notes: 15 | 16 | * code is the code to be evaluated. 17 | * context is the context to evaluate the code in. If not specified, an empty context is used. 18 | 19 | 20 | ### Usage 21 | 22 | 23 | ``` 24 | To perform the evaluation, a new entry '_result' is added into the 25 | context prior to execution and removed afterward. This will override any 26 | existing value. The code is evaluated by prepending '_result = ', so 27 | make sure the code still is sane after this. See safe_exec. 28 | 29 | As an example: 30 | val = safe_eval('123') 31 | print val # 123 32 | 33 | val = safe_eval('10 * 20 / 5') 34 | print val # 40 35 | 36 | val = safe_eval('print "Hi!"') # Exception, _result = print "Hi!" is not valid. 37 | ``` -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/semaphore.repy.md: -------------------------------------------------------------------------------- 1 | # semaphore.repy 2 | 3 | Although repy already contains a locking method, this module provides a level of abstraction by giving the end users the possibility of using semaphores. Though semaphores are slightly superior to locks, this module is not required to run anything. Using this purely at the programmer's discretion and need. 4 | 5 | ### Functions 6 | 7 | ``` 8 | def semaphore_create(): 9 | ``` 10 | Creates a new semaphore and return it to the user. Returns an unique semaphore handle. 11 | 12 | 13 | ``` 14 | def semaphore_destroy(semaphorehandle): 15 | ``` 16 | Clean up a semaphore that is no longer needed. All currently blocked threads will be unblocked. All future uses of the semaphore will fail. 17 | 18 | Notes: 19 | 20 | * semaphorehandle is the semaphore handle to destroy 21 | * Returns True if it cleaned up the semaphore handle, False if the handle was already cleaned up. 22 | 23 | 24 | ``` 25 | def semaphore_up(semaphorehandle): 26 | ``` 27 | Increment a sempahore (possibly unblocking a thread) 28 | 29 | Notes: 30 | 31 | * semaphorehandle is the semaphore handle generated by the create method. 32 | * Raises ValueError if the semaphorehandle is invalid. 33 | 34 | 35 | ``` 36 | def semaphore_down(semaphorehandle): 37 | ``` 38 | Decrement a sempahore (possibly blocking this thread) 39 | 40 | Notes: 41 | 42 | * semaphorehandle is the semaphore handle generated by the create method. 43 | * Raises ValueError if the semaphorehandle is invalid. 44 | 45 | ### Includes 46 | 47 | [uniqueid.repy](uniqueid.repy.md) 48 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/serialize.repy.md: -------------------------------------------------------------------------------- 1 | # serialize.repy 2 | 3 | Serializes and deserializes built-in repy types. 4 | This includes strings, integers, floats, booleans, None, complex, tuples, lists, sets, frozensets, and dictionaries. 
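The usage section of this page is otherwise empty, so here is a minimal hedged round-trip sketch using the two functions documented below; the example value is arbitrary.

```
original = {'name': 'vessel-1', 'ports': [63100, 63101], 'active': True}

# Convert the value to a string, e.g. to send over a socket or write to a file.
wire_format = serialize_serializedata(original)

# Recover the original value (with its type) on the receiving side.
restored = serialize_deserializedata(wire_format)

assert(restored == original)
```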
5 | 6 | ### Functions 7 | ``` 8 | serialize_serializedata(data) 9 | ``` 10 | Convert a data item of any type into a string such that it can be later deserialized. Returns this string. 11 | 12 | Notes: 13 | * data can be of any type except objects. 14 | 15 | ``` 16 | serialize_deserializedata(datastr) 17 | ``` 18 | Convert a serialized data string back into its original type. 19 | 20 | Notes: 21 | * datastr is the string to be deseriailized. 22 | * ValueError is raised if the string is corrupted. 23 | * TypeError is raised if the type of 'data' isn't allowed. 24 | 25 | ### Usage 26 | 27 | ? -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/servicelookup.repy.md: -------------------------------------------------------------------------------- 1 | # servicelookup.repy 2 | 3 | This simple module has one purpose, returning a list of the vessels that match the owner's key / information. This module may be helpful in displaying information or for certain debugging purposes. 4 | 5 | 6 | ### Functions 7 | 8 | 9 | ``` 10 | def servicelookup_get_servicevessels(vesseldict, ownerkey, ownerinfo): 11 | ``` 12 | Return a list of vessels that match the owner key and contain the ownerinfo 13 | 14 | Notes: 15 | 16 | * vesseldict is the vesselinfo dictionary 17 | * ownerkey is the ownerkey string to match 18 | * ownerinfo is the owner information string that contained in the vessels 19 | 20 | 21 | ### Includes 22 | 23 | [wiki:SeattleLib/rsa.repy] -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/session.repy.md: -------------------------------------------------------------------------------- 1 | # session.repy 2 | 3 | This module wraps communications in a signaling protocol. The purpose is to overlay a connection-based protocol with explicit message signaling. Session lets both the sides of the communication send and receive messages over a connected stream. 4 | 5 | ### Functions 6 | 7 | ``` 8 | def session_recvmessage(socketobj): 9 | ``` 10 | Grabs the next message off the socket connection. 11 | 12 | 13 | ``` 14 | def session_sendmessage(socketobj,data): 15 | ``` 16 | Sends the message to the socket connection. 17 | 18 | Notes: 19 | 20 | * The protocol is to send the size of the message followed by 21 | n and then the message itself. The size of a message must be able to be stored in sessionmaxdigits. A size of -1 indicates that this side of the connection should be considered closed. 22 | 23 | 24 | ### Usage 25 | 26 | 27 | ``` 28 | # connect to the server 29 | mailserversockobj= openconn('www.gmail.com', 2525) 30 | # send my credentials 31 | session_sendmessage(mailserversockobj, 'user:justinc password:12345 32 | n') 33 | # see if it worked 34 | serverresponse = session_recvmessage(mailserversockobj) 35 | 36 | def client_connection_callback(ip, port, clientsockobj, mych, mainch): 37 | # read credentials 38 | credential_info = session_recvmessage(clientsockobj) 39 | 40 | if credentials_are_valid(credential_info): 41 | session_sendmessage(clientsockobj, 'OK') 42 | else: 43 | session_sendmessage(clientsockobj, 'ERROR: Credentials are invalid! 44 | n') 45 | ``` 46 | 47 | 48 | Note that the client will block while sending a message, and the receiver will block while recieving a message. While it should be possible to reuse the connectionbased socket for other tasks so long as it does not overlap with the time periods when messages are being sent, this is inadvisable. 
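To make the framing above concrete, here is a hedged sketch of what a send puts on the wire; the exact byte layout is inferred from the size-then-newline-then-payload description, and sockobj stands for any connected Repy socket.

```
sockobj = openconn('127.0.0.1', 12345)   # any connected stream socket

msg = "hello"
session_sendmessage(sockobj, msg)        # wire format (inferred): "5" + "\n" + "hello"

reply = session_recvmessage(sockobj)     # reads the size line, then exactly that many bytes
# A size of -1 on the wire means the other side considers the session closed.
```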
49 | 50 | 51 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/sha.repy.md: -------------------------------------------------------------------------------- 1 | # sha.repy 2 | 3 | This module contains a simple implementation of the SHA-1 algorithm. SHA stands for Secure Hash Algorithm, and SHA-1 produces 160-bit message digests. See http://en.wikipedia.org/wiki/SHA-1 for more details about this hash algorithm. This module is primarily based off the Python equivalent. See http://docs.python.org/library/sha.html for more details about the standard version. Seattle's SHA-1 implementation hashes strings; as with any cryptographic hash, the digest cannot be reversed to recover the original string. 4 | 5 | Please note that Seattle also has [wiki:SeattleLib/md5py.repy], which is arguably better than this. All the public functions have counterparts in [wiki:SeattleLib/md5py.repy]. 6 | 7 | ### Functions 8 | 9 | ``` 10 | def sha_new(arg=None): 11 | ``` 12 | Return a new sha crypto object. 13 | 14 | 15 | ``` 16 | def sha_hash(string): 17 | ``` 18 | Gives the hash of a string. 19 | 20 | 21 | ``` 22 | def sha_hexhash(string): 23 | ``` 24 | Gives the hash of a string, but returns the hash in hex form. This string has a fixed length of 40 and contains only hexadecimal digits. It may be used to exchange the value safely in email or other non-binary environments. 25 | 26 | 27 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/sockettimeout.repy.md: -------------------------------------------------------------------------------- 1 | # sockettimeout.repy 2 | 3 | 4 | sockettimeout.repy is a library that causes sockets to time out if a receive / send call blocks for more than an allotted amount of time. It implements this by wrapping the Repy openconn, waitforconn, and stopcomm functions. 5 | 6 | ### Classes & Functions 7 | 8 | 9 | ``` 10 | class _timeout_socket(): 11 | ``` 12 | Provides a socket-like object which supports custom timeouts for send() and recv(). 13 | 14 | 15 | ``` 16 | def timeout_openconn(desthost, destport, localip=None, localport=None, timeout=5): 17 | ``` 18 | Returns a socket object for the user. Same as the Repy openconn. 19 | 20 | 21 | ``` 22 | def timeout_waitforconn(localip, localport, function, timeout=5): 23 | ``` 24 | Wrapper for the Repy waitforconn. 25 | 26 | 27 | ``` 28 | def timeout_stopcomm(commhandle): 29 | ``` 30 | Wrapper for the Repy stopcomm function. 31 | 32 | 33 | ### Usage 34 | 35 | ``` 36 | 37 | 38 | # hello world 39 | include sockettimeout.repy 40 | 41 | 42 | def mycallback(ip, port, sockobj, commhandle, listenhandle): 43 | try: 44 | # This should hang until it times out. 45 | sockobj.recv(100) 46 | except SocketTimeoutError: 47 | pass 48 | except: 49 | raise 50 | else: 51 | raise Exception("No SocketTimeoutError raised by sockobj.recv()") 52 | 53 | 54 | def server(): 55 | commhandle = timeout_waitforconn(getmyip(), 12345, mycallback) 56 | 57 | 58 | def client(): 59 | sockobj = timeout_openconn(getmyip(), 12345) 60 | # The timeout on the socket the callback gets is 5 seconds. We want to 61 | # avoid the socket being closed because we lose the reference to it. 62 | sleep(10) 63 | # The client never sends anything.
64 | 65 | 66 | def main(): 67 | server() 68 | client() 69 | sleep(.1) 70 | exitall() 71 | 72 | 73 | if callfunc == 'initialize': 74 | main() 75 | ``` 76 | 77 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/sshkey_paramiko.repy.md: -------------------------------------------------------------------------------- 1 | # sshkey_paramiko.repy 2 | 3 | This module is to be used by [wiki:SeattleLib/sshkey.repy]. This is not a stand alone module. [wiki:SeattleLib/sshkey.repy] is the wrapper module that developers should use. 4 | 5 | Note that much of this module is based off code that has been modified or taken from paramiko. It is licensed under a different license then the rest of the code and to avoid any conflict it has been separated into its own module. All the functions available in this module will be documented here for consistency, but usage details will be left out since all users should be utilizing [wiki:SeattleLib/sshkey.repy] instead. 6 | 7 | 8 | 9 | ### Functions 10 | 11 | ``` 12 | Exceptions 13 | 14 | class sshkey_paramiko_BERException (Exception): 15 | This exception indicates that the BER decoding was not recognized 16 | 17 | class sshkey_paramiko_SSHException(Exception): 18 | This exception indicates that the ssh key was unable to be decoded 19 | 20 | class sshkey_paramiko_EncryptionException(Exception): 21 | This exception indicates that the ssh key was unable to be decrypted 22 | ``` 23 | 24 | 25 | ``` 26 | class _sshkey_paramiko_BER(object): 27 | This class performs BER decoding. 28 | ``` 29 | 30 | 31 | ``` 32 | def _sshkey_paramiko_get_bytes(packet, n): 33 | ``` 34 | 35 | ``` 36 | def _sshkey_paramiko_inflate_long(s, always_positive=False): 37 | ``` 38 | 39 | ``` 40 | def _sshkey_paramiko_get_string(packet): 41 | ``` 42 | 43 | ``` 44 | def _sshkey_paramiko_generate_key_bytes(salt, key, nbytes): 45 | ``` 46 | 47 | ``` 48 | def _sshkey_paramiko_read_public_key(openfile): 49 | ``` 50 | 51 | ``` 52 | def _sshkey_paramiko_read_private_key(tag, openfile, password=None): 53 | ``` 54 | 55 | ``` 56 | def _sshkey_paramiko_decode_private_key(tag, file, password=None): 57 | ``` 58 | 59 | ### Includes 60 | 61 | [wiki:SeattleLib/sshkey.repy] 62 | 63 | [wiki:SeattleLibbase64.repy] 64 | 65 | [wiki:SeattleLib/md5py.repy] 66 | 67 | [wiki:SeattleLib/pyDes.repy] 68 | 69 | [wiki:SeattleLibbinascii.repy] 70 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/strace.py.md: -------------------------------------------------------------------------------- 1 | # strace.py 2 | 3 | This namespace can be used as an intermediary logging namespace to log calls to the Repy API functions. Essentially this allows a user or a client to trace his or her calls. Repy bases a lot of functionality off this. 4 | 5 | 6 | ### Classes & Functions 7 | 8 | 9 | ``` 10 | def traced_call(self,name,func,args,kwargs,no_return=False,print_args=True,print_result=True): 11 | ``` 12 | Traces the function call. 13 | 14 | 15 | ``` 16 | class NonObjAPICall(): 17 | ``` 18 | Used for API calls that don't return objects 19 | 20 | 21 | ``` 22 | class SocketObj(): 23 | ``` 24 | This class is used for socket objects. 
25 | 26 | 27 | ``` 28 | class LockObj(): 29 | ``` 30 | This class is used for lock objects 31 | 32 | 33 | ``` 34 | class FileObj(): 35 | ``` 36 | This class is used for file objects 37 | 38 | 39 | ``` 40 | def wrapped_openconn(*args, **kwargs): 41 | ``` 42 | Wrap the call to openconn, tracing the call as well. 43 | 44 | 45 | ``` 46 | def wrapped_waitforconn(*args, **kwargs): 47 | ``` 48 | Wrap the call to waitforconn, tracing the call as well. 49 | 50 | 51 | ``` 52 | def wrap_all(): 53 | ``` 54 | Wrap all the API calls so they can be traced -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/tcp_time.repy.md: -------------------------------------------------------------------------------- 1 | # tcp_time.repy 2 | 3 | This module is an implementation of time_interface.repy 4 | 5 | To use this module, make one call to time_updatetime() to get the time from the server. This function also implicitly sets the time. Then call time_gettime() every time the current time is needed. 6 | 7 | ### Functions 8 | 9 | ``` 10 | tcp_time_updatetime(localport) 11 | ``` 12 | Opens a connection with a server hosting time_server.repy, which obtains the current time via a NTP, then calls time_settime(float(currenttime)) to set the current time to the received value form the server. 13 | 14 | 15 | Notes: 16 | * Exception raised if advertise_lookup("time_server") fails after ten tries. 17 | * Exception raised when a connection is not able to be established with any of the servers running time_server.repy. 18 | 19 | ### Usage 20 | 21 | ? 22 | 23 | ### Includes 24 | 25 | [wiki:SeattleLib/time_interface.repy] 26 | [wiki:include advertise.repy] 27 | [wiki:include random.repy] 28 | [wiki:include sockettimeout.repy] -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/textops.py.md: -------------------------------------------------------------------------------- 1 | # textops.py 2 | 3 | Provides text-processing utility functions loosely modelled after GNU coreutils. See http://www.gnu.org/software/coreutils/ for more details. 4 | 5 | Currently supports a subset of the functionality of grep and wc. 6 | 7 | 8 | ### Functions 9 | 10 | ``` 11 | def textops_rawtexttolines(text, linedelimiter=" 12 | n"): 13 | ``` 14 | Converts raw text (a string) into lines that can be processed by the functions in this module. 15 | 16 | Notes: 17 | 18 | * text is the text to convert into lines (basically, a sequence of strings). 19 | * linedelimiter (optional, defaults to " 20 | n"): The string denoting an EOL (" 21 | n" on Unix, " 22 | r 23 | n" on Windows). 24 | * Raises TypeError on bad parameters. 25 | * Returns a sequence of strings; each element is a line, with newlines removed. 26 | 27 | 28 | ``` 29 | def textops_grep(match, lines, exclude=False, case_sensitive=True): 30 | ``` 31 | Return a subset of lines matching (or not matching) a given string. 32 | 33 | Notes: 34 | 35 | * match is the string to be match. 36 | * lines are the lines to filter. 37 | * exclude (optional, defaults to false). If false, include lines matching 'match'. If true, include lines not matching 'match'. 38 | * case_sensitive (optional, defaults to true). If false, ignore case when comparing 'match' to the lines. 39 | * Raises TypeError on bad parameters. 40 | * Returns a subset of the original lines. 
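A small hedged sketch of the two functions described above; the log text is invented for the example.

```
raw = "GET /index.html 200\nGET /missing.html 404\nPOST /form 200"

# Split raw text into lines, then keep only the lines mentioning a 404.
lines = textops_rawtexttolines(raw)
not_found = textops_grep("404", lines)                       # ['GET /missing.html 404']
everything_else = textops_grep("404", lines, exclude=True)   # the other two lines
```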
41 | 42 | 43 | ``` 44 | def textops_cut(lines, delimiter=" 45 | t", characters=None, fields=None, complement=False, only_delimited=False, output_delimiter=None): 46 | ``` 47 | Perform the same operations as GNU coreutils' cut. 48 | 49 | Notes: 50 | 51 | * lines are the lines to perform a cut on. 52 | * delimiter (optional). Field delimiter. Defaults to " 53 | t" (tab). 54 | * characters (optional). Characters selector. Used to select some subset of characters in the lines. Should be a sequence argument. Caller must use one of characters or fields; not both. 55 | * fields (optional). Fields selector. Used to select some subset of fields in the lines. Should be a sequence argument. Caller must use one of characters or fields; not both. 56 | * complement (optional). Invert which characters or fields get selected. Defaults to False. 57 | * only_delimited (optional). When selecting fields, only include lines containing the field delimiter. 58 | * output_delimiter (optional). When selecting fields, join fields with this (defaults to the input delimiter). 59 | * Raises TypeError on bad parameters. 60 | * Returns the cut lines. 61 | 62 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/time.repy.md: -------------------------------------------------------------------------------- 1 | # time.repy 2 | 3 | A include file to tie the time interface together. Contains no functions. 4 | 5 | ### Includes 6 | 7 | [wiki:SeattleLib/ntp_time.repy](ntp_time.repy.md) 8 | 9 | [wiki:SeattleLib/tcp_time.repy](tcp_time.repy.md) 10 | 11 | [Back to Time](Time.md) 12 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/time_interface.repy.md: -------------------------------------------------------------------------------- 1 | # time_interface.repy 2 | 3 | Provide a framework to run any implementation of a ntp time service that follows the interface provided here. Any implementation must provide update method that takes a localport as an argument. Implementers will set a mapping to their functions by calling time_register_method. 4 | 5 | To use this module, first make a call to time_updatetime(localport), where localport is a valid UDP port that you can send and receive on (note that this port may not be used depending on the implementation). Then, to get the actual time, call time_gettime() which will return the current time (in seconds). time.repy will attempt to use the update method of any impelemntor included. If none are included or if they all fail an exception is thrown 6 | 7 | ### Functions 8 | 9 | ``` 10 | time_register_method(imp_name,update_method) 11 | ``` 12 | Allow an implementation to register its update method with time.repy. 13 | 14 | 15 | Notes: 16 | * imp_name, the name or unique abbreviation of the implementation update_method, a time update_method 17 | 18 | ``` 19 | time_updatetime(localport) 20 | ``` 21 | Obtains and stores the local time from a subset of NTP servers. Attempts to update the time with each implementation provided until one succeeds or they all fail. 22 | 23 | 24 | Notes: 25 | * localport is the local port that MAY be used when contacting the NTP server(s). Consider this port a hint and not a rule. 26 | 27 | ``` 28 | time_settime(currenttime) 29 | ``` 30 | Sets a remote time as the current time. 
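The usage section at the end of this page is empty, so here is a hedged sketch of the update-then-read pattern described in the introduction above; the UDP port is an arbitrary choice that the vessel must be allowed to send and receive on.

```
include time.repy

if callfunc == 'initialize':
  # Contact the time servers once; this stores an offset from the local clock.
  time_updatetime(63102)

  # Afterwards, the current time (in seconds) can be read at any point.
  now = time_gettime()
  print "current time is", now
```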
31 | 32 | 33 | ``` 34 | time_gettime() 35 | ``` 36 | Gives the current time in seconds by calculating how much time has elapsed since the local time was obtained from an NTP server via the time_updatetime(localport) function. 37 | 38 | 39 | Notes: 40 | * TimeError is raised when time_updatetime(localport)has not previously been called or when time_updatetime(localport) has any unresolved TimeError exceptions. 41 | 42 | ### Usage 43 | ? 44 | 45 | 46 | 47 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/uniqueid.repy.md: -------------------------------------------------------------------------------- 1 | # uniqueid.repy 2 | 3 | This is a simple module which provides an unique integer id for each function call. This exists to reduce redundancy in other libraries. 4 | 5 | This service can be utilized directly, but is used in contexts like [parallelize.repy](parallelize.repy.md). 6 | 7 | NOTE: This will give unique ids PER FILE. If you have multiple python modules that include this, they will have the potential to generate the same ID. 8 | 9 | ### Functions 10 | 11 | ``` 12 | def uniqueid_getid(): 13 | ``` 14 | Return a unique ID in a threadsafe way 15 | 16 | ### Usage 17 | 18 | ``` 19 | request_id = uniqueid_getid(); 20 | ``` 21 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/urlparse.repy.md: -------------------------------------------------------------------------------- 1 | # urlparse.repy 2 | 3 | Provides utilities for parsing URLs, based on the Python 2.6.1 module urlparse. See http://docs.python.org/library/urlparse.html for more details. This modules is used extensively in the networking section. 4 | 5 | ### Functions 6 | 7 | 8 | ``` 9 | def urlparse_urlsplit(urlstring, default_scheme="", allow_fragments=True): 10 | ``` 11 | Parse a URL into five components, returning a dictionary. This corresponds to the general structure of a URL: scheme://netloc/path;parameters?query#fragment. The parameters are not split from the URL and individual componenets are not separated. Only absolute server-based URIs are currently supported (all URLs will be parsed into the components listed, regardless of the scheme). 12 | 13 | Notes: 14 | 15 | * default_scheme. Optional: defaults to the empty string. If specified, gives the default addressing scheme, to be used only if the URL does not specify one. 16 | * allow_fragments. Optional: defaults to True. If False, fragment identifiers are not allowed, even if the URL's addressing scheme normally does support them. 17 | * Raises ValueError on parsing a non-numeric port value. 
18 | * Returns 19 | 20 | ``` 21 | A dictionary containing: 22 | 23 | Key Value Value if not present 24 | ============================================================================ 25 | scheme URL scheme specifier empty string 26 | netloc Network location part empty string 27 | path Hierarchical path empty string 28 | query Query component empty string 29 | fragment Fragment identifier empty string 30 | username User name None 31 | password Password None 32 | hostname Host name (lower case) None 33 | port Port number as integer, if present None 34 | 35 | ``` -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/vessellookup.repy.md: -------------------------------------------------------------------------------- 1 | vessellookup.repy is designed to give repy programmers an easy way to look up various resources that a vessel/VM has. These resources include but are not limited to max CPU percentage, max memory usage, allowed ports, etc. 2 | 3 | 4 | 5 | ## API 6 | All functions in this module take a single vessellocation as an argument. A vessellocation is a string that describes where a VM can be found. It is in the form of ip:port:VMname. Example: 123.45.67.89:1224:v4. The resources file is retrieved the first time a lookup is performed on a VM. The data from this initial retrieval is cached so that subsequent calls to are fast. 7 | 8 | ### lookup_cpu() 9 | Returns the CPU percentage given to the VM, as a float. 10 | 11 | ### lookup_disk() 12 | Returns the number of bytes of disk space allocated to the VM, as an int. 13 | 14 | ### lookup_memory() 15 | Returns the number of bytes of memory allocated to the VM, as an int. 16 | 17 | ### lookup_ports() 18 | Returns the list of ports that this VM is allowed to listen on, as a list of ints. 19 | 20 | ## Usage Example 21 | First, you need to generate a list of vessellocations through seash and upload this file to the VMs. 22 | ```sh 23 | username@ !> on %all show vessellocation to vessellocations.txt 24 | username@ !> on %all upload vessellocations.txt 25 | ``` 26 | 27 | And then in your program file, you can pass the nodelocations in the file to the lookup functions: 28 | ```repy 29 | vessel_comm_port = {} 30 | vessel_result_port = {} 31 | 32 | max_disk_space = 2 ** 32 # Assume 4GB, actual allowed disk space is less 33 | vessellocations = open('vessellocations.txt', 'r').read() 34 | 35 | for vessel in vessellocations: 36 | # This is the first time we contacted this vessel/VM, so the results will be 37 | # cached for future usage. 38 | vessel_comm_port[vessel] = lookup_ports(vessel)[0] 39 | # We already have this information so this takes no time at all 40 | vessel_result_port[vessel] = lookup_ports(vessel)[1] 41 | if lookup_disk(vessel) < max_disk_space: 42 | max_disk_space = lookup_disk(vessel) 43 | 44 | ``` -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/xmlparse.repy.md: -------------------------------------------------------------------------------- 1 | # xmlparse.repy 2 | 3 | Provide a relatively basic but usable XML parsing functionality for repy. 4 | 5 | # Functions 6 | 7 | ``` 8 | def xmlparse_parse(data) 9 | ``` 10 | Parses an XML string into an xmlparse_XMLTreeNode containing the root item. Returns the xmlparse_XMLTreeNode tree. 11 | 12 | Note: 13 | 14 | * data is the XML data to be parsed. 15 | * Throws an xmlparse_XMLParseError if parsing fails. 
16 | 17 | 18 | ``` 19 | class xmlparse_XMLTreeNode: 20 | ``` 21 | Provide a simple tree structure for XML data. xmlparse_parse turns XML data 22 | 23 | # Usage 24 | 25 | ``` 26 | node = xmlparse_parse("") 27 | ``` -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/xmlrpc_client.repy.md: -------------------------------------------------------------------------------- 1 | # xmlrpc_client.repy 2 | 3 | This module implements the client-side XML-RPC protocol. See http://en.wikipedia.org/wiki/XML-RPC for more details. A programmer may use this module to initiate a communication between the client (via a XML-RPC HTTP request). If the protocol is properly initiated, then the client will then return one or more values through XML, which can then by parsed by the [wiki:SeattleLib/xmlparse.repy] or manipulated at will. 4 | 5 | Note that this module includes several other XML modules in the library. 6 | 7 | ### Functions 8 | ``` 9 | def send_request(self, method_name, params, timeout=None): 10 | ``` 11 | Send a XML-RPC request to a XML-RPC server to do a RPC call. 12 | 13 | Notes: 14 | 15 | * method_name is the method name which of the caller chooses. 16 | * params are the parameters that are passed on. 17 | * Throws socket.error on socket errors, including server timeouts. 18 | * Throws xmlrpc_common_Fault on a XML-RPC response fault. 19 | * Throws xmlrpc_common_XMLParseError on a XML-RPC structural parse error 20 | * Throws xmlparse_XMLParseError on a general XML parse error. 21 | * Throws xmlrpc_common_ConnectionError on unexpected disconnects. 22 | * Throws xmlrpc_common_Timeout if the time limit is exceeded. 23 | * Will return "values" sent by the specified client. 24 | 25 | ### Example Usage 26 | 27 | ``` 28 | client = xmlrpc_client_Client("http://phpxmlrpc.sourceforge.net/server.php") 29 | print client.send_request("examples.getStateName", (1,)) 30 | ``` 31 | 32 | ### Includes 33 | [wiki:SeattleLib/urlparse.repy] 34 | 35 | [wiki:SeattleLib/httpretrieve.repy] 36 | 37 | [wiki:SeattleLib/xmlrpc_common.repy] 38 | 39 | 40 | -------------------------------------------------------------------------------- /Programming/SeattleLib_v1/xmlrpc_server.repy.md: -------------------------------------------------------------------------------- 1 | # xmlrpc_server.repy 2 | 3 | Provide a usable XML-RPC server interface for RePy code. This module aims to be as similar in behavior to the Python SimpleXMLRPCServer as is possible with RePy. This module helps to implement the client-side XML-RPC protocol. See http://en.wikipedia.org/wiki/XML-RPC and http://docs.python.org/library/simplexmlrpcserver.html for more details. 4 | 5 | A programmer may use this module to initiate a new instance server. xmlrpc_server contains all the basic functionality of a server. 6 | 7 | Note that this module uses [wiki:SeattleLib/xmlrpc_common.repy], which means programmers should really include this file. This server can be used in variety of ways, making this more useful than the common functions alone. 8 | 9 | ### Classes & Functions 10 | ``` 11 | class xmlrpc_server_SimpleXMLRPCServer: 12 | ``` 13 | Provide a simple server-side API for programs wishing to expose their functions to an XMLRPC client. 14 | 15 | Notes: 16 | 17 | * Raises ValueError if the host part of the address passed to the constructor is a hostname that resolves to more than one ip. 18 | 19 | ``` 20 | def register_function(self, function, name): 21 | ``` 22 | Register a callback function with this XMLRPC server. 
23 | 24 | Notes: 25 | 26 | * function is the function to expose to the XMLRPC client. 27 | * name is the name for this function that the client will call. 28 | 29 | ``` 30 | def serve_forever(self): 31 | ``` 32 | Makes the server start serving requests. Returns control of the thread to the calling code. 33 | 34 | ``` 35 | def shutdown(self): 36 | ``` 37 | Shuts down the server (tells the serving loop to break). 38 | 39 | ### Example Usage 40 | 41 | ``` 42 | # create a server object 43 | server = xmlrpc_server_SimpleXMLRPCServer(("localhost", 12345)) 44 | # register a function 45 | server.register_function(pow) 46 | # wait for clients to connect and call the function 47 | server.serve_forever() 48 | ``` 49 | 50 | ### Includes 51 | [wiki:SeattleLib/xmlparse.repy] 52 | 53 | [wiki:SeattleLib/xmlrpc_common.repy] 54 | 55 | [wiki:SeattleLib/urllib.repy] 56 | -------------------------------------------------------------------------------- /Programming/SecurityLayers.md: -------------------------------------------------------------------------------- 1 | # Security Layer Construction 2 | 3 | This page describes how to install Seattle so that it will use custom security layers to impose additional restrictions on API calls. 4 | 5 | ---- 6 | 7 | 8 | 9 | ---- 10 | 11 | 12 | 13 | ## Setting up Seattle 14 | ---- 15 | 16 | First pick which security layers you want to use. One of the layers (probably the last one) should be private_hideprivate_layer.repy, which prevents user programs from accessing files starting with "private_". The node manager will prevent the user from remotely using files starting with "private_", so this creates a protected namespace that can be used for the security layers. The node manager will also preserve files starting with "private_" when a node is reset. 17 | 18 | You must create a directory for all files which need to be copied to new VMs. This directory should include private_encasementlib.repy, private_wrapper.repy, the scripts for any security layers, and any other files the layers need to function. All of these files should start with "private_" so that they are not visible to the user. 19 | 20 | When installing Seattle, you must specify the --repy-prepend and --repy-prepend-dir flags. The list given to --repy-prepend will be prepended to any repy program run by the user. This should start with "private_encasementlib.repy" and be followed by a list of security layers. You should use --repy-prepend-dir to specify the directory you created for the security layer files. 21 | 22 | For example, 23 | ``` 24 | python seattleinstaller.py --repy-prepend-dir security_layers --repy-prepend "private_encasementlib.repy private_custom_layer private_hideprivate_layer.repy" 25 | ``` 26 | would install Seattle and cause it to copy all the files in the security_layers directory to new VMs and run user repy programs inside the security layers. If the user program calls part of the repy API, any file operation will first be sanitized by private_hideprivate_layer.repy to make sure the user can't access private files, and then any call overridden in private_custom_layer will be passed through that security layer.
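To make this more concrete, here is a minimal sketch of what a layer like the placeholder private_custom_layer from the example above could contain. It follows the RepyV2-style security layer convention; the CHILD_CONTEXT_DEF / secure_dispatch_module() hooks and the registration dictionary format shown here are assumptions for illustration, so check private_encasementlib.repy and private_wrapper.repy for the interface your installation actually uses.

```
# private_custom_layer.repy -- illustrative sketch only, not code shipped with Seattle.
# Assumes the RepyV2-style security layer hooks (CHILD_CONTEXT_DEF and
# secure_dispatch_module); the encasement library in your install may differ.

def secure_listfiles():
  # Hide the protected "private_" namespace from the user program.
  return [name for name in listfiles() if not name.startswith("private_")]

# Route the user program's listfiles() calls through the function above.
# Any API call that is not overridden here passes through unchanged.
CHILD_CONTEXT_DEF["listfiles"] = {"type": "func",
                                  "args": (),
                                  "exceptions": Exception,
                                  "return": list,
                                  "target": secure_listfiles}

# Build the child context and hand control to the next layer (or the user program).
secure_dispatch_module()
```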
27 | 28 | 29 | 30 | 31 | ## Writing Security Layers 32 | ---- 33 | 34 | To create a custom security layer, [browser:seattle/trunk/seattlelib/private_hideprivate_layer.repy private_hideprivate_layer.repy] is a good starting point for seeing how they are constructed, and the comments in [browser:seattle/trunk/seattlelib/private_wrapper.repy private_wrapper.repy] provide additional information on defining contexts. -------------------------------------------------------------------------------- /Scripts/README.md: -------------------------------------------------------------------------------- 1 | # `auto_grader.py` 2 | > **Automated Grading for Repy V2 Assignments** 3 | 4 | `auto_grader.py` is a Python-based tool designed to streamline the grading 5 | process for defense and attack programs written in [Repy 6 | V2](https://github.com/SeattleTestbed/repy_v2). By automatically running each 7 | attack against every defense, it produces comprehensive results in a CSV format. 8 | 9 | ## Description 10 | With `auto_grader.py`, instructors can effortlessly assess the efficacy of 11 | students' defense mechanisms in the face of potential attacks. For every attack 12 | that succeeds in compromising a defense, the resulting CSV will register a `1`; 13 | otherwise, it will display a `0`. An attack is considered successful if it 14 | generates an output or raises an error, denoting the failure of the defense 15 | layer. It also handles timeouts, ensuring that the script does not hang in the 16 | event of an infinite loop. 17 | 18 | ## Prerequisites 19 | - Python 2.7 installed on your machine. 20 | - Repy V2 environment setup. 21 | - Copy the required files (mentioned below) in the script's directory 22 | - `repy.py` 23 | - `restrictions.default` 24 | - `wrapper.r2py` 25 | - `encasementlib.r2py` 26 | 27 | ## Usage 28 | ```bash 29 | python auto_grader.py defense_folder_path attack_folder_path temp_target_folder_path 30 | ``` 31 | where: 32 | - `defense_folder_path` is the path to the folder containing defense programs. 33 | - `attack_folder_path` is the path to the folder containing attack programs. 34 | - `temp_target_folder_path` is the path to the temporary target folder. 35 | 36 | ## Naming Conventions 37 | - Attack programs should be named as: "`[studentid]_attackcase[number].r2py`". 38 | - Defense programs should start with the name 39 | "`reference_monitor_[studentid].r2py`". 40 | 41 | For example, if the student id is `abc123`, the defense program should be named 42 | as `reference_monitor_abc123.r2py` and the attack program should be named as 43 | `abc123_attackcase1.r2py`, `abc123_attackcase2.r2py`, etc. 44 | 45 | ## Output 46 | Two CSV files are generated: 47 | 1. `All_Attacks_matrix.csv`: Contains the result of every attack program against 48 | each defense. 49 | 2. `All_Students_matrix.csv`: Indicates which students successfully attacked a 50 | defense. 51 | 52 | ## Notes 53 | - Students are instructed to generate output or raise an error in their attack 54 | program only when they successfully compromise the security layer. 55 | - Ensure the correct environment, naming conventions, and directory structures 56 | are adhered to for successful script execution. 57 | 58 | ## Contributing 59 | For modifications, improvements, or any issues, please open a pull request or 60 | issue. 61 | 62 | ## Credits 63 | `auto_grader.py` is the brainchild of 64 | [@Hooshangi](https://github.com/Hooshangi). 
65 | 66 | For further details, potential contributions, or to view the code, visit 67 | [Hooshangi/Grading-script](https://github.com/Hooshangi/Grading-script). 68 | -------------------------------------------------------------------------------- /UnderstandingSeattle/AcceptableUsePolicy.md: -------------------------------------------------------------------------------- 1 | # Acceptable Use Policy 2 | 3 | For the most part, the things that you shouldn't do aren't supported by the API. You're welcome to use our code for commercial projects, etc. Please see [our license (MIT)](../LICENSE) for details. 4 | 5 | Our use policy is roughly modeled on the [PlanetLab Acceptable Use Policy](http://www.planet-lab.org/aup). The main things to know are: (1) no illegal activity, (2) don't try to hack into Seattle user machines (if you want to test a vulnerability, let us know, we have machines set aside for this), and (3) even if the system doesn't have outgoing or incoming IP / port restrictions, don't abuse this. Basically, it boils down to treating the system as though you're a network admin and it's on your network. 6 | -------------------------------------------------------------------------------- /UnderstandingSeattle/Architecture.md: -------------------------------------------------------------------------------- 1 | ## Nodemanager / Repy Architecture 2 | This diagram illustrates the multi-process/multi-thread architecture of a Seattle node executing Repy code in a sandbox. 3 |
4 | *Note: The nodemanager is shown here running in `--foreground` mode. In a default install, it runs without that option and thus daemonizes (i.e. it `forks` twice to detach from its ancestor processes and any controlling TTY).* 5 |
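The double fork mentioned in the note is the classic Unix daemonization recipe. As a rough sketch in generic Python (not the nodemanager's actual startup code):

```
import os, sys

def daemonize():
    # First fork: the parent exits, so the child is re-parented to init
    # and is guaranteed not to be a process group leader.
    if os.fork() > 0:
        sys.exit(0)

    # Start a new session, detaching from the controlling TTY.
    os.setsid()

    # Second fork: the session leader exits, so the remaining process
    # can never reacquire a controlling terminal.
    if os.fork() > 0:
        sys.exit(0)

    # From here on, the process runs fully detached in the background.
```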


6 | 7 | ![Multi-process/-thread Architecture of Nodemanager and Repy sandbox](https://github.com/lukpueh/docs/raw/multi-process-thread-arch/ATTACHMENTS/Architecture/nm_repy_arch.png) 8 | -------------------------------------------------------------------------------- /UnderstandingSeattle/CodeSafety.md: -------------------------------------------------------------------------------- 1 | # Code Safety 2 | 3 | Maintaining a safe environment for execution of untrusted Repy code is fundamental to Seattle. 4 | Seattle securely isolates Repy programs in multiple ways, including performing analysis of the 5 | program's abstract syntax tree, maintaining a small and secure API through which programs can 6 | interact with the outside world, and implementing strong isolation between untrusted code and the 7 | rest of the execution environment. 8 | 9 | ## Program Analysis 10 | 11 | When Seattle runs any Repy program, the program is first parsed and its syntax tree analyzed. 12 | As Repy is largely a subset of Python, we leverage the Python interpreter but disallow anything in 13 | Python that could allow the program direct access to system resources. We therefore look through 14 | the program's syntax tree to ensure that only the Python functionality we have specifically allowed exists 15 | in the program. If Seattle encounters anything forbidden, it refuses to run the program. 16 | 17 | ## Secure API 18 | 19 | As we've stripped away any other method by which the program can access resources such as the 20 | computer's hard drive and networking, in order to make Repy a useful programming environment we 21 | need to add a few things back in. What we add in, however, is a clean and minimal API for using 22 | a restricted subset of the computer's resources. As all access to these resources goes through 23 | our API, we can ensure that only allowed resources are accessed as well as limit the frequency 24 | and quantity of any resource that a program uses. 25 | 26 | ## Code Isolation 27 | 28 | As our safe API runs in the same Python interpreter as the untrusted program code, we must ensure that 29 | the untrusted code has no way to manipulate the trusted code or access privileged functionality 30 | that only our own trusted API code should be able to access. We do this in multiple ways. First, 31 | we leverage Python's ability to execute code in a separate context from the rest of the program. 32 | By doing this, we have complete isolation but a program that can't access our trusted API at all. 33 | From there, we then provide the untrusted program access to our API functions. 34 | 35 | In fact, we also take extra precautions at the point where we provide access to our API functions. 36 | Instead of providing direct access, we provide access to function wrappers that, when called, 37 | carefully check the arguments passed by the program as well as ensure that data returned from the function 38 | is the correct type of data (to protect against, for example, leaking object references from trusted 39 | to untrusted contexts). These wrapper functions also further wrap any object instances returned 40 | by those functions, such as objects that represent files, sockets, and locks. This ensures that methods 41 | called on those objects have the same namespace protection as the API functions themselves.
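To illustrate the wrapper idea in the last paragraph, here is a generic sketch of a type-checking function wrapper. This is illustrative Python only, not Seattle's actual namespace code; the helper and function names below are made up for the example.

```
# Illustrative only: a generic type-checking wrapper in the spirit of the
# namespace protection described above.

def make_checked_wrapper(func, arg_types, return_type):
  # Return a wrapper that validates argument and return types for func.
  def wrapper(*args):
    if len(args) != len(arg_types):
      raise TypeError("expected %d arguments, got %d" % (len(arg_types), len(args)))
    for value, expected in zip(args, arg_types):
      if not isinstance(value, expected):
        raise TypeError("argument %r is not of type %s" % (value, expected.__name__))
    result = func(*args)
    # Refuse to hand unexpected object types back to untrusted code, which
    # guards against leaking trusted object references across the boundary.
    if not isinstance(result, return_type):
      raise TypeError("internal error: unexpected return type")
    return result
  return wrapper

# Example: expose a checked version of a hypothetical trusted API call.
def _trusted_getruntime():
  return 12.5

safe_getruntime = make_checked_wrapper(_trusted_getruntime, (), float)
```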
-------------------------------------------------------------------------------- /UnderstandingSeattle/DemoVideo.md: -------------------------------------------------------------------------------- 1 | # Demo Video 2 | This five-minute demo video should help get you acquainted with the [wiki:WikiStart Seattle project]. If you are unable to watch the video below, you may [download the video](https://seattle.poly.edu/static/demo.mov) to your computer. 3 | *Embedded video: Seattle Demo Video (see the download link above).* -------------------------------------------------------------------------------- /UnderstandingSeattle/Privacypolicy.md: -------------------------------------------------------------------------------- 1 | # Seattle Privacy Policy 2 | 3 | 4 | This privacy policy sets out how Seattle uses your private information. The Seattle program does not view your private information from other applications in any way. Since the Seattle programs run in the background (without your interaction), you do not provide private information to Seattle. 5 | 6 | Your computer or smartphone will periodically transmit its current IP address to a publicly accessible server to allow access by students, researchers, and developers. If you install an application that can act as a [wiki:UsingSensors Seattle Sensor], it will have a privacy policy you should also view. 7 | --------------------------------------------------------------------------------