galbotrix Posted August 26, 2011

Hi guys, I hope to get your valuable input on this pet project of mine; please feel free to mention your ideas, suggestions and recommendations. This is a personal project without any academic supervision, so I am definitely looking for some guidance from your experience.

I've collected a huge number of memory traces, almost 10 GB of data. These traces were gathered from a set of servers, desktops and laptops in a university CS department. Each trace file contains a list of hashes representing the contents of the machine's memory, along with some metadata about the running processes and OS type. The traces have been grouped by type and date. Traces were recorded approximately every 30 minutes, although no traces were acquired while machines were turned off or away from an internet connection for long periods.

Each trace file is split into two portions. The top segment is ASCII text containing the system metadata: the operating system type and a list of running processes. This is followed by binary data containing the list of hashes generated for each page in the system. Hashes are stored as consecutive 32-bit values.

There is a simple tool called "traceReader" for extracting the hashes from a trace file. It takes the file to be parsed as an argument and outputs the hash list as a series of integer values. If you would like to compare two traces to estimate the amount of sharing between them, you could run:

./traceReader trace-x.dat > trace-all
./traceReader trace-y.dat >> trace-all
cat trace-all | sort | uniq -c

This will tell you the number of times each hash occurs in the system.

Now my idea is to take the trace for every interval (every 30 minutes) for each of the systems and find the frequency of each memory hash.
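As a minimal sketch of that first step, assuming traceReader prints one integer hash per line as described above (the helper names here are my own, not part of any existing toolchain):

```python
from collections import Counter
import subprocess

def read_hashes(trace_file, reader="./traceReader"):
    """Run traceReader on one trace file and return its page hashes as ints."""
    out = subprocess.run([reader, trace_file], capture_output=True,
                         text=True, check=True)
    return [int(tok) for tok in out.stdout.split()]

def hash_frequencies(hash_lists):
    """Count how often each hash occurs across one or more 30-minute traces."""
    counts = Counter()
    for hashes in hash_lists:
        counts.update(hashes)
    return counts
```

`hash_frequencies(read_hashes(f) for f in files).most_common()` then gives the per-interval frequency table directly, mirroring the `sort | uniq -c` pipeline.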
I then plan to collect the highest frequencies (the hashes occurring most often) over the entire hour (60 minutes) and then divide the memory into 'k' different patterns based on the counts of these frequencies. For instance, if hashes 14F430C8, 1550068, 15AD480A, 161384B6, 16985213, 17CA274B, 18E5F038 and 1A3329 have the highest frequencies, then I might divide the memory into 8 patterns (k=8).

I plan to use the Approximate Nearest Neighbor (ANN) library http://www.cs.umd.edu/~mount/ANN/ for this division. In ANN one needs to provide a set of query points, data points and dimensions. I guess in my case the query points can be all the remaining hashes other than the highest-frequency ones, the data points are all the hashes for the hour, and the dimension can be 1.

I can thus formulate the memory patterns for every hour. I then plan to formulate memory patterns for every 3 hrs, 6 hrs, 12 hrs and finally all 24 hrs. Armed with these statistics, I plan to compare the patterns based on the time of day. I hope to find a certain overlap between the patterns and create what I call "heat zones" for memory based on the time of day, and finally come up with a suitable report.

The overall objective of this project is to establish a relation between memory page access and the time of day, so that for specific intervals there are certain memory "heat zones". I understand that these "heat zones" might change and may not be consistent across every system and user. The study here only intends to establish this relationship and doesn't do any kind of qualitative or quantitative analysis of the heat zones per system and user; that can be considered an extension of this work. Please feel free to comment and suggest any new insights.
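A sketch of the windowing and top-k steps described above, assuming one Counter of hash frequencies per hour of the day (the function names are hypothetical, not from any existing tool):

```python
from collections import Counter

def window_counts(hourly_counts, window):
    """Merge consecutive hourly Counters into `window`-hour Counters.

    hourly_counts: list of Counter objects, one per hour of the day.
    Returns one merged Counter per non-overlapping window (e.g. 3, 6, 12, 24 h).
    """
    merged = []
    for start in range(0, len(hourly_counts), window):
        c = Counter()
        for hour in hourly_counts[start:start + window]:
            c.update(hour)
        merged.append(c)
    return merged

def top_hashes(counts, k=8):
    """The k most frequent hashes in a window -- the candidate pattern seeds."""
    return [h for h, _ in counts.most_common(k)]
```

Running `top_hashes` on each window produced by `window_counts` would give the per-hour, per-3-hour, etc. pattern seeds to compare across times of day.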
khaled Posted August 28, 2011

To get off to a good start, you should prepare the following documents:
- UML diagrams (class, object, state-chart, flow diagram, ...)
- Component-based diagram
- Structural description, using the "script" style
- All routines recorded as algorithms (numbered steps, flow-chart, pseudo-code, ...)

Good luck,
Cap'n Refsmmat Posted August 28, 2011

Do the hashes represent the contents of the memory or simply the addresses of active pages? Does the system operate at a level where it would be affected by address space layout randomization (which most modern operating systems perform)?

I also don't understand why you need an approximate nearest-neighbor algorithm when your data is one-dimensional.
galbotrix (Author) Posted August 29, 2011

The hashes represent the contents of all memory at the time the trace was recorded. IMO, address space randomization may affect virtual-memory-based recordings, but in my case the traces are dumps of the actual physical memory, so I guess it is not directly affected. I will still verify this, as the traces are read by the "traceReader" tool.

For one-dimensional collections of data points, can anyone suggest a better pattern-classifying algorithm? As mentioned earlier, the points are simply collections of all the hashes during an interval, and I need to divide these into groups based on the top frequencies of that particular interval set, say 1 hr. Is there any other way to do this classification? I would highly appreciate your input.
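In one dimension, exact nearest-neighbour assignment reduces to a binary search over the sorted centres, so an approximate-NN library isn't strictly needed. A sketch under that assumption (function names are mine):

```python
import bisect

def assign_pattern(value, centers):
    """Return the index of the centre nearest to `value` (exact 1-D NN).

    `centers` must be sorted ascending; binary search replaces the ANN
    library, since nearest neighbour is trivial in one dimension.
    """
    i = bisect.bisect_left(centers, value)
    if i == 0:
        return 0
    if i == len(centers):
        return len(centers) - 1
    # choose the closer of the two surrounding centres (ties go lower)
    return i if centers[i] - value < value - centers[i - 1] else i - 1
```

Sorting the top-k hashes once and then calling `assign_pattern` per remaining hash classifies n hashes in O(n log k), with no approximation error at all.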
galbotrix (Author) Posted August 30, 2011

As pointed out, there are two aspects to this project:

1. Find out which processes run most frequently during a particular time interval on different systems (this may be the easier option).
2. Go deeper into the physical memory (PM) trace and find the relationship between PM addresses and the most frequent accesses per universal time clock per system.

I understand that with address-space-randomized mappings and with different systems running different processes, it might be very hard to find any suitable pattern emerging from this study. But identical systems belonging to a particular network might end up accessing similar PM blocks during the same time frame (a block here being a group of pages). I intend to find out whether there is any kind of correlation between the time frame and the accesses.

According to the working set model, memory page accesses exhibit temporal and spatial locality, and hence we end up using the appropriate page replacement algorithms. I now intend to see whether this same analogy can be applied to the entire memory address space: that is, whether some sort of pattern emerges for physical memory access based on time and space. I would like to know whether any similar work has been done before with memory traces, or whether there are other areas I need to look into before I can begin this study.
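One simple way to quantify the cross-system overlap described above is a Jaccard-style measure over the hash sets recorded in the same time slot. This is only an illustrative metric I am assuming, not something from the trace toolchain:

```python
def shared_fraction(hash_sets):
    """Fraction of page hashes common to all systems in one time slot.

    hash_sets: one set of page hashes per system, all from the same
    hour of day. Returns |intersection| / |union|, a crude indicator
    of candidate "heat zones" shared across machines.
    """
    if not hash_sets:
        return 0.0
    common = set.intersection(*hash_sets)
    union = set.union(*hash_sets)
    return len(common) / len(union) if union else 0.0
```

Plotting this fraction per hour of day, per group of identical systems, would show directly whether overlap peaks at particular times.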
khaled Posted August 31, 2011

In any system analysis project, there are two main parts: observation and analysis. So there is the code that does the analysis on the data you have, and there is the code that fetches the needed data from potential sources. I didn't quite understand what the work is, but good luck.
galbotrix (Author) Posted September 2, 2011

I am trying to research past work done on memory-vs-time pattern analysis. I've compiled a collection of papers on memory pattern analysis, but most of them talk in terms of page sharing. I'm sharing the link and inviting you all to join this group: http://www.mendeley....lysis-research/ (please send a PM with your email ID to join). I hope to find something useful in there. I would also highly appreciate it if any of you could point out useful work, especially on memory-vs-time pattern analysis. I want to know whether any specific work has been done on memory scanning/mapping in terms of actual real time.
hitesh.inception Posted April 17, 2013

How can we trace memory activities in a trace file?