ETHERNET SIMULATION PROJECT FAQ

You should treat this as a project, not a homework assignment. This means that you are expected to dig down somewhat: understand what is going on, read a few papers, look through manuals, and experiment with several things to improve your understanding. You will not be told EXACTLY what to do. Below are answers to some of the questions we received.

-------------------------------------------------------

Q. What does 'h' really mean in the trace?

A. 'h' stands for a routing hop. The packet is generated at the node at this 'h' event. There is a link connecting the node to the Ethernet interface with a delay of 0.1 ms (see the tcl file and consult the manual if necessary). After this 0.1 ms, the packet is enqueued ('+' event) at the interface buffer. Then it is dequeued ('-' event) and finally received ('r' event). Of course, the packet could instead be dropped ('d' event) right after the enqueue. The 'h' event and the 0.1 ms link are not relevant to our project.

-------------------------------------------------------

Q. Why does a node that should not be there (node 8, if you run etherlan.tcl unmodified) keep appearing?

A. Node 8 is a sort of virtual node modeling the Ethernet itself: it is as if you send the packet to the network and receive it from the network. If there is no receive event after a dequeue, the packet was dropped in the Ethernet. This can happen (rarely) when the maximum number of retransmissions is exceeded, which occurs only when the load is very high.

-------------------------------------------------------

Q. Is the delay between "-" and "r" the propagation delay plus the transmission delay, or only the propagation delay?

A. The delay between "-" and "r" is the propagation delay plus the transmission delay, plus all protocol-related delays, i.e., backoff. "-" is when the first bit of the packet is first transmitted; "r" is when the last bit of the packet is finally received. In between, the packet may have been (partially) transmitted multiple times due to collisions and may have undergone backoffs. These aspects are not recorded in the trace. (You will need to write ns2 code to record them, if you are so inclined.)

-------------------------------------------------------

Q. I ran the same etherlan.tcl twice and found that the two output trace files are exactly the same! I am confused because, according to CSMA/CD, when a node senses the channel busy it backs off for a random period of time. In our case, it puts the frame into the FIFO buffer and, when the buffer is full, drops the packet. Since the frames have random backoff times, shouldn't there be at least some differences in the event times?

A. To understand why this happens, you need to know how random numbers are generated in computers. They are actually pseudo-random and start from a seed. Given the same seed, the generator produces the same sequence of random-looking numbers. By default ns2 uses the same seed every run, which is useful for debugging. If you use the following two commands before the simulator object is created, the seed will be set from the current clock, and you will then get different results in each run:

global defaultRNG
$defaultRNG seed 0

For the project, I think a single long run for each data point is fine, long enough to get stable statistics. We discussed this in class.
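For concreteness, here is a minimal sketch of where these lines might go in a script like etherlan.tcl (only the two seeding commands come from this FAQ; the rest is generic ns2 boilerplate, not code from the project files):

global defaultRNG
$defaultRNG seed 0         ;# seed 0 means: seed heuristically from the clock
set ns [new Simulator]     ;# create the simulator only AFTER seeding
# ... rest of the script (nodes, LAN, agents, cbr sources) unchanged ...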
-------------------------------------------------------

Q. Our group has trouble understanding what kind of analysis you want us to do in Part 2 of the project.

A. Huh! I think there are enough pointers to this in the project description.

-------------------------------------------------------

Q. Packets larger than 1000 bytes are being split.

A. By default, at least in some versions of ns, the UDP agent has an implicit maximum packet size of 1000 bytes (i.e., Agent/UDP set packetSize_ 1000). This causes packets larger than that size to be split to fit the constraint. To fix this, simply add the following line to the script file:

Agent/UDP set packetSize_ 2000

-------------------------------------------------------

Q. Are we expected to change the tcl file?

A. Of course. I don't think you will be able to do the project without at least changing various parameters.

-------------------------------------------------------

Q. According to the project description, the unit of aggregate throughput and aggregate offered load is Mb/sec. Does Mb mean megabits or megabytes? Or should we consider packets per second?

A. I think it is universally accepted that 'b' means bits and 'B' means bytes. I would rather stick to Mb/s and not use packets/sec, as you will vary packet sizes in some cases. You could also follow the referenced papers to see what they did.

-------------------------------------------------------

Q. In order to find the aggregate throughput we need the total time. Should we take this to be the finish time (when the finish procedure is called), the time when "cbr stop" is called, or the time when the last packet was received?

A. This is a good question, as I needed to think a little about it, but it is nothing you cannot figure out yourself. Packet transmissions stop when the cbr source stops. The last packet will be received a little after that. The simulator is intentionally stopped at a much later time because, when writing the tcl code, you don't really know when the last packet will be received and you don't want to stop the simulator before then. As long as you run the simulation long enough, you can use either the cbr stop time or the last-packet time; the difference is so small relative to the run time that it won't matter.
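If you want to compute the throughput directly from the trace, here is a standalone Tcl sketch that sums the bytes of all receive events and divides by the span of receive times. The field positions, and the assumption that each delivered packet produces exactly one "r" line, are guesses about the trace format; check them against your own trace (and filter on the destination field if your trace also logs intermediate receive events) before trusting the numbers.

# throughput.tcl -- aggregate throughput (Mb/s) from an ns2 trace.
# Usage: tclsh throughput.tcl out.tr
set f [open [lindex $argv 0] r]
set bytes 0
set tfirst -1.0
set tlast 0.0
while {[gets $f line] >= 0} {
    set fld [split $line " "]
    # assumed fields: 0 event, 1 time, 2 from, 3 to, 4 type, 5 size
    if {[lindex $fld 0] ne "r"} { continue }
    set t [lindex $fld 1]
    if {$tfirst < 0} { set tfirst $t }
    set tlast $t
    incr bytes [lindex $fld 5]
}
close $f
# bits received divided by the active interval, in Mb/s
puts "aggregate throughput: [expr {$bytes * 8.0 / (($tlast - $tfirst) * 1e6)}] Mb/s"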
-------------------------------------------------------

Q. In order to compute the 'buffer size vs. drop probability' graph, are there any limits on the buffer size? In the sample script it is set to 10. Is it all right to take it from 5 to 25 in steps of 1?

A. Frankly, I don't know. When X changes because of Y, it is best to vary Y over a wide enough range that a significant variation of X, also over a wide range, is observed. Step size matters only for making the graph look smooth. This is a general guideline for any such experiment.

-------------------------------------------------------

Q. How do I change the host reset time?

A. Use the default. I "think" (I haven't looked into the code) it is 0 in the ns2 implementation.

-------------------------------------------------------

Q. I am trying to understand the concept of run length, which is used to reflect the unfairness of Ethernet. I think it is the number of consecutive packets sent from the same source. For example, if the destination receives a set of packets with source addresses 5 5 5 3 3 1 1 2 5 5, the run lengths are 3, 2, 2, 1, 2, and the average is (3+2+2+1+2)/5. However, when I applied this method to the trace file, the average was about 3 and the maximum was 6, which does not indicate significant unfairness. Yet the acquisition-probability results on the same trace file do reflect the unfairness. Is my method of calculating the run length correct?

A. The technique seems fine (a sketch of the computation appears at the end of this FAQ). I cannot immediately tell why presenting unfairness one way (based on the MRU stack) looks different from presenting it another way (using run lengths) based on the SAME data. If there is a lot of probability mass at MRU stack position 1, run lengths should be long. However, more probability mass at MRU stack positions 2 or 3 may not show up in the run lengths, even though it still indicates unfairness. You need to look into your data and analyze it carefully to make sure you understand what is really happening.

-------------------------------------------------------

Q. For calculating the acquisition probability vs. MRU stack graph (and also the run lengths), we need the TRANSMISSION sequence of the packets on the wire, which is different from the dequeue events. The trace file that gets generated only has the dequeue events, not the transmission events. How do we get those into the trace file?

A. The trace file has receive events. That should be all you need.
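To make the run-length computation concrete, here is a standalone Tcl sketch that walks the receive events of a trace. As in the throughput sketch above, the field positions are assumptions (field 8 is taken to be the source address in "node.port" form); verify them against your trace. The same per-packet source sequence is what you would feed into an MRU-stack analysis.

# runlen.tcl -- run lengths of consecutive packets from the same source.
# Usage: tclsh runlen.tcl out.tr
set f [open [lindex $argv 0] r]
set prev ""
set run 0
set runs {}
while {[gets $f line] >= 0} {
    set fld [split $line " "]
    if {[lindex $fld 0] ne "r"} { continue }
    # assumed field 8 = source address "node.port"; keep the node part
    set src [lindex [split [lindex $fld 8] "."] 0]
    if {$src eq $prev} {
        incr run
    } else {
        if {$run > 0} { lappend runs $run }
        set prev $src
        set run 1
    }
}
if {$run > 0} { lappend runs $run }
close $f
set sum 0
foreach r $runs { incr sum $r }
puts "runs: [llength $runs]  mean run length: [expr {double($sum) / [llength $runs]}]"

-------------------------------------------------------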