Assignment Chef – Assignment catalog

[SOLVED] CS3650 Project 1 – Shell

Starter code: See the GitHub Classroom link.

Submission: This is a pair assignment, but you may work alone if you so choose. Submit the contents of your repository via Gradescope. See Deliverables below for what to submit. If you are working with a partner, do not forget to include their name with the submission.

Team registration: Use the Google form accessible via this project's page to register your partner or to indicate that you want to work alone. If you are registering a partner, only one of the two people in the team needs to register. Teams can have at most 2 people.

Do not use login.khoury.northeastern.edu to work on this assignment. Use your XOA VM or a local Linux environment, if you have one.

The first project of this class is to write a command-line shell. This project is more involved than the previous assignments and requires more planning as well as programming. You are asked to develop a moderately complex piece of C code from scratch. Start early with planning, experimenting, and prototyping.

Part 1: Tokenizer & Basic Shell

The first part of this assignment is to get a basic working shell. For this, we will first need to develop a tokenizer, that is, a piece of code that helps us split an input line into meaningful tokens. The second task is to develop a basic shell that uses this tokenizer to process input from the user.

Task 1.1: Shell Tokenizer

Before we can execute commands (or combinations of them), we need to be able to process a command line and split it into chunks (lexical units) called tokens. The input of a tokenizer is a string and the output is a list of tokens. Our shell will use the tokens described in the table below. The tokens (, ), <, >, ;, |, and the whitespace characters (space ' ', tab '\t') are special.
Whitespace is not a token, but it might separate tokens.

Token(s) – Description / Meaning:
• ( ) – Parentheses allow grouping shell expressions
• < – Input redirection
• > – Output redirection
• ; – Sequencing
• | – Pipe
• "hello < (world;" – Quotes suspend the meaning of special characters (spaces, parentheses, ...)
• ls – Word (a sequence of non-special characters)

Your first task is to write a function that takes a string (i.e., char * in C) as an argument and returns a list (array, linked list, etc.) of tokens. The maximum input string length can be explicitly bounded, but needs to be at least 255 characters.

You also need to provide a demo driver, tokenize.c, that showcases your function. The driver should read a single line from standard input and print out all the tokens in the line, one token per line. For example:

```
$ echo 'this < is > a demo "This is a sentence" ; "some ( special > chars"' | ./tokenize
this
<
is
>
a
demo
This is a sentence
;
some ( special > chars
```

In this example, we print the example string to standard output, but immediately pipe that output into the input of the tokenize program. We will implement piping in our own shell in the next assignment. Whitespace that is not in quotes should not be included in any token.

To help you get started writing a tokenizer, see the example included with the starter code. What we are implementing here is a simple lexer, a recognizer for a regular language. While not necessary to complete this assignment, you might want to read up on those topics to get a deeper understanding if you are interested.

Task 1.2: Basic Shell

Example interaction:

```
$ ./shell
Welcome to mini-shell.
shell $ whoami
ferd
shell $ ls -aF
./   .git/     shell*   shell.o   tokens.h  vect.c  vect.o
../  Makefile  shell.c  tokens.c  tokens.o  vect.h
shell $ echo this should be printed
this should be printed
shell $ echo this is; echo a new line
this is
a new line
shell $ exit
Bye bye.
```

Here are the requirements for the basic shell:

1.
After starting, the shell should print a welcome message: Welcome to mini-shell.
2. You must print the prompt shell $ in front of each command line that is entered.
3. The maximum size of a single line shall be at least 255 characters. Specify this number as a (global) constant.
4. Each command can have 0 or more arguments.
5. Any string enclosed in double quotes (") shall be treated as a single argument, regardless of whether it contains spaces or special characters.
6. When you launch a new child process from your shell, the child process should run in the foreground by default until it completes. Then the prompt should be printed again and the shell should wait for the next line of input.
7. If the user enters the command exit, the shell should print Bye bye. and exit.
8. If the user presses Ctrl-D (end-of-file), the shell should exit in the same manner as above.
9. If a command is not found, your shell should print an error message, [command]: command not found (replacing "[command]" with the actual command name), and resume execution. For example:

```
shell $ dfg
dfg: command not found
shell $
```

10. System commands should not need a full path specification to run in the shell. For example, issuing ls should work the same way it works in BASH and run the ls executable that might be stored in /bin, /usr/bin, or elsewhere in the system path.

Part 2: Advanced Shell Features

Part 2 expands on the basic shell from Part 1. You are asked to implement 4 built-in commands, as well as the following operators:

• Sequencing, e.g., echo one; echo two
• Input redirection, e.g., sort < foo.txt
• Output redirection, e.g., sort foo.txt > output.txt
• Pipes, e.g., sort foo.txt | uniq

Note that these operators can be combined. Follow the implementation strategy suggested below; it will give you the relative priorities of the operators.

Task 2.1: Built-in Commands

In addition to running programs, shells usually also provide a variety of built-in commands.
Let's implement some. The shell should support at least the following built-in commands, in addition to exit from Part 1:

• cd (change directory) – Change the current working directory of the shell to the directory specified as the argument. Tip: you can check what the current working directory is using the pwd command (not a built-in).
• source – Execute a script. Takes a filename as an argument and processes each line of the file as a command, including built-ins. In other words, each line should be processed as if it was entered by the user at the prompt.
• prev – Prints the previous command line and executes it again, without itself becoming the new previous command line. You do not have to support combining prev with other commands on a command line.
• help – Explains all the built-in commands available in your shell.

Task 2.2: Sequencing Using ;

The behavior of ; is to execute the command on the left-hand side of the operator, and once it completes, execute the command on the right-hand side. For example:

```
shell $ echo Boston; echo San Francisco; echo Dallas
Boston
San Francisco
Dallas
shell $ dfg; uptime
dfg: command not found
20:04:40 up 44 days, 6:14, 60 users, load average: 2.05, 1.93, 1.70
shell $
```

Task 2.3: Input Redirection

Task 2.5: Pipe |

The pipe operator | runs the command on the left-hand side and the command on the right-hand side simultaneously, and the standard output of the LHS command is redirected to the standard input of the RHS command. You do not have to support piping the output of built-ins.

Deliverables

Parts 1 and 2: Implement the shell in shell.c. Include any .c and .h files your implementation depends on and commit everything to your repository. Do not include any executables, .o files, or other binary, temporary, or hidden files, or any extra directories. All the functionality needs to be implemented by you, using system calls. Writing code that relies on the default shell in any form does not fulfill the requirements.
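As a rough sketch of how built-ins might be dispatched before falling back to fork-and-exec, here is a minimal example (written in C++ for compactness; the function name run_builtin is illustrative and not part of the starter code, and cd with no argument is left unhandled):

```cpp
#include <unistd.h>   // chdir
#include <cstdio>
#include <string>
#include <vector>

// Try to handle a tokenized command line as a built-in.
// Returns true if the command was a built-in (and was handled here).
bool run_builtin(const std::vector<std::string>& args) {
    if (args.empty()) return false;
    if (args[0] == "cd") {
        // A real shell would fall back to $HOME when no argument is given.
        if (args.size() > 1 && chdir(args[1].c_str()) != 0) {
            perror("cd");
        }
        return true;
    }
    if (args[0] == "help") {
        printf("Built-ins: cd, source, prev, help, exit\n");
        return true;
    }
    return false;  // not a built-in: the caller should fork & exec instead
}
```

Note that chdir must run in the shell process itself, not in a forked child: a directory change made in a child would be lost as soon as the child exits.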
The Grammar of Shell

A grammar for a language specifies all the valid expressions (or sentences) in that language. Our shell has the following grammar. This should help you decide what a valid command line is, but also help you structure your code. If you took Fundies, it might help to think of a grammar as a collection of related (inductive) union definitions.

Shell Implementation Strategy

Here's a set of "rough and ready" guidelines for tackling the extra shell functionality. Note that each subcommand might contain other operators as well. You might want to implement sequencing or redirection first.

1. Sequencing: command1; command2
   a) Split the token list on the semicolon.
   b) Fork child A & execute command1 (recursively).
   c) In parent: wait for child A to finish.
   d) Fork child B & execute command2 (recursively).
   e) In parent: wait for child B to finish.
2. Pipe: command1 | command2
   a) Fork child A.
   b) In child A: create a pipe.
   c) In child A: fork child B.
   d) In child B: hook the pipe to stdout, close the other side.
   e) In child B: execute command1.
   f) In child A: hook the pipe to stdin, close the other side.
   g) In child A: execute command2.
   h) In child A: wait for child B.
   i) In parent: wait for child A.
3. Redirection: command > file (or command < file)
   a) Fork a child.
   b) In child: replace the appropriate file descriptor to accomplish the redirect.
   c) In child: execute command (recursively).
   d) In parent: wait for the child to finish.

Examples

Here are some examples you can use to test the shell functionality.

• The line echo one; echo two should print
  one
  two
• Running echo -e "1\n2\n3\n4\n5" > numbers.txt; cat numbers.txt should print
  1
  2
  3
  4
  5
  and result in a file called numbers.txt being created in the current directory.
• Running sort -nr < numbers.txt after the above should print
  5
  4
  3
  2
  1
• Running shuf -i 1-10 | sort -n | tail -5 should print
  6
  7
  8
  9
  10

Going Further

You might consider some of the following optional features in your shell to challenge yourself (there is no extra credit for this):

1. Switching processes between foreground and background (fg and bg commands).
2. Grouping command expressions, e.g.:
   ( cat prologue.txt ; ( cat names.txt | sort ) ; cat epilogue.txt ) | nl

Using the Provided Makefile

As before, we provide you with a Makefile for convenience. It contains the following targets:

• make all – compile everything
• make tokenize – compile the tokenizer demo
• make tokenize-tests – run a few tests against the tokenizer
• make shell – compile the shell
• make shell-tests – run a few tests against the shell
• make test – compile and run all the tests
• make clean – perform a minimal clean-up of the source tree

Hints & Tips

• The starter code contains an example of a tokenizer. A good start is to try to modify the example to recognize the tokens of a shell.
• A very basic tokenizer can also be written using the function strtok, which provides a somewhat different approach. However, trying to handle quoted string tokens using this approach might prove tricky.
• Use the function fgets or getline to get a line from stdin. Pay attention to the maximum number of characters you are able to read. Avoid gets.
• Figure out how fgets/getline lets you know when the shell receives an end-of-file.
• Use the provided unit tests as a minimum sanity check for your implementation, especially before the autograder becomes available.
• Follow good coding practices. Make sure your function prototypes (signatures) are correct and always provide purpose statements. Add comments where appropriate to document your thinking, although strive to write self-documenting code. Pick meaningful names for your functions and variables. The larger the scope of a variable, the more expressive its name should be.
• Think about and design your program in a top-down manner and split the code into short functions. Leverage your knowledge of program design from previous classes.
• Avoid producing overly long functions. A multi-branch if-else or a multi-case switch should be the only reason to go beyond 30-40 lines per function. Even so, the body of each branch/case should be at most 3-5 lines long.
• Use valgrind with --leak-check=full to check that you are managing memory properly.
• A string vector implementation might be useful.
• Avoid printing extra lines (empty or non-empty) beyond what is required above. This goes both for the tokenizer and the shell. Extra output will most likely confuse our tests and give false negatives.
• man is your friend. Check out fork, open, close, read, write, dup, pipe, exec, ...
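To illustrate the kind of tokenizer the hints above describe, here is a minimal sketch (written in C++ for brevity; the assignment asks for C, the function name is illustrative, and a real solution must also report errors such as unterminated quotes):

```cpp
#include <string>
#include <vector>

// Split a command line into shell tokens: special characters become
// one-character tokens, double-quoted spans become a single token, and
// runs of other non-whitespace characters become word tokens.
std::vector<std::string> tokenize(const std::string& line) {
    const std::string specials = "()<>;|";
    std::vector<std::string> tokens;
    size_t i = 0;
    while (i < line.size()) {
        char c = line[i];
        if (c == ' ' || c == '\t') {              // whitespace separates tokens
            ++i;
        } else if (specials.find(c) != std::string::npos) {
            tokens.push_back(std::string(1, c));  // special char: its own token
            ++i;
        } else if (c == '"') {                    // quoted token: specials suspended
            size_t end = line.find('"', i + 1);
            if (end == std::string::npos) end = line.size();
            tokens.push_back(line.substr(i + 1, end - i - 1));
            i = (end < line.size()) ? end + 1 : end;
        } else {                                  // word token
            size_t start = i;
            while (i < line.size() && line[i] != ' ' && line[i] != '\t' &&
                   line[i] != '"' && specials.find(line[i]) == std::string::npos)
                ++i;
            tokens.push_back(line.substr(start, i - start));
        }
    }
    return tokens;
}
```

For example, tokenize("echo hi; \"a < b\" | wc") yields the six tokens echo, hi, ;, a < b, |, wc, with the < inside the quotes left untouched.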


[SOLVED] CS334/ECE30834 Assignment #4 – Procedural-it! L-Systems and Basic Procedural Modeling

Summary: The objective of this assignment is to implement a mini procedural tree modeler using L-Systems. Your assignment is to read in a simple text file containing the rotation angle A (in degrees), the number of iterations N, the axiom S (i.e., the starting symbol), and rule(s) R. Then, create a string representation of a tree T by successively applying rules to the axiom for N iterations. Finally, convert the string representation into a set of 2D line segments following "turtle drawing" logic to produce the geometry of the tree.

Specifics:

1. Start with the templates from the course website. The templates support Windows and Linux environments. Compile and run the templates. Then, make your changes below.

2. Input Parsing (30%). The program will read files as in the following example, tree1.txt:

```
25.7
6
f
f : f[+f]f[-f]f
```

The first number is the rotation angle A, the second number is the number of iterations N, the third line is the axiom S, and all of the following lines are the set of rules R. This example only has one rule, but there can be more than one (but fewer than 10). Each rule must have a : symbol. The character before the : is the predecessor, and the string after the : is the successor. The predecessor must be a single character; the successor can be of any length but must be on a single line of text. Whitespace should be ignored.

Implement the LSystem::parse() function in lsystem.cpp, and store the relevant fields according to the comments in that function. The rules are stored in a std::map, a C++ associative container that here maps characters (the predecessors) to strings (the successors). Refer to the C++ standard documentation to see how to use it.

3. Rule Application (30%). Starting from the axiom, rules are applied to each character in the string to produce a new string. Then, on the next iteration, the rules are applied to the resulting string from the last iteration, producing another string.
You will implement applying these rules: given an input string, return a string with the rules applied. For each character in the input string, you must check if there is a rule that would replace it. If there is, append the replacement string (the successor) to the output string. If there is no rule to replace the current character, append the character itself to the output string. Write your implementation in the LSystem::applyRules() method in lsystem.cpp. There will not be any visual output until the next step is completed, so to verify the correct behavior, you can print the output string before returning it.

4. Geometry Generation (30%). Given a string with rules already applied, you will generate a set of 2D line segments to represent the geometry of the L-System using "turtle drawing" logic. Each character of the string should be interpreted as follows:

• f, F, g, G – Draw a line segment and advance forward (5%)
• s, S – Advance forward without drawing a line segment (5%)
• + – Rotate A degrees counterclockwise (5%)
• - – Rotate A degrees clockwise (5%)
• [ – Push the current draw state (5%)
• ] – Pop the last-pushed draw state (5%)

Any character not listed above should be ignored.

Bracketed contexts. When a [ is encountered, the current draw state should be pushed onto a local stack. The draw state consists of the position and orientation (rotation) of the next segment to be drawn. Later, when a ] is encountered, the last draw state that was saved should be removed from the top of the stack, replacing the current draw state. The implementation details are up to you. (Hint: use matrix transformations.)

Write your implementation in the LSystem::createGeometry() method in lsystem.cpp. Refer to The Algorithmic Beauty of Plants, linked on the course website, for more information on the above (most of chapter 1). Example outputs are shown for tree1.txt, tree2.txt, dragon.txt, and sierpinski.txt.

5.
Stochastic Generation (10% + Bonus 15%)

Random Angle Jitter (10%). Currently, every + and - command rotates the direction by the exact same angle A. As an enhancement, introduce small random variation around A, so each turn is slightly different. This can create more "natural" or "organic" tree shapes without major design changes. Specifically, the angle jitter J is an optional number appended to the angle value, for example in tree1_rand_angle.txt:

```
25.7
6
f
f : f[+f]f[-f]f
```

If no jitter value is provided (as in tree1.txt), J is set to 0.0. Whenever you rotate, randomly offset A by a value in [-J, +J].

Stochastic Rules (Bonus 15%). In the above implementation, there is only one rule for any given predecessor. For extra credit, extend the parser and rule application to allow multiple rules with the same predecessor. The extended syntax is below, in tree_rand.txt:

```
25.7
6
f
f 0.33 : f[+f]f[-f]f
f 0.33 : f[+f]f
f 0.34 : f[-f]f
```

In the rule definition, the predecessor is followed by a weight. When a predecessor has multiple rules, the rule used to replace it is chosen randomly with a probability equal to the weight of the rule divided by the sum of all the weights of rules with that predecessor. The weights do not necessarily sum to 1. Your implementation should be backwards compatible with the non-extended syntax – that is, the weight of a rule is optional. Assume a weight of 1 if no weight is specified. Note that you will need to make changes in several other places to make this work. Specifically, you will need to change how rules are stored in the class, since a std::map only allows a single value per key.

6. The functions/methods requiring your implementation will be marked TODO; however, depending on your particular implementation there might be other places for you to change code. You are expected to add/modify the code as necessary to ensure the application runs smoothly.

7. Files: Several example files are included in the template directory.
You can test your solution against these example files. We have reserved others for our testing as well.

Turn-in: To hand in the assignment, please use Brightspace. Submit a zip file with your complete project. Don't wait until the last moment to hand in the assignment!

For grading, the program will be compiled on Linux and run from the terminal (with Visual Studio as a fallback – please try to avoid platform-specific code, e.g., don't include platform-specific headers), run without command line arguments, and the code will be inspected. If the program does not compile, zero points will be given. If you have more questions, please ask on Piazza!

Good luck!
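As a rough sketch of the rule-application step described above (the class context is omitted, so a free function stands in for LSystem::applyRules, and only the single-rule std::map storage from the base assignment is assumed):

```cpp
#include <map>
#include <string>

// One rewriting pass: replace every character that has a rule with its
// successor; copy characters without a rule through unchanged.
std::string applyRules(const std::string& input,
                       const std::map<char, std::string>& rules) {
    std::string output;
    for (char c : input) {
        auto it = rules.find(c);
        if (it != rules.end())
            output += it->second;   // predecessor found: append the successor
        else
            output += c;            // no rule: keep the character as-is
    }
    return output;
}
```

For example, with the rule f : f[+f]f[-f]f, one iteration on the axiom "f" yields "f[+f]f[-f]f", and the next iteration rewrites each f in that string again.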


[SOLVED] CS33400/ECE30834 Assignment #2 – GPU it! Basic Lighting and Normals Calculation

Objective: This assignment is designed to give you some experience with GPU ray-tracing using fragment shaders.

Summary: Because the main contents are implemented in GPU code, a time-saving trick is to dynamically load the GPU shader code into your running program; you will be guided through applying this trick as a warm-up. For the main content, you are given a scene with a sphere, a ground plane, a sky background, and a point light source. You will first have to compute surface normals for the sphere and plane. Then you need to implement the shading function that defines the surface color using the Phong shading model. You will learn how to render a second-order lighting effect, reflection, as a first step to understanding ray tracing. A bonus task is to render another second-order lighting effect, shadows, using ray tracing. If you implement everything correctly, you should have a view like the one shown in the handout.

Specifics:

1. Warm-up (20%). Start with the template from the course website, similar to previous assignments. Previous assignments were C++ code: any change had to be recompiled, and the executable then ran on the CPU again. In this assignment, most of the code is in the shaders/f.glsl file, which is written in a shading language and executed on the GPU. The GLSL programming language is very similar to C. You can define structs, but you cannot define classes. You can define functions, global variables, etc. There is no need to rely on the glm library for data structures like vec2, vec3, vec4, mat3, mat4; those data structures are built into GLSL.

We implement a small trick here for efficiently developing GPU code: "hot-update" the GPU code. To achieve this, you must add one line of code in main.cpp (line 123). Afterwards, you just need to press r, and your modification in shaders/f.glsl will be compiled on-the-fly and executed automatically without re-opening your program, as long as shaders/f.glsl does not have a grammar error.
To test the feature, after implementing TODO 1, try adding one line of code, outCol = vec3(1.0, 0.0, 0.0);, after outCol = shading(ro, rd, intersect); in the shaders/f.glsl file. You should see a red screen when you press r, without re-opening the executable. Then you can delete the new line and press r to see if it changes back.

2. Normal Calculation (10%). Lighting models require knowledge of the surface's normal direction to produce realistic effects. In previous assignments, surface normals were read directly from the mesh files, but in this assignment, you need to manually compute the normal for the sphere in shaders/f.glsl. In TODO 2, pos is the surface point on the sphere. The global vec4 variable sphere defines the sphere shape: the xyz components of sphere are the sphere's center position, and the w component is the radius. Given the surface point and sphere center, you should be able to calculate the normal easily. The correct implementation should produce a similar view to the one shown.

3. Phong Shading (40%). After completing the normal calculation, you can begin implementing the Phong illumination model. Lighting calculations happen in the fragment shader, where you can access all needed lighting and material information. Each light source affects the scene with three components: ambient, diffuse, and specular, described below. Lights have an associated light color, which modulates each of the components below, and a light position, which determines the position of the light. Additionally, the object in the scene has its own color, which is multiplied by the contribution of each light, as well as material properties that determine the relative strength of each of the components below.

Ambient (a). Ambient light approximates indirect lighting, where light rays bounce around the scene before striking the object. In practice, this is implemented as a simple additive term that does not depend on light or view directions.

Diffuse (d).
Diffuse lighting approximates the amount of light that hits the surface based on the angle between the surface normal and the incident light vector. The more closely aligned the normal vector and the light direction are, the brighter the surface. This is modeled by the cosine of the angle between these two vectors.

Specular (s). Specular highlighting occurs when light is reflected off the surface directly toward the viewer. The incident light direction is reflected across the surface normal, and the cosine of the angle between the viewing direction and the reflected light direction determines the brightness of the highlight.

Write your implementation in the fragment shader, shaders/f.glsl, at TODO 3 and TODO 3.5. Note: TODO 3 is about shading; TODO 3.5 is the specific implementation of reflection. GLSL has a built-in function called reflect, which should not be used in your homework. The reflection equation is:

R = I - 2(N · I)N

where I is the incoming vector and N is the surface normal. The correct implementation should produce a similar view to the one shown.

4. Setting Material (10%). To simulate realistic interactions between light and objects, we define material properties for each surface. We have provided a default material for each object. In our framework, a material is defined by the following properties:

• Ambient Color (ka): The color reflected under ambient lighting.
• Diffuse Color (kd): The color reflected under direct light.
• Specular Color (ks): The color of the specular highlight.
• Shininess (n): Controls the size and intensity of the specular highlight.

These properties are grouped into a material struct for organization. The lighting for a fragment is then calculated by combining the three components weighted by these properties, in the standard Phong form:

color = ka · Ia + kd · max(N · L, 0) · Id + ks · max(V · R, 0)^n · Is

where Ia, Id, Is are the light's ambient, diffuse, and specular intensities, L is the light direction, V the view direction, and R the reflected light direction. When submitting your file, you need to change the material property of the ball (you don't need to change the materials for the ground and sky).
We provide a set of parameters that you can use in your submitted file:

ka = (1.0, 0.5, 0.31)
kd = (1.0, 0.5, 0.31)
ks = (0.5, 0.5, 0.5)
n = 32

But we encourage you to test and submit different values (for example, make it "gold" or "pewter") to get a better understanding of how different parameters affect the lighting. The correct implementation should produce a similar view to the one shown.

5. Specular Reflection (20%). The correct implementation should produce a similar view to the one shown.

6. Hard Shadow (Bonus 10%). A hard shadow comes from geometry occlusion: shadows are cast when an occluder blocks the light rays. You are encouraged to implement this feature to make the scene look more realistic. Similar to the specular reflection effect, you should rely on the ray tracing technique to render the hard shadow. The correct implementation should produce a similar view to the one shown.

Turn-in: Don't wait until the last moment to hand in the assignment! For grading, the program will be compiled on Linux and run from the terminal (with Visual Studio as a fallback – please try to avoid platform-specific code, e.g., don't include platform-specific headers), run without command line arguments, and the code will be inspected. If the program does not compile, zero points will be given. If you have more questions, please ask on Piazza!

Start early and good luck!
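The reflection equation above (the manual replacement for GLSL's built-in reflect) can be sketched in plain C++ like this (the vec3 struct here is a stand-in for the GLSL built-in type, and the function name is illustrative):

```cpp
#include <cmath>

// Minimal 3D vector stand-in for GLSL's vec3.
struct vec3 { double x, y, z; };

double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Reflect incoming direction I across unit surface normal N:
// R = I - 2 (N . I) N   (same convention as GLSL's reflect(I, N))
vec3 reflectDir(vec3 I, vec3 N) {
    double d = 2.0 * dot(N, I);
    return {I.x - d * N.x, I.y - d * N.y, I.z - d * N.z};
}
```

As a sanity check, a ray heading diagonally down onto a floor with normal (0, 1, 0), I = (1, -1, 0), reflects to (1, 1, 0).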


[SOLVED] CS334/ECE30834 Assignment #3 – Map it! Normal Mapping + Shadow Mapping

Objective: The objective of this assignment is to help you understand texture mapping, bump mapping / normal mapping, and shadow mapping. First, you will learn how to introduce more detail into the existing lighting system using normal mapping, which increases the complexity of the illusion on the surface; then you will get a feeling for how shadows improve the sense of depth and immersion in the scene.

Summary: In this assignment, you are provided with a scene with several spheres placed on a plane. For simplicity, there is only one directional light source. Your first task is adding normal mapping (also called bump mapping) to the illumination system, for which you first need to calculate tangent/bitangent vectors and then, in the shader, construct the TBN coordinate system and convert the lighting computation from world space to tangent space. In the second task, the scene will be rendered twice: in the first pass, it is rendered from the light's perspective to get the depth map as a texture; in the second pass, the scene is rendered as normal. So you first have to calculate the matrix that converts the scene from world space to the light's viewing space (the goal is to get the depth map, which will be used as a texture in the second pass), and then, in the shader, implement a function to decide whether a fragment is in shadow.

Specifics:

2. Model loading and setup. The models used in this assignment are sphere.obj and plane.obj, whose format is more complex than the models in the previous assignment since texture coordinates are involved. For the specification of OBJ files, you can refer to the instructions. In config.txt, rotation matrices and translation vectors are also included to give the objects an initial position. The scene has only one directional light. You can use the keyboard/mouse to change the position of the light/objects.

3. Normal mapping (60%). The normals in normal maps are in their local coordinate systems (i.e., tangent space / TBN space).
To ensure correct lighting, you have to obtain the TBN system for each face, then convert all related light vectors to this system and compute the shading.

• Tangent space (20%). This is a space local to the surface of the model. It consists of the normal, tangent, and bitangent vectors. The normal is already given as the face normal. In the image below, the normal points out of the image, the green arrows are bitangents, and the red ones are tangents. In the vertex shader, you need to set up the TBN system and then convert the related lighting vectors (light position, viewer vector, and fragment position) to tangent space before passing them to the fragment shader.

• Lighting calculation (20%). The normal map is given as texCubeNorm. There are three places in the fragment shader where you need to get the normal, the light vector, and the view vector (all in tangent space). If you calculate them correctly, you will see the result below (when you turn on normal mapping mode by pressing m/M). [Figures: normal mapping off vs. on]

4. Shadow Mapping (40%). In the implementation of shadow mapping, there are two passes. In the first pass, to decide what is visible and what is invisible (and thus in shadow), you render the scene from the light's perspective. To generate the depth map / shadow map, this stage requires two very simple shaders (called depth_v.glsl and depth_f.glsl, already provided).

• Shadow testing (20%). The shadow map is given as shadowMap in the fragment shader. Your job is to implement the function calculateShadow in the shader. In this function, you will first read the closest depth value from the depth map, then get the depth of the current fragment, and compare them to decide whether this fragment is in shadow. The current parameter light_frag_pos is the fragment position in the light's space. Feel free to add other arguments to the function if you think they are necessary.
Finally, apply the shadow to the diffuse term (shadow should have no effect on the ambient term). You should see the result as below.

• PCF (Bonus 10%). The shadows exhibit jagged, blocky edges, which can be smoothed using percentage-closer filtering (PCF). This technique works by averaging over a 3×3 grid of texture coordinate samples.

5. The functions/methods requiring your implementation will be marked "TODO"; however, depending on your particular implementation there might be other places for you to change code. You are expected to add/modify the code as necessary to ensure the application runs smoothly.

Turn-in: Don't wait until the last moment to hand in the assignment! For grading, the program will be compiled on Linux and run from the terminal (with Visual Studio as a fallback – please try to avoid platform-specific code, e.g., don't include platform-specific headers), run without command line arguments, and the code will be inspected. If the program does not compile, zero points will be given. If you have more questions, please ask on Piazza!

Good luck!
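The shadow test and the 3×3 PCF averaging described above can be sketched on the CPU side like this (the depth-map representation and function names are illustrative; in the assignment this logic lives in calculateShadow in the fragment shader, sampling shadowMap instead of an array):

```cpp
#include <algorithm>
#include <vector>

// Tiny stand-in for the shadow map: a square grid of closest-to-light depths.
struct DepthMap {
    int size;                    // width == height
    std::vector<float> depth;    // row-major, size*size entries
    float at(int x, int y) const {
        x = std::clamp(x, 0, size - 1);   // clamp lookups at the map edges
        y = std::clamp(y, 0, size - 1);
        return depth[y * size + x];
    }
};

// Basic shadow test: the fragment is shadowed if something closer to the
// light was recorded at its location. A small bias fights "shadow acne".
bool inShadow(const DepthMap& map, int x, int y, float fragDepth,
              float bias = 0.005f) {
    return fragDepth - bias > map.at(x, y);
}

// PCF: average the binary shadow test over a 3x3 neighborhood, giving a
// soft value in [0, 1] instead of a hard 0-or-1 edge.
float pcfShadow(const DepthMap& map, int x, int y, float fragDepth,
                float bias = 0.005f) {
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            sum += inShadow(map, x + dx, y + dy, fragDepth, bias) ? 1.0f : 0.0f;
    return sum / 9.0f;
}
```

The PCF result would then scale the diffuse term only, leaving the ambient term untouched, as required above.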


[SOLVED] CS33400/ECE30834 Assignment #1 – Project it!

City Roaming (Camera Transformation)

Objective: The objective of this assignment is to help you understand matrix transformations, coordinate systems in OpenGL, and camera projection mechanisms. In this assignment we provide you a tiny city to walk in, and you should be able to reach every corner of this city after a correct implementation. After working on this assignment, you will know how to use linear algebra to transform coordinates from one space to another and how to apply matrix operations to cameras in an interactive application.

(a) Ground Camera (b) Overhead Camera

Summary: The assignment is to implement an interactive city roaming program. The program includes a low-poly city model [1]. There are also two cameras that can be switched at any time: a ground camera and an overhead camera. The ground camera simulates the view of a person walking in the city. The overhead camera gives you a complete view of the whole scene, and it supports the trackball feature (you can use mouse dragging to rotate the scene around its center).

Your first task is to compute the viewing transformation of the camera. Then you have to implement the function for rotation, and use rotation & translation functions to support the transformations of the camera, including moving forward/backward and turning left/right. Another feature is part of the trackball feature: you will implement the scroll wheel operation, so that the camera can be moved closer to/farther from the scene.

Specifics:

1. Start with the template from the course website.

2. Compile and run the template. Since the model is large, it will take some time to load. The figures above show what you will see when running the template (left: ground view; right: overhead view; you can press "s" to switch). The two camera instances cam_ground and cam_overhead are kept by the class GLState. Read the code to understand how it works, and then make the changes below.

3.
View Transformation. In OpenGL, a view matrix transforms world coordinates into view space. The function Camera::updateViewProj continually updates the 4x4 matrices Camera::view and Camera::proj so that the camera views stay current. Initially, updateViewProj uses glm::lookAt to calculate the view matrix; delete that call before starting your implementation. You will use the function Camera::calCameraMat to replace glm::lookAt. In calCameraMat, construct the view matrix from the eye, center, and up vectors. Utility functions are provided to compute normalization, cross/dot products, and transpose (you can also use the corresponding glm functions).

4. Rotation. Implement your own computation of the rotation matrix in Camera::rotate. Convert degrees to radians. Condition the result on three cases (rotation around the x, y, or z axis).

5. Turning Left/Right. In the ground view, the camera should be able to turn left or right. Implement this in Camera::turnLeft and Camera::turnRight, using Camera::rotate (implemented in the previous step). Use Camera::rotStep as the rotation speed (how much to rotate between two frames).

6. Moving Forward/Backward. In the ground view, the camera should be able to move around. Implement this in Camera::moveForward and Camera::moveBackward, using the provided Camera::translate. Use Camera::moveStep as the movement speed.

7. Scroll Wheel Interactions. In the overhead view, scrolling the mouse wheel should move the camera closer to or farther from the scene. Implement this in the function GLState::offsetCamera by updating the coordinates of the overhead camera (cam_overhead). Set two cut-offs to prevent the camera from being pushed too close or too far away. If you don't have a scroll wheel, use I to zoom in and O to zoom out.

8. Supported Operations.
After implementing the above, your program should support the following operations:

• Mouse Controls (only in overhead view):
  – Left click + drag to rotate the camera.
  – Scroll wheel (or I/O) to zoom in/out.
• Keyboard Controls (only in ground view):
  – A: Turn left.
  – D: Turn right.
  – W: Move forward.
  – X: Move backward.
  – Z: Move down.
  – C: Move up.
  – S: Switch between the two cameras.

9. Debugging Functions. Use the provided functions to debug your implementation: Scene::printMat3, Scene::printMat4, Scene::printVec3.

10. Notes. You are NOT allowed to use glm::lookAt, glm::scale, glm::rotate, or glm::translate anywhere in your code. The code lines requiring your implementation are marked "TODO"; under each TODO there are more detailed instructions guiding you through the task. However, depending on your particular implementation, there might be other places where you need to change code. You are expected to add/modify code as necessary to make sure the application runs smoothly.

Turn-in Instructions: Don't wait until the last moment to hand in the assignment! For grading, the program will be compiled on Linux and run from the terminal (with Visual Studio as a fallback; please avoid platform-specific code, e.g., don't #include ), run without command-line arguments, and the code will be inspected. If the program does not compile, zero points will be given. Good luck!

[1] The model is taken from
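The assignment itself is C++ with GLM, but the math behind tasks 3 and 4 is language-independent. The following NumPy sketch (function names `look_at` and `rotation` are mine, not the template's) shows one way the view matrix can be built from eye/center/up vectors and how the three axis-rotation cases work; treat it as a reference for the math, not as the graded implementation.

```python
import numpy as np

def look_at(eye, center, up):
    """Build a 4x4 view matrix from eye, center, and up vectors
    (the same construction glm::lookAt performs)."""
    f = center - eye
    f = f / np.linalg.norm(f)           # forward direction
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)           # camera right
    u = np.cross(s, f)                  # true camera up
    view = np.eye(4)
    view[0, :3] = s                     # rotation rows: right, up, -forward
    view[1, :3] = u
    view[2, :3] = -f
    view[:3, 3] = -view[:3, :3] @ eye   # translate so the eye sits at the origin
    return view

def rotation(axis, degrees):
    """4x4 rotation matrix about the x, y, or z axis; degrees -> radians."""
    t = np.radians(degrees)
    c, s = np.cos(t), np.sin(t)
    m = np.eye(4)
    if axis == 'x':
        m[1:3, 1:3] = [[c, -s], [s, c]]
    elif axis == 'y':
        m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
    elif axis == 'z':
        m[:2, :2] = [[c, -s], [s, c]]
    return m
```

A quick sanity check: the view matrix must map the eye point to the origin, and a 90-degree z-rotation must carry the x axis onto the y axis.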


[SOLVED] Cop4521 assignment 5- a basic flask website p0

Objectives:
• Apply knowledge of relational databases and SQL in practical scenarios.
• Develop a basic frontend web application that interacts with a database for a small-scale real-world application using Flask (and Python).

Description: You can do this assignment in a group of two. In this assignment, you will use Python, SQLite3, and the Flask library to create a Flask website with the following pages:
• Home
• Add a New Baking Contest User
• List Baking Contest Users
• List Contest Results
• Results

The Home page should have three links:
• Add new Baking Contest User (opens the Add a New Baking Contest User page)
• List Baking Contest Users (opens the List Baking Contest Users page)
• Baking Contest Results (opens the List Contest Results page)
The following is an example of the home page.

The Add a New Baking Contest User page should have:
• a label and input text field for each attribute other than the id (Name, Age, Phone Number, Security Level, Login Password);
• a Submit button.
When the Submit button is clicked, the values entered by the user should first be validated. If all inputs are valid, a record is added to the Baking Contest People table with the values entered by the user, and a "record added" message is sent to the Results page to display to the user. Otherwise, an error message is created indicating all input errors; this message is sent to the Results page to display to the user.

Input Validation Rules:
• The Name is not empty and does not only contain spaces.
• The Age is a whole number greater than 0 and less than 121. (Hint:
• The Phone Number is not empty and does not only contain spaces.
• The Security Level must be a number between 1 and 3.
(Hint:
• The Login Password is not empty and does not only contain spaces.

The following is an example of the Add a New Baking Contest User page (before any values are entered), followed by an example of the page after values have been entered. The following are examples of the different Results pages after the user clicks the Submit button: a successful "record added" page and various validation error messages.

The List Baking Contest Users page should have:
• a table that displays the following information for every record in the Baking Contest People table: Name, Age, Phone Number, Security Level, and Login Password;
• a Go back to home page link.
The following is an example of the List Baking Contest Users page.

The List Contest Results page should have:
• a table that displays the following information for every record in the Baking Contest Entry table: Entry Id, User Id, Name Of Baking Item, Num Excellent Votes, Num Ok Votes, and Num Bad Votes;
• a Go back to home page link.
The following is an example of the List Contest Results page.

The Results page should:
• display the value of the variable msg (note: this message could be a "record added" message or an input-validation "record not added" error message);
• have a Go back to home page link. Examples were given earlier.

Submission: Put all of your programs and files that are needed to run the website in a tar file. Name the tar file lastname_firstinitial_flaskwebsite.tar and submit it through Canvas. Make sure that your website works on linprog before you submit your program.

Grading Policy:
• Include the basic header for assignments in a README.txt file in the root directory of your website (5 points)
• Home page functions correctly (10 points)
• Add a New Baking Contest User page functions correctly (15 points)
• List Baking Contest Users page functions correctly (15 points)
• List Contest Results page functions correctly
(15 points)
• Results page functions correctly (10 points)

Note: Make sure you develop and test thoroughly on linprog before you submit the files. You will need to create initial SQL database tables for testing purposes.
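The validation rules above translate naturally into a helper that collects every error at once, since the assignment requires the error message to list all input problems. A minimal sketch (the function name and message wording are mine, not the graded specification):

```python
def validate_user(name, age, phone, level, password):
    """Return a list of validation errors; an empty list means the
    record may be added to the Baking Contest People table."""
    errors = []
    if not name.strip():
        errors.append("Name cannot be empty or only spaces.")
    # Age: whole number greater than 0 and less than 121
    if not (age.isdigit() and 0 < int(age) < 121):
        errors.append("Age must be a whole number between 1 and 120.")
    if not phone.strip():
        errors.append("Phone Number cannot be empty or only spaces.")
    # Security Level: numeric, between 1 and 3
    if not (level.isdigit() and 1 <= int(level) <= 3):
        errors.append("Security Level must be a number between 1 and 3.")
    if not password.strip():
        errors.append("Login Password cannot be empty or only spaces.")
    return errors
```

In the Flask route, the joined error list (or the "record added" message) would become the `msg` passed to the Results page.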


[SOLVED] Cop4521 assignment 6- enhanced and hardened flask website p0

Objectives:
• Apply knowledge of computer security to enhance the security of the basic website developed in Programming Assignment 5.
• Gain experience with role-based access control.
• Gain experience with using encryption to partially support data confidentiality for the data stored in the database.

Description: This assignment is an extension of Programming Assignment 5 and can be done in a group of two. In this assignment, you will use Python, SQLite3, the Flask library, and a cryptography library to create an enhanced and hardened Flask website with role-based access control and (partial) data confidentiality. You will first enhance the website as described below and then add support for data confidentiality by encrypting sensitive fields in the database.

Start with the website that you created in Assignment 5. First, add a login page ('/login'). The login page contains two input boxes, for username and password, and a login button:
• Textboxes for username and password.
• A Log In button. When this button is clicked, the system validates the username and password (against the information stored in the user table). If the username and password combination is valid, the system opens the home page for that user. Otherwise, the system notifies the user "invalid username and/or password!" and stays on the login page.
An example of the login page, and an example of the page when the login is unsuccessful, were shown in the original screenshots.

When the user logs in successfully, the system goes to the home page of the user, which extends the home page from Assignment 5 as follows:
• Add a logout link (logs the user out and goes to the login page).
• Display the username at the top.
• Only display the links available to the user based on his/her role in the system, which is indicated by the attribute SecurityLevel (stored in the user table).
Depending on the value (1, 2, or 3) of SecurityLevel, a user has a different role in the system and is therefore given different operations to perform.

For users whose SecurityLevel = 1, the home page should have four items:
• Name at the top
• A link to the Show my Contest Entry Results page
• A link to the Add new Baking Contest Entry page
• A logout link

For users whose SecurityLevel = 2, the home page should have five items:
• Name at the top
• A link to the Show my Contest Entry Results page
• A link to the Add new Baking Contest Entry page
• A link to the List Baking Contest Users page
• A logout link

For users whose SecurityLevel = 3, the home page should have seven items:
• Name at the top
• A link to the Show my Contest Entry Results page
• A link to the Add new Baking Contest User page
• A link to the Add new Baking Contest Entry page
• A link to the List Baking Contest Users page
• A link to the Baking Contest Entry Results page
• A logout link

If a user is NOT logged in and tries to access a page, the system should redirect the request to the login page. If a user is logged in and tries to access a page that should not be allowed per their SecurityLevel, notify the user "Page not found".

The Show my Contest Entry Results page should have:
• A table displaying the following information for the current user from the Baking Contest Entry table: NameOfBakingItem, NumExcellentVotes, NumOkVotes, and NumBadVotes
• A Go Back to Home Page link
• A Logout link

The Add a Contest Entry page should have:
• a label and input text field for each attribute other than the ids: Name of Baking Item, Number of Excellent Votes, Number of OK Votes, and Number of Bad Votes;
• a Submit button. When the Submit button is clicked, the system should validate the values entered by the user.
If all values are valid, a record is added to the Baking Contest Entry table with the values entered by the user and the user id of the user currently logged in, and a "record added" message is sent to the result page to display to the user. If some of the values are invalid, an error message is created indicating all the input errors and displayed to the user on the result page. The validation is similar to that of the Add a New Baking Contest User page in Assignment 5; refer to the results page in Assignment 5 for example images.

Input Validation Rules:
• The Name of Baking Item is not empty and does not only contain spaces.
• The Num Excellent Votes is an integer greater than or equal to 0. (Hint:
• The Num Ok Votes is an integer greater than or equal to 0. (Hint:
• The Num Bad Votes is an integer greater than or equal to 0. (Hint:

Encryption requirements:
1. Three fields, Name, PhNum, and LoginPassword, must be encrypted in the SQLite database using an external encryption library.
2. The website must have the same behavior as the site with no encryption of the fields in the database. The website front end will encrypt and decrypt data to and from the database.

The changes to your Flask website to realize this functionality include (but are not limited to) the following:
• Login page: data values for UserName, UserPhNum, and LoginPassword should be encrypted before they are used to query the table.
• Add a new Baking Contest User page: data values for Name, PhNum, and LoginPassword should be encrypted before they are added to the table.
• List Users page: data values for the fields Name, PhNum, and LoginPassword should be decrypted after they are pulled from the database table.

Submission: Put all of your programs and files that are needed to run the website, as well as supporting documents, in a tar file. Name the tar file lastname_firstinitial_hardenedflaskwebsite.tar and submit it through Canvas.
Make sure that your website works on linprog before you submit your program. If your website with the encrypted database is not completely functional, your submission should contain two subdirectories: one for the website without encryption, and one for the website with encryption (which is incomplete).

Grading Policy (total 70 points, built-in 5 extra points):
• Include the basic header for assignments in a README.txt file in the root directory of your website. The README.txt should also include clear instructions on how to initialize your website (e.g., which scripts to run to initialize the database), how to start the website, how to log in as users of different security levels (username and password), and the status of your website (which parts are not completely functional, whether the website is fully functional with encrypted database fields, etc.). (10 points)
• Python scripts to initialize each of the tables in your database. In particular, the Python script to initialize the Baking Contest People table must meet the following requirements:
  o You must have a drop table command at the beginning of your script to drop the table in your database.
  o You must have a create table statement.
  o Your table must contain the following attributes: UserId, Name, Age, PhNum, SecurityLevel, and LoginPassword.
  o The fields Name, PhNum, and LoginPassword must be encrypted in the database.
  o The table create script must properly define UserId as the primary key.
  o Your script must run to completion without errors.
  o Your script must create at least 3 users, one for each security level, for testing purposes.
  o At the end of the script, you must display all data in your table. In the display, Name, PhNum, and LoginPassword must be unencrypted.
  Scripts to initialize other tables should add a few entries to the tables for testing purposes. (5 points)
• Login page, logout link, and session function correctly (10 points).
You will get 8 points if the page is functional but does not work with the encrypted database (or your database does not have all three encrypted fields).
• 3 different types of home pages (corresponding to SecurityLevel 1, 2, 3) (15 points; 5 points for each page). You will get 4 points per page if the page is functional but does not work with the encrypted database (or your database does not have all three encrypted fields).
• Login error and "page not found" (access error) pages are triggered properly (5 points).
• Show my Contest Entry Results page functions correctly (5 points).
• Add a Contest Entry page functions correctly (5 points).
• Add Baking User page functions correctly with encryption (5 points).
• List Contest Users page functions correctly with encryption (5 points).

Notes:
• Make sure you develop and test thoroughly on linprog before you submit the files.
• All pages specified in Programming Assignment 5 will be tested.
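The assignment names no particular encryption library. One common choice is Fernet from the `cryptography` package; a minimal sketch of front-end field encryption/decryption (the helper names `encrypt_field`/`decrypt_field` and the in-memory key are my assumptions, not part of the spec):

```python
from cryptography.fernet import Fernet

# One key for the whole site. Generate it once and load it from a file or
# environment variable in a real deployment; generating it here is only a sketch.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_field(value: str) -> str:
    """Encrypt a sensitive field (Name, PhNum, LoginPassword) before INSERT."""
    return fernet.encrypt(value.encode()).decode()

def decrypt_field(token: str) -> str:
    """Decrypt a field pulled from the database before displaying it."""
    return fernet.decrypt(token.encode()).decode()
```

One design caveat worth knowing: Fernet tokens include a random IV, so encrypting the same value twice yields different ciphertexts. That means a login lookup cannot simply compare freshly encrypted input against a stored token; you would instead decrypt stored rows and compare plaintexts, or pick a deterministic scheme if your design queries on encrypted columns.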


[SOLVED] Cop4521 assignment 4- developing a networked application p0

Description: You can do this assignment either by yourself or in a group of 2 people. In this assignment, you will develop an online Tic-Tac-Toe game application that allows two people to play Tic-Tac-Toe over the Internet. The application consists of two programs, a tic-tac-toe server and a tic-tac-toe client. When playing the game, the person who is second to make a move starts the server program on a machine, with a selected port as a command-line argument. Then the person who moves first starts a client to connect to the server (server machine name and port number as command-line arguments) and makes the first move. After that, the two players take turns making moves until the game is over. The game finishes when one of the players wins, or when the board is filled and no more moves are possible, in which case the game is a tie. When a game finishes, the app reports the result to both players. After that, the server loops back to wait for the client to start another game while the client exits (the player can restart the client to play the next game). An example game trace is shown below.

Server (second player):

python3 server.py 55000
Waiting for opponent to connect....
Receive opponent connection from ('128.186.120.190', 33030)
  1 2 3
A . . .
B . . .
C . . .
Waiting for opponent's first move. Don't type anything!
  1 2 3
A # . .
B . . .
C . . .
Your opponent played A1, your move([ABC][123]): B2
  1 2 3
A # . .
B . O .
C . . .
Wait for your opponent move (don't type anything)!
  1 2 3
A # # .
B . O .
C . . .
Your opponent played A2, your move([ABC][123]): B1
  1 2 3
A # # .
B O O .
C . . .
Wait for your opponent move (don't type anything)!
  1 2 3
A # # #
B O O .
C . . .
Your opponent won the game!
Waiting for opponent to connect....

Client (first player):

python3 client.py linprog5 55000
  1 2 3
A . . .
B . . .
C . . .
Enter a move([ABC][123]): A1
  1 2 3
A # . .
B . . .
C . . .
Wait for your opponent move (don't type anything)!
  1 2 3
A # . .
B . O .
C . . .
Your opponent played B2, your move([ABC][123]): A2
  1 2 3
A # # .
B . O .
C . . .
Wait for your opponent move (don't type anything)!
  1 2 3
A # # .
B O O .
C . . .
Your opponent played B1, your move([ABC][123]): A3
  1 2 3
A # # #
B O O .
C . . .
Congratulations, you won!

Submission: Name your Tic-Tac-Toe game server and client lastname_firstinitial_tictactoeserver.py and lastname_firstinitial_tictactoeclient.py, respectively. Put the two programs, as well as any other supporting files, in a tar file and submit through Canvas.

Grading (70 points total):
• Include the basic header (template at the course website) for the assignment and name your programs lastname_firstinitial_tictactoeserver.py and lastname_firstinitial_tictactoeclient.py. List team members for the assignment in the basic header. (10 points)
• Successfully start the server on a port given as a command-line argument (5 points)
• Successfully establish the connection between the client and the server; the server reports the client's IP address and port number (5 points)
• Successfully display the initial board on both client and server (5 points)
• First player (client) successfully makes the first move; the move is displayed on both the client and the server (10 points)
• Second player (server) successfully makes the first move; the move is displayed on both the client and the server (5 points)
• Server and client report wrong moves when they happen (a move outside the board, playing on a position that is already occupied, etc.) (5 points)
• Successfully play to the end of a game with a winner and declare the winner to both the client and the server (10 points)
• Successfully play to the end of a game that is a tie and declare the result to both the client and the server (5 points)
• Server loops back to wait for another connection while the client exits after a game finishes (5 points)
• Server starts the second game correctly (5 points)
• You can assume that the players take turns to the end of the game nicely. If other situations occur, such as players making moves out of order or the client/server crashing or losing the connection in the middle of the game, the program's behavior can be non-deterministic.

Notes:
• A reference implementation has about 150 lines of code for each of the client and the server. The code size can be smaller if the game engine is in a separate module.
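The note above suggests factoring the game engine into a separate module shared by client and server. A minimal sketch of such an engine, matching the board format in the trace ('.' empty, '#' first player, 'O' second player; function names are mine, not required by the spec):

```python
def render(board):
    """Print the 3x3 board in the trace's format."""
    print("  1 2 3")
    for label, row in zip("ABC", board):
        print(label, " ".join(row))

def winner(board):
    """Return the winning mark ('#' or 'O'), or None if no one has won yet."""
    lines = [list(row) for row in board]                            # rows
    lines += [[board[r][c] for r in range(3)] for c in range(3)]    # columns
    lines += [[board[i][i] for i in range(3)],
              [board[i][2 - i] for i in range(3)]]                  # diagonals
    for line in lines:
        if line[0] != '.' and line.count(line[0]) == 3:
            return line[0]
    return None
```

Both programs would call `render` after every move and `winner` (plus a board-full check for ties) to decide when the game ends; the networking layer then only has to ship move strings like "A1" back and forth.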


[SOLVED] Cop4521 assignment 3- compute prime triplet with multiprocessing p0

Objectives:
• Practice parallel programming with multiprocessing.
• Gain experience with performance optimization.

Description: A prime triplet is a set of three prime numbers where the smallest and the largest differ by 6. For example, (5, 7, 11) and (13, 17, 19) are prime triplets. In number theory, the prime triplet conjecture states that there are infinitely many prime triplets. In this assignment, you will write a multi-process Python program that prompts the user to input an integer (positive or negative) and then displays the smallest prime triplet whose smallest prime is no less than the number entered. Note that prime numbers are positive (the smallest prime is 2). The multi-process program should also allow the user to specify the number of worker processes as a command-line argument. For example, "python3 assignment3.py 16" would run the program with 16 worker processes.

Submission: Name your program lastname_firstinitial_mpprimetriplet.py and submit on Canvas.

Grading (70 points total):
• Programs with a runtime error for legitimate input (including 1,000,000,000,000) will get no more than 5 points.
• Programs that do not use more than one process will get no more than 5 points.
• Include the basic header (template at the course website) for the assignment and name your program lastname_firstinitial_mpprimetriplet.py. In the header, summarize the program execution times for the two inputs 1,000,000,000,000 and 3,000,000,000,000 with 1, 2, 4, and 8 worker processes. (10 points)
• Include correct time-measurement code to measure and output program execution time, excluding the user input time (5 points).
• Allow the user to specify the number of worker processes as a command-line argument (5 points).
• Pass correctness test cases for 1 and 2 worker processes. The program will be tested on randomly generated test cases from negative numbers to 4,000,000,000,000.
(20 points)
• Pass correctness test cases for 1, 2, and 4 worker processes, with a speedup of at least 1.2 for sufficiently large input with multiprocessing (versus one worker process). The program will be tested on randomly generated test cases from negative numbers to 4,000,000,000,000. (10 points)
• Pass correctness test cases for any number of worker processes between 1 and 16 (inclusive), with a speedup of at least 1.2 for sufficiently large input with multiprocessing (versus one worker process). Same test cases as above. (5 points)
• Compute the prime triplet using multiprocessing with no more than 16 worker processes in less than 4 seconds on linprog for N=1,000,000,000,000, with similar speedups for other sufficiently large input values. Only programs that pass the correctness tests 100% can get the points in this item. (7 points)
• Same as above, but in less than 2 seconds (3 points).
• Same as above, but in less than 1 second (3 points).
• Same as above, but in less than 0.8 seconds
(2 points)
• +5 extra points for the first person to report an error in the provided code.

Notes:
• Writing efficient parallel code requires an efficient sequential baseline. For example, if the sequential code to be parallelized runs for more than 30 seconds when N=1,000,000,000,000, it is virtually impossible to reach 2 seconds with multiprocessing on linprog using 16 worker processes.
• A sequential program is provided, which runs for about 5 seconds when
• The provided code has the timing logic for your reference.
• A reference implementation has about 180 lines of code.
• Parallel programming is inherently challenging. If you have never written parallel programs before, you will encounter various obstacles doing this assignment. Start NOW and prepare to spend a lot of time on it.
• You can find the parallel-programming design pattern in the example code discussed in class (pi_mw.py and primes_mw_sort.py) and apply the pattern in this assignment.
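The search itself decomposes cleanly: a triplet spanning 6 must be (p, p+2, p+6) or (p, p+4, p+6), so workers can test candidate starting points in parallel chunks. A minimal sketch of that decomposition (function names are mine; the trial-division `is_prime` is for clarity only, since inputs near 10^12 need a fast primality test such as Miller-Rabin to meet the time targets):

```python
from multiprocessing import Pool

def is_prime(n):
    """Trial division; replace with Miller-Rabin for large n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def triplet_at(p):
    """Return the prime triplet starting at p, or None."""
    if not is_prime(p) or not is_prime(p + 6):
        return None
    if is_prime(p + 2):
        return (p, p + 2, p + 6)
    if is_prime(p + 4):
        return (p, p + 4, p + 6)
    return None

def first_triplet(start, workers=4, chunk=1000):
    """Smallest triplet whose smallest prime is >= start.
    (Call from under an `if __name__ == "__main__":` guard on platforms
    that spawn rather than fork worker processes.)"""
    start = max(start, 2)
    with Pool(workers) as pool:
        while True:  # the conjecture says a triplet always exists ahead
            for hit in pool.map(triplet_at, range(start, start + chunk)):
                if hit:
                    return hit
            start += chunk
```

Note one subtlety this sketch glosses over: the first hit in a chunk is the answer only because `pool.map` preserves input order, so scanning its results left to right finds the smallest starting prime.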


[SOLVED] Cop4521 assignment 1-game simulation and evaluation of strategies p0

Objectives:
• Practice problem solving using Python.

Description: Consider a simple game that works as follows: the game involves two players who can choose to either cooperate or defect. If both players cooperate, each receives a reward of 3 coins. If one player cooperates and the other defects, the defector receives 5 coins and the cooperator receives 0 coins. If both defect, each receives 1 coin. If the game is played only once, which is the better strategy, to cooperate or to defect?

Now consider playing this game over many iterations against different types of players using various strategies. The goal is to accumulate as many coins as possible. In this assignment, you will write a Python program to study different strategies based on the history of both the player's and the opponent's choices. You will implement the following strategies (note: your implementation must follow the specifications described here exactly):

• alwaysCooperate: The player always chooses to cooperate.
• alwaysDefect: The player always chooses to defect.
• probeAndLock: The player defects for the first 20 rounds, then cooperates for the next 20 rounds. After these 40 probing rounds, the player compares the total rewards from each phase. If the 20 rounds of defection yielded a higher reward than the 20 rounds of cooperation, the player sticks with defection for the remainder of the game. Otherwise, the player switches to cooperation for the rest of the game.
• continuousProbe: The player defects in the first round and cooperates in the second round. After that, before each round, the player calculates the average reward obtained when choosing defection and the average reward when choosing cooperation, and then chooses the action that has yielded the higher average reward so far.
• defectUntilCooperate: The player always defects until the opponent cooperates.
Once the opponent cooperates for the first time, the player switches to always cooperating for the rest of the game.
• opponentCooperatePercentage: The player decides based on the percentage of times the opponent has chosen to cooperate in the game so far. If the opponent's cooperation rate exceeds a certain threshold, the player cooperates; otherwise, the player defects. You will implement three variations of this strategy with different threshold values:
  • opponentCooperate10Percentage (threshold = 10%)
  • opponentCooperate50Percentage (threshold = 50%)
  • opponentCooperate90Percentage (threshold = 90%)
• random50: The player randomly chooses between cooperation and defection, each with a probability of 50%.

In addition to the 9 strategies described above, you will also design and implement one original strategy of your own, ideally the most effective strategy when competing against the 9 predefined strategies.

Implementation details: Each strategy should be implemented as a separate function that takes the action history of both players as parameters. The function name should follow the format strategy_strategyname. For example, the probeAndLock strategy can be implemented as follows (include the following code segment in your submission):

defect = 0
cooperate = 1

def strategy_probAndLock(myHistory, oppHistory):
    if (len(myHistory) < 20):
        return defect
    elif (len(myHistory) < 40):
        return cooperate
    else:
        reward1 = rangeReward(0, 20, myHistory, oppHistory)
        reward2 = rangeReward(20, 40, myHistory, oppHistory)
        if (reward1 > reward2):
            return defect
        else:
            return cooperate

In the code, rangeReward(beg, end, myHistory, oppHistory) computes the total reward in the range of rounds from beg to end-1.

To evaluate the strategies, you will implement the logic that simulates a game between two strategies.
For each strategy, the program will simulate the game for a specified number of rounds (provided via a command-line argument) against each of the other strategies and calculate the total rewards. The total rewards for each strategy are output at the end of the game. Your program should accept two optional command-line arguments:
• num_of_iterations: the number of rounds played between two strategies.
• num_of_strategies: the number of strategies to evaluate (starting from the top of the strategy list).
Default values: num_of_iterations = 2000, num_of_strategies = 8.

The program can be executed as follows (you can use this output as test cases):

python3 sample_assignment1.py 4 2
num_of_iterations = 4, num_of_strategies = 2
alwaysCooperate: 0
alwaysDefect: 20

python3 sample_assignment1.py 4 3
num_of_iterations = 4, num_of_strategies = 3
alwaysCooperate: 0
alwaysDefect: 24
probeAndLock: 24

python3 sample_assignment1.py 4 4
num_of_iterations = 4, num_of_strategies = 4
alwaysCooperate: 3
alwaysDefect: 32
probeAndLock: 32
continuousProbe: 24

python3 sample_assignment1.py 4 5
num_of_iterations = 4, num_of_strategies = 5
alwaysCooperate: 12
alwaysDefect: 36
probeAndLock: 36
continuousProbe: 35
defectUntilCooperate: 28

python3 sample_assignment1.py 4 6
num_of_iterations = 4, num_of_strategies = 6
alwaysCooperate: 21
alwaysDefect: 40
probeAndLock: 40
continuousProbe: 46
defectUntilCooperate: 32
opponentCooperate10Percentage: 32

python3 sample_assignment1.py 4 7
num_of_iterations = 4, num_of_strategies = 7
alwaysCooperate: 30
alwaysDefect: 44
probeAndLock: 44
continuousProbe: 49
defectUntilCooperate: 36
opponentCooperate10Percentage: 36
opponentCooperate50Percentage: 38

python3 sample_assignment1.py 4 8
num_of_iterations = 4, num_of_strategies = 8
alwaysCooperate: 39
alwaysDefect: 48
probeAndLock: 48
continuousProbe: 52
defectUntilCooperate: 40
opponentCooperate10Percentage: 40
opponentCooperate50Percentage: 42
opponentCooperate90Percentage: 42

python3 sample_assignment1.py
num_of_iterations = 2000, num_of_strategies = 8
alwaysCooperate: 24051
alwaysDefect: 22084
probeAndLock: 29792
continuousProbe: 30096
defectUntilCooperate: 19970
opponentCooperate10Percentage: 21964
opponentCooperate50Percentage: 18086
opponentCooperate90Percentage: 18086

Submission: Name your program lastname_firstinitial_assignment1.py and submit on Canvas.

Grading (60 points total):
• A program gets at most 6 points if it generates a runtime error.
• Include the basic header (template at the course website) for the assignment, name your program lastname_firstinitial_assignment1.py, follow the naming specification for the strategies, and include the sample code segment for the implementation of the probeAndLock strategy (10 points)
• 4-iteration test case, 2 strategies (20 points)
• 4-iteration test case, 3 strategies (3 points)
• 4-iteration test case, 4 strategies (3 points)
• 4-iteration test case, 5 strategies (3 points)
• 4-iteration test case, 6 strategies (3 points)
• 4-iteration test case, 7 strategies (3 points)
• 4-iteration test case, 8 strategies (3 points)
• 2000-iteration test case, 8 strategies (3 points)
• Correct random strategy (3 points)
• Your own strategy is original and works correctly (3 points). Changing a threshold or parameter value of the 9 given strategies will not be considered original. Your strategy can be very similar to one of the 9 strategies, but it must do something differently.
• Your own strategy is the best among the 10 strategies with 2000 iterations per match (3 points)
• Extra points: We will merge all of the original strategies developed by students in this assignment along with the first 8 predefined strategies (random50 will not be included). We will then run a competition among these strategies using 1000 iterations per pair of strategies.
Bonus points will be awarded to the top 3 strategies: o 1st place: +20 extra points o 2nd place: +10 extra points o 3rd place: +5 extra points In the case of a tie, the tied strategies will share the corresponding bonus points equally. For example, if 10 strategies tie for the 1st place, each will receive (20 + 10 + 5) / 10 = 3.5 bonus points. • Extra points: The first person who reports a bug in the sample program (which produced the test results in this assignment description) will get 3 extra points. Note: • This program is longer and more challenging than I would like it to be. Start early.
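The round-robin tournament described above can be sketched as a simple loop. This is not the official sample program; it assumes the standard Axelrod payoffs (mutual cooperation 3/3, mutual defection 1/1, lone defector 5, lone cooperator 0) and a hypothetical strategy signature that receives both players' histories; both are consistent with the 4-round sample outputs above but are my assumptions:

```python
# Sketch only: payoff values and the strategy signature are assumptions,
# not the course-provided code.
def alwaysCooperate(my_history, opp_history):
    return "C"

def alwaysDefect(my_history, opp_history):
    return "D"

# (my_payoff, opp_payoff) for each pair of moves
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play_match(s1, s2, rounds):
    """Play one iterated match; return the two total scores."""
    h1, h2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        m1 = s1(h1, h2)
        m2 = s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1 += p1
        score2 += p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

def tournament(strategies, rounds):
    """Every strategy plays every other strategy once; sum the rewards."""
    totals = {s.__name__: 0 for s in strategies}
    for i in range(len(strategies)):
        for j in range(i + 1, len(strategies)):
            a, b = play_match(strategies[i], strategies[j], rounds)
            totals[strategies[i].__name__] += a
            totals[strategies[j].__name__] += b
    return totals

print(tournament([alwaysCooperate, alwaysDefect], 4))
```

With 4 rounds and these two strategies, the sketch reproduces the first test case above (alwaysCooperate: 0, alwaysDefect: 20).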


[SOLVED] Problem set 2- problem set 3 p0

15-440/15-640 Distributed Systems Spring 2025

● Create a .pdf of your answers and upload to Gradescope.
● Here are some tips to make your submission easy to read and to grade. Remember, the easier you make this, the less likely we are to make grading errors. Following these guidelines will help us focus on the technical content of your answers rather than trying to understand what you have written.
  ○ Don't hand write your answers. Use LaTeX or Google Docs or some similar input mechanism. If you use LaTeX, a template can be found on the course web page.
  ○ Put the answer to each question on a separate page.
  ○ Carefully tag your pdf pages to each question on Gradescope. You can use the SHIFT key to select multiple pages and associate them with a single question.
● Assume SI notation:
  ○ 1 KB = 10^3 bytes, 1 MB = 10^6 bytes, 1 GB = 10^9 bytes
  ○ 1 Kbps = 10^3 bits per second (bps), 1 Mbps = 10^6 bps, 1 Gbps = 10^9 bps
  ○ but a byte is still 8 bits (not 10 bits) 🙂
● Remember that you have a limit of 2 grace days totaled over all four problem sets. You can use at most one of those grace days per problem set. Although Gradescope does not track grace days, we will. Exceeding your grace days will result in a zero grade for that problem set.

Question 1 (22 points)

1. Frontend Server: This processes incoming client requests and renders dynamic content through lightweight independent operations.
2. Backend Server: This executes core game logic. Unfortunately, this stage is only partly scale-out compatible: 50% of the computation cycles must run on a single node (parallelization on multiple cores is fine, though). The remaining 50% of the computation cycles, as well as all memory and storage needs, can be fully distributed across multiple nodes.
3.
Database: This manages sharing-intensive critical data, and must run on a single node to ensure data integrity.

The resources used (CPU, memory, disk) by each stage generally scale with the number of users, subject to a minimum for each instance, as summarized below. So a frontend server instance with 100 users will use 20 GB of RAM and disk, one with 25 users will use 5 GB RAM and 10 GB disk, and one with just 5 users will use 4 GB RAM and 10 GB disk.

                  Resources per concurrent user              Minimum per instance
Stage             CPU Ops/second   RAM (GB)   Disk (GB)      RAM (GB)   Disk (GB)
Frontend Server   0.5 Gops         0.2 GB     0.2 GB         4 GB       10 GB
Backend Server    0.5 Gops         2 GB       1 GB           32 GB      120 GB
Database          50 Mops          0.5 GB     2 GB           16 GB      300 GB

The elastic cloud service provides several different machine types with different resources and costs:

Instance Type   CPU                 RAM (GB)   Disk (GB)   Hourly Cost ($)
Small           2.5 GHz, 2 cores    8          100         0.08
Medium          3.2 GHz, 4 cores    32         250         0.25
Large           3.8 GHz, 8 cores    64         500         0.60
X-Large         5.0 GHz, 20 cores   256        2000        3.25

A. Assuming a constant load of 64 concurrent users playing the game, what is the most cost-effective configuration of server types for the three stages of this application? Consider each stage separately. Show your work.

B. As loyal clients of the cloud service provider, Michael and Zara were presented with the option to purchase the server instances at the following prices. Should they buy or rent if they anticipate a sustained workload of 64 concurrent users over three years? How much more/less does it cost to rent than buy?

Instance Type   Hourly Cost ($)   Buy Cost ($)
Small           0.08              2000
Medium          0.25              5500
Large           0.60              10000
X-Large         3.25              55000

C. The game grows in popularity. During the day (12 hours), the load is a sustained 256 concurrent users. At night the load drops to 160 concurrent users. With this level of load, what is the cost difference between renting and buying over a 3-year period? (Assume the same pricing as in B above.)

D.
What is the maximum number of concurrent users that can be supported? Which resource and which processing stage are the bottleneck?

Question 2 (20 points)

CloudPets is a cloud-based virtual pet platform where users can adopt, train, and interact with AI-powered virtual creatures that persist across devices. Each pet is housed in a distributed ecosystem, ensuring high availability and scalability. Users can access their CloudPets from any device.

A. CloudPets wants to implement some nifty features through virtualization. For each of the following scenarios, describe which encapsulation method you would use: VM (must specify Type 1 or Type 2), containers, or processes. Justify your answer in 1-2 sentences.

a. Mahika is developing a multiplayer feature where users' pets can interact and play games with other pets in the virtual world. When a user enters a multiplayer room, Mahika's program launches a dedicated instance of the server that can only be accessed by invited users. These interactions/games typically last between a few seconds and one minute. After all users leave the room, the server will shut down.

b. The CloudPets CEO wants to develop a customizable "personality" module that users can run on their personal machines. The module takes parameters from the user and spawns a single-threaded AI worker that fine-tunes their pet's personality. CloudPets wants to ensure that local CPU usage is as efficient as possible.

c. Roy, a developer of the system, wants to test out some new features without affecting the production service. To do his testing, he runs copies of the entire server stack on some local machines (including his Linux desktop machine and his macOS laptop).

B. CloudPets has a service-level agreement (SLA) with Awazon Cloud (a cloud service provider) that 90% of requests will have a response time of less than 200 ms. Awazon Cloud uses VMs to scale to different request rates from CloudPets.
Each VM can service 500 requests per second and takes 30 seconds to boot. Initially, CloudPets runs on just a single VM, which is fast enough to handle the arriving requests. At time Tx, CloudPets goes viral on social media, and the number of requests jumps immediately to 1800 per second. Awazon Cloud launches more VMs to scale to this demand.

a. How many additional VMs does Awazon Cloud need to launch?

b. Is the SLA violated at any time? Explain with calculations.

C. The Awazon Cloud engineers notice that once a VM is booted, it is terminated within a few seconds, and then a new one is booted again. Thus, the total VM time is high, yet requests must wait in long queues with slow response times. No errors are reported, and their cloud has sufficient resources. Why is this problem occurring? What is a possible fix for this?

Question 3 (20 points)

You have built a large scale-out web application that incorporates caching strategies. To evaluate how well the caching system works, you need to analyze how many times the system fetches data from various external sites. Each of the hundreds of VMs in your system produces a simple log file, listing each access to an external site in chronological order, one per line. You need to write analysis code that computes the frequency (count) of accesses per site, and generates output files based on the frequency. You will use MapReduce to perform this analysis.

Example input data (from all of the log files):

google.com
amazon.com
apple.com
google.com
spotify.com
adobe.com
google.com
amazon.com

Expected output: N files, where each file contains a set of unique site names (one per line); file "1" contains all the sites that were accessed once, file "2" all sites accessed twice, and so on.

The MapReduce library: For this problem, assume a MapReduce library that requires you to write a map function and a reduce function for a map-reduce job.
The library runs multiple mapper processes on the input files, calling your map function on each data item (line) in the input files. The map function returns a key-value pair. All of these pairs are grouped and sorted by key and then fed to the reducers.

The library runs multiple reducer processes over which the key space is partitioned. Each reducer calls your reduce function for each unique key in its portion of the key space. The arguments to the reduce function are the key and a list of all values that were produced by the mappers with that key. The reduce function should return a tuple of a file name and a list of lines to put in the file. If more than one reducer process tries to create a file with the same name, the underscore character ("_") and the reducer id are appended to the name, and multiple files are created.

Here is an example of how you use this library to write "identity" map and reduce functions:

def map(input):
    key = input
    value = input
    return (key, value)

def reduce(key, values):
    # values is a list of all values which were mapped to the key
    # reducers are partitioned on keys
    reduced_value = ""
    for value in values:
        reduced_value = string_append(value, " ", reduced_value)
    lines = []  # empty list
    lines.append(reduced_value)
    return (key, lines)

A. Using the MapReduce system described above, write pseudocode to perform this analysis and produce the desired output. Note that this will require two MapReduce jobs to be run one after the other. Write out the pseudocode for the four functions (map and reduce functions for each of the MR jobs) needed. Ensure it looks like the example functions above.

B. Use the example input data above to write the output key-value pairs for each map and reduce function you defined above, in the order that they should be run.

C. For the first MapReduce job, assume the mapper function takes 2 ms per call, and that the reduce function takes 3 ms per value.
If you have 15 mappers and 10 reducers, what are the best- and worst-case scenarios and resulting execution times for a dataset with 30,000 log entries? (Consider only the first MapReduce job.)

D. Consider the situation in part C, but now assume that the 15 mappers each have a 1% chance of being executed on a slow machine that runs 10 times slower than a normal one. Assuming the best-case scenario from part C, what would be the expected time to finish this job?

Question 4 (18 points)

For each of the following, determine whether the situation/code is fail-fast or not. Explain why/why not, and how to make it fail-fast if it is not.

A. A bus tracker application developed by a local Pittsburgh bus company is supposed to show its users where the buses are in real time. When a bus cannot be tracked, the application displays the last known location of the bus.

B. Consider the following piece of Java code.

try {
    foo(); // can throw exceptions
} catch (Exception e) {
    // do nothing.
}

C. Consider the following piece of C code.

char s[] = {'h', 'e', 'l', 'l', 'o'};
int fd = open("foo", "w");
write(fd, s, sizeof(s));

D. To purchase an item in an online store, customers enter their payment details and click the "Pay" button. Each time the customer clicks "Pay," the system checks with the server to ensure the transaction has not yet been completed before processing the payment. If the transaction has already been completed, an error message will appear.

E. When a printer's ink cartridge is empty, the printer still shows a "printing" status on the screen and moves the paper through the printer without printing anything.

Question 5 (20 points)

Carnegie Computer Service is expanding its Andrew File System infrastructure to meet the growing demand for distributed storage. Two server companies are competing in the bidding:

● Pear, Inc. has a monolithic server design with all components permanently soldered together.
If any component fails, the entire server has to be repaired or replaced. The system has an MTBF of 300 hours and an MTTR of 4 hours.
● SoftWork System's modular server is composed of four separate parts (modules) and has failure independence between the modules. When a system component fails, only the failed module has to be repaired. The MTBF and MTTR statistics of the four modules are as follows:

Module                      MTBF (hours)   MTTR (hours)
Storage Module              1000           4
Networking Module           2000           2
Metrics Monitoring Module   3000           1
Processing Module           6000           4

Note that we define the term MTBF as strictly the time that a system is operational, not including the time for repairs. That is, the MTBF of a component is measured from the start of its operation to the beginning of the next failure.

A. Suppose availability is defined as the percentage of time that the server is actually capable of work. What is the availability of Pear's monolithic server? Show your calculations.

B. Under the same conditions, what is the availability of SoftWork's modular server? Show your calculations.

C. Suppose Carnegie Computer Service needs to deploy 10 distributed storage servers to serve all of the users across its many campuses. Because each server stores different files with no replication, availability here is defined as the fraction of time (probability) that all of the servers are up; i.e., the whole system is down if even one of the servers is down.

a. What is the availability of this storage service if using Pear's servers? Show your calculations.

b. What is the availability of this storage service if using SoftWork's servers? Show your calculations.

D. Carnegie Computer Service also wants to deploy an email service. This service will use replication across multiple servers to increase availability. The service is available as long as at least one server is operational. Carnegie Computer Service wants to provide four nines of availability, i.e., ensure that the overall system availability is at least 0.9999.

a.
What is the minimum number of Pear servers needed to achieve this availability? Show your calculations.

b. What is the minimum number of SoftWork servers needed to achieve this availability? Show your calculations.
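Question 5 above turns on the standard steady-state availability formula, availability = MTBF / (MTBF + MTTR). A minimal sketch applying it to the figures given (the helper name is mine, not part of the assignment):

```python
# Sketch of availability = MTBF / (MTBF + MTTR) applied to Question 5's
# figures. Helper name and structure are illustrative, not assignment code.
def availability(mtbf, mttr):
    return mtbf / (mtbf + mttr)

# Pear's monolithic server: one MTBF/MTTR pair for the whole box.
pear = availability(300, 4)

# SoftWork's modular server: with failure independence, the server is up
# only when every module is up, so availabilities multiply.
modules = [(1000, 4), (2000, 2), (3000, 1), (6000, 4)]
softwork = 1.0
for mtbf, mttr in modules:
    softwork *= availability(mtbf, mttr)

print(f"Pear: {pear:.5f}, SoftWork: {softwork:.5f}")
# Pear ≈ 0.98684, SoftWork ≈ 0.99403
```

The same two building blocks (single-server availability, and products over independent components) are what parts C and D combine across 10 servers or across replicas.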


[SOLVED] 15.094 problem set 4- p0

Problem Set 4
15-440/15-640 Distributed Systems Spring 2025

● Create a .pdf of your answers and upload to Gradescope.
● Here are some tips to make your submission easy to read and to grade. Remember, the easier you make this, the less likely we are to make grading errors. Following these guidelines will help us focus on the technical content of your answers rather than trying to understand what you have written.
  ○ Don't hand write your answers. Use LaTeX or Google Docs or some similar input mechanism. If you use LaTeX, a template can be found on the course web page.
  ○ Put the answer to each question on a separate page.
  ○ Carefully tag your pdf pages to each question on Gradescope. You can use the SHIFT key to select multiple pages and associate them with a single question.
● Assume SI notation:
  ○ 1 KB = 10^3 bytes, 1 MB = 10^6 bytes, 1 GB = 10^9 bytes
  ○ 1 Kbps = 10^3 bits per second (bps), 1 Mbps = 10^6 bps, 1 Gbps = 10^9 bps
  ○ but a byte is still 8 bits (not 10 bits) 🙂
● Remember that you have a limit of 2 grace days totaled over all four problem sets. You can use at most one of those grace days per problem set. Although Gradescope does not track grace days, we will. Exceeding your grace days will result in a zero grade for that problem set.

Question 1

Michael had just launched his new social app, PingMe, which lets users send real-time pings to friends worldwide. To ensure low latency and high fault tolerance, he set up 6 replica servers in major cities: Pittsburgh, Boston, Los Angeles, Rome, London, and Mumbai. Each server holds a replica of the app's user status database, which updates whenever someone changes their "mood ping" and can be read by friends across the globe. It is absolutely vital that each user's status is updated with perfect one-copy semantics, even in the presence of transient networking failures and partitions.

Mahika, PingMe's CTO, implemented a Gifford-style voting protocol to manage consistency between these replicas.
Assume that for a given attempt to access a file, each server has a probability of p = 0.6 of being available. Assume further that the availability does not change for the duration of an operation (i.e., retransmission won't help). For all the questions below, show your reasoning and intermediate steps.

A. All servers have equal votes, and you wish to maximize read availability.
a. What read and write quorum sizes would you assign?
b. What is the probability of a successful read using these quorum sizes?
c. What is the probability of a successful write using these quorum sizes?

B. All servers have equal votes, but you now wish to maximize write availability.
a. What read and write quorum sizes would you assign?
b. What is the probability of a successful read using these quorum sizes?
c. What is the probability of a successful write using these quorum sizes?

C. Each app user determines which servers to access during a read operation. Assume that read quorums are of size 3, and that replicas are accessed in parallel. You have obtained the following table of operation latencies for clients in Philadelphia, PA and Athens, Greece to each of the servers described above:

a. For optimal performance when accessing a small (e.g., 1-byte) file in the absence of failures, which servers should the Philadelphia client choose in its read quorum? Which servers should the client in Athens choose?

b. After deploying the server selection described in (a) above, you find that even in the absence of failures, the load on one of your servers is significantly larger than on the others, even though all servers consist of identical hardware. Assume that most app users for PingMe originate in North America or Europe, with read-heavy requests. Using your answer to (a), state which server is likely to be most heavily loaded and why. Explain what you would do to eliminate this uneven load across servers. Be sure to state any assumptions you make.

D.
A new engineer on PingMe's team has modified the number of votes for each replica, and has chosen a quorum size of 3 for both read and write operations. With the new vote assignments shown below, is one-copy semantics preserved? If yes, explain why. If no, provide a scenario in which one-copy semantics is broken.

Question 2

Each of the following subproblems shows a sequence of messages in a system using Paxos. Identify whether each sequence could be a valid sequence in a correct Paxos implementation. (Note: these sequences are not necessarily from the start of a new consensus/election period.) If a sequence is invalid, identify the line number of the first line that is not correct, and explain what is wrong and what possible bugs could arise from it.

A. Assume there are 3 total nodes (S1, S2, S3)
1. S1➔S1: Prepare(101)
2. S1➔S1: Promise(101, null)
3. S1➔S3: Prepare(101)
4. S3➔S1: Promise(101, null)
5. S1➔S1: PleaseAccept(101, v1)
6. S2➔S3: Prepare(101)
7. S1➔S1: Accept_OK()
8. S1➔S3: PleaseAccept(101, v1)
9. S3➔S1: Accept_OK()
10. S3➔S2: Promise(101, null)
11. S2➔S3: PleaseAccept(101, v2)
12. S3➔S2: Accept_OK()

B. Assume there are 10 total nodes (S1, S2, S3, S4, S5, S6, S7, S8, S9, S10)
1. S1➔S1: Prepare(101)
2. S1➔S1: Promise(101, null)
3. S1➔S2: Prepare(101)
4. S1➔S3: Prepare(101)
5. S2➔S1: Promise(101, null)
6. S3➔S1: Promise(101, null)
7. S1➔S5: Prepare(101)
8. S1➔S7: Prepare(101)
9. S5➔S1: Promise(101, null)
10. S1➔S8: Prepare(101)
11. S1➔S10: Prepare(101)
12. S8➔S1: Promise(101, null)
13. S7➔S1: Promise(101, null)
14. S1➔S1: PleaseAccept(101, v1)
15. S10➔S1: Promise(101, null)
16. S1➔S2: PleaseAccept(101, v1)
17. S1➔S1: PleaseAccept(101, v1)
18. S1➔S5: PleaseAccept(101, v1)
19. S1➔S1: Accept_OK()
20. S1➔S4: PleaseAccept(101, v1)
21. S5➔S1: Accept_OK()

C. Assume there are 2 total nodes (S1, S2)
1. S1➔S1: Prepare(101)
2. S1➔S1: Promise(101, null)
3. S2➔S2: Prepare(102)
4. S2➔S2: Promise(102, null)
5. S2➔S1: Prepare(102)
6. S1➔S2: Promise(102, already_accepted(101, v1))
7.
S2➔S1: PleaseAccept(102, v1)
8. S1➔S2: Accept_OK()

D. Assume there are 3 total nodes (S1, S2, S3)
1. S1➔S1: Prepare(101)
2. S1➔S1: Promise(101, null)
3. S1➔S3: Prepare(101)
4. S3➔S1: Promise(101, null)
5. S2➔S2: Prepare(102)
6. S2➔S2: Promise(102, null)
7. S2➔S3: Prepare(102)
8. S3➔S2: Promise(102, null)
9. S2➔S3: PleaseAccept(102, v1)
10. S1➔S3: PleaseAccept(101, v2)
11. S3➔S2: Accept_OK()
12. S3➔S1: Accept_OK()
13. S2➔S2: PleaseAccept(102, v1)
14. S2➔S2: Accept_OK()

Question 3

Helen, Monica, Mahika, and Michael each have an account at a different branch of PNCC Bank. Each branch handles local transactions independently. For transactions involving accounts at different branches, PNCC uses the two-phase commit (2PC) protocol to handle the distributed transaction. Each branch handles updates to its local accounts and contacts account owners (by text message) to verify whether they approve of each transaction.

Account Owner   Branch Name
Helen           Aspinwall
Monica          Bloomfield
Mahika          Carrick
Michael         Duquesne Heights

Example transaction: Consider a transaction of the following form:

Bloomfield: deposit 100 to Monica's account
Carrick: withdraw 100 from Mahika's account

This means we want to transfer 100 dollars from Mahika's account to Monica's account. The Bloomfield branch (B) will check, by sending a text message, that Monica approves of this transaction before voting yes. The Carrick branch (C) checks whether there is enough money in Mahika's account and whether she approves before voting yes. The transaction commits only if both branches vote yes. Note that any of the branches can act as the coordinator.

A. Helen wants to withdraw $200 each from Monica and Mahika. She will send a transaction request involving:

Aspinwall: deposit 400 to Helen's account
Bloomfield: withdraw 200 from Monica's account
Carrick: withdraw 200 from Mahika's account

Suppose Monica turns off her phone during her midterm exam, but forgets to turn it back on until after the weekend.
Describe how the 2PC protocol will progress in this situation and what the results will be at each node. State any assumptions clearly.

B. Currently, the 2PC implementation for the bank stops resending decisions after 20 minutes in the COMMIT phase. In 1-2 sentences, describe a potential negative consequence for the scenario above and how to fix it.

C. Every day, each branch reboots its system at 5 am. After rebooting, each system reads its write-ahead log to resume all processes and ensure ongoing transactions complete successfully. Careful review of the recovery code has confirmed that it is bug-free. Despite this, a bug of the following form often manifests itself:

Aspinwall: withdraw 100 from Helen's account
Carrick: withdraw 100 from Mahika's account
Duquesne Heights: deposit 200 to Michael's account

The bug is that although Michael is credited with $200, only Helen's account has $100 deducted; Mahika's account is unchanged.

a. What implementation bug could have caused this?
b. Which ACID property is violated?
c. Propose a method to fix this problem.

Question 4 (20 points)

Melody and Ruiqi are trying to book tickets to see the newest anime star: Rika Furude. The venue keeps a database of available seats, prices, and revenue by seating section. The initial contents are as follows:

Section     seats_remaining   price   revenue
Main        220               200     8000
Balcony     180               150     3000
Wing        80                40      2000
Mezzanine   60                50      1000

A.
The ticket purchase system uses intention lists for the database to help ensure consistency and fault tolerance. Suppose three concurrent transactions with the following pseudocode are in progress (starting with the initial values above):

time 1: T1: BEGIN; T2: BEGIN
time 2: T1: Balcony.seats_remaining = Balcony.seats_remaining - 1; T3: BEGIN
time 3: T2: Main.seats_remaining = Main.seats_remaining - 10
time 4: T1: Balcony.revenue = Balcony.revenue + Balcony.price; T3: Wing.seats_remaining = Wing.seats_remaining + 50
time 5: T2: Main.revenue = Main.revenue + Main.price * 10 * 0.9; T3: Wing.price = Wing.price - 20
time 6: T1: COMMIT; T2: Main.price = Main.price + 10; T3: Wing.seats_remaining = Wing.seats_remaining - 25
time 7: T2: ABORT; T3: Wing.revenue = Wing.revenue + Wing.price * 25 * 0.9

a. Write the intention lists for each of the given transactions after time = 7.
b. Suppose the system crashed at time 8. Immediately after the crash and before recovery, what are the values stored in the database for each section?
c. During recovery, what operations would recovery redo (if any)? What operations would recovery discard?

B. Ruiqi believes the ticket purchase system can be much more efficient if implemented with a write-ahead log (WAL), and proceeds to build such a system. Anything to ensure she can see Rika! Assuming the system starts in the same initial state as specified above, consider the following sequence of operations from two concurrent transactions:

time 1: T1: BEGIN
time 2: T1: SR = Main.seats_remaining; T2: BEGIN
time 3: T1: SR = SR - 2; T2: P = Balcony.price
time 4: T1: Main.seats_remaining = SR
time 5: T1: R = Main.revenue
time 6: T1: P = Main.price; T2: P = P + 50
time 7: T1: R = R + P * 2; T2: Balcony.price = P
time 8: T1: Main.revenue = R
time 9: T1: COMMIT; T2: P = Main.price
time 10: T2: P = P - 10
time 11: T2: Main.price = P

a. After time = 11, show the contents of the WAL. Clearly distinguish between the portions on disk and the portions in volatile memory. Clearly indicate for each log record which transaction it belongs to.

b. Melody suggests to Ruiqi that they can perform log truncation for all the logs made before time = 10, because T1 has committed by then.
Is she correct? Why or why not?

c. Suppose the system crashes after time = 11. Which operations will be redone? Why? What if the system crashes again during recovery?

Question 5 (20 points)

You are hired as a systems architect for ScottyNet, a new social media app designed for sharing short voice messages. The app is expected to scale to 10 million users, with 10 MB of stored data per user and 50 requests per day per user.

A. You find servers available with 2 TB of storage that are able to process 1,000 requests a second. Each server costs $8,000 a month to rent, including unlimited networking. Assuming all clients' requests come in uniformly during the day, how many servers would you need to support your users, and what would the cost be to rent them?

B. You consider a completely different implementation using a peer-to-peer (P2P) distributed hash table (DHT) with 3 replicas for each data item. Each user's device becomes part of the P2P system and contributes resources. Assuming read requests and barring any failures, how many nodes will a user need to visit (i.e., how many hops are needed) to process a request in the best and worst case?

C. Assuming the worst case in part B, if each hop takes 30 ms, what is the latency per request?

E. After conducting some research, you expect that 2.7% of mobile users will churn daily. This means that each day 2.7% of the mobile users leave, but are replaced with new users, so the total user population stays stable. What is the total churn cost to your users over the period of a month?

F. What are some things you can do to reduce these costs to the mobile users in a P2P implementation?
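The quorum probabilities in Question 1 of this problem set follow a binomial tail: with 6 equal-vote replicas, each independently available with probability p = 0.6, a size-k quorum succeeds when at least k servers are up. A small sketch of that calculation (function name is mine, not part of the assignment):

```python
# Sketch of the quorum-availability calculation: P(at least k of n
# servers up), each up independently with probability p. Values n = 6,
# p = 0.6 come from Question 1; the helper name is illustrative.
from math import comb

def quorum_success(n, k, p):
    """Binomial tail: probability that at least k of n servers are available."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, p = 6, 0.6
for k in range(1, n + 1):
    print(f"quorum size {k}: {quorum_success(n, k, p):.4f}")
```

Note how availability falls monotonically as the quorum grows, which is the trade-off behind choosing read versus write quorum sizes (subject to the usual overlap constraints of Gifford voting).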


[SOLVED] 15.094 problem set 1 p0

15-440/15-640 Distributed Systems Spring 2025

● Here are some tips to make your submission easy to read and to grade. Remember, the easier you make this, the less likely we are to make grading errors. Following these guidelines will help us focus on the technical content of your answers rather than trying to understand what you have written.
  ○ Don't hand write your answers. Use LaTeX or Google Docs or some similar input mechanism. If you use LaTeX, a template can be found on the course web page.
  ○ Put the answer to each question on a separate page.
  ○ Carefully tag your pdf pages to each question on Gradescope. You can use the SHIFT key to select multiple pages and associate them with a single question.
● Assume SI notation:
  ○ 1 KB = 10^3 bytes, 1 MB = 10^6 bytes, 1 GB = 10^9 bytes
  ○ 1 Kbps = 10^3 bits per second (bps), 1 Mbps = 10^6 bps, 1 Gbps = 10^9 bps
  ○ but a byte is still 8 bits (not 10 bits) 🙂
● Remember that you have a limit of 2 grace days totaled over all four problem sets. You can use at most one of those grace days per problem set. Although Gradescope does not track grace days, we will. Exceeding your grace days will result in a zero grade for that problem set.

Question 1 (30 points)

Ruiqi has contributed the pseudo-code shown below.

typedef struct gamestate {
    int8_t* squareval; // The 100 * 100 game board
    int32_t round;     // The round number
} game_t;

// This code performs marshalling of game state; some details are omitted.
char *serialize(game_t *currentstate) {
    /* Skipped code that defines variables ... */
    size_t total_size = _(1)_;
    char *data_packet = malloc(total_size);
    memcpy(data_packet, total_size, _(2)_);
    // Include the board
    int offset = _(3)_;
    for (int i = 0; i < 100; i++) {
        // copy row i
        size_t row_size = _(4)_;
        memcpy(data_packet + offset + (i * row_size), _(5)_, row_size);
    }
    /* Skipped code that includes round ... */
    return data_packet; // packet ready for transmission
}

A.
Fill in the missing code fragments for serialize that are highlighted above using notation such as "_(1)_". Assume the application is running on a 64-bit system. Include a short explanation for each piece of missing code.

B. Suppose that marshalling and unmarshalling stubs take 5 ms each on the client and server. Further suppose that the network has a one-way latency of 10 ms and a bandwidth of 240 kbps. Calculate the total delay from the moment Michael completes his move to the moment Jimmy can start thinking about his move. Show your work.

C. Realizing that transmitting the entire board state can be costly in time, Ruiqi asks you to help her transmit moves more efficiently. Suppose each move can modify at most 10 squares of the board. What is your advice to Ruiqi? Give an informal explanation first, then show the data structure that you would use to transmit a move.

D. Suppose the average number of board squares that change per move is 5. However, the marshalling/unmarshalling times of the stubs do not change. Compute the average size of data transmitted per move, and the average time delay for transmitting a move, using:
a. the original encoding of game state, and
b. your more efficient encoding of game state.

E. Suppose the rules of the game are changed dramatically. As a result, the average number of board squares that change per move is 5000 rather than just 5. How does your answer to (D) change?

F. Based on your calculations for (D) and (E), what insight can you offer about efficient maintenance of shared state between a client and a server?

G. After many months of happy game playing with the original code (i.e., as shown in the pseudocode above for (A)), Jimmy upgrades his computer. Unlike his old machine, which was based on an Intel chip, the new one is based on a non-Intel chip. Alas, his games with Michael after the upgrade are disastrous. The games are totally messed up, and sometimes the game software crashes for Michael or Jimmy.
They are dazed and confused. Can you help them? How could the hardware upgrade have resulted in such a problem? How can the problem be fixed without any further hardware changes?

Question 2 (20 points)

Helen is a boba recipe creator. After a prolific session of creating many new recipes, she needs to send them all to a central boba server for safekeeping. She is considering two different protocols:

● Stop-and-go protocol: Helen sends a single recipe and then waits to receive an ACK from the server. She waits for at most T seconds. If no ACK arrives within time T, Helen retransmits that recipe. She repeats this indefinitely, until the ACK arrives. She then proceeds to send the next recipe, and so on, until she receives the ACK for the last recipe.
● Blast protocol: Helen sends all the recipes at once and waits to receive a single ACK from the server. If no ACK arrives within time T, Helen retransmits all the recipes. She repeats this indefinitely, until the ACK arrives.

A. Helen is at a crowded cafe near the boba shop with a one-way latency of 10 ms and a bandwidth of B bps.
B. Helen is on Mars with a bandwidth of B bps and a one-way latency of L seconds.
C. For each of the above situations, which protocol is better? Explain why that is the case.
D. For question (B), let's remove the assumption of perfect network reliability but leave everything else unchanged. If the probability of packet loss is high, which protocol should Helen choose? Explain your reasoning.

Question 3 (20 points)

Consider the following end-to-end pipeline for queries from your smartphone to ChatGPT.

● The query travels over Wi-Fi from your smartphone to a router.
  ○ Latency (one-way, symmetric): 10 ms
  ○ Bandwidth (symmetric): 10 Mbps
● The router forwards your query over Ethernet to a nearby cloudlet.
  ○ Latency (one-way, symmetric): 1 ms
  ○ Bandwidth (symmetric): 100 Mbps
● The cloudlet forwards the query to the ChatGPT server.
  ○ Latency (one-way, symmetric): 500 ms
  ○ Bandwidth (symmetric): 50 Mbps
● The ChatGPT server takes 200 ms to service any query, regardless of size.

A. Suppose that a query is 10 KB in size and the response is 5 KB. What is the total end-to-end delay of this pipeline?

B. ChatGPT also supports complex queries via file uploads. You upload such a complex query as a 30 MB file, which is sent as a back-to-back stream of 1000 equally sized packets (i.e., each packet is of size 30 KB). After receiving the entire file, the cloudlet realizes that it has to pass it on to ChatGPT. It again transmits the file as a back-to-back stream of 1000 equally sized packets to ChatGPT. After ChatGPT runs its model inference, it returns a response of size 5 KB. What is the total end-to-end delay for such a query?

D. Suppose you use Ethernet to directly connect your smartphone to the cloudlet, so you are not using Wi-Fi at all and avoid use of the router. The smartphone-to-cloudlet connection has 10 ms latency and 1 Gbps bandwidth. What is the total end-to-end delay for (B) now? Explain your answer.

Question 4 (15 points)

The US Postal Service creates a new service for secure and guaranteed delivery of mail. To use this service, the customer needs to provide the contents of the letter as a human-readable document, and (separately) its destination address. An encryption machine in the post office that receives the letter from the customer encrypts the contents, prints out the encrypted text using a printable encoding, puts that into an ultra-secure envelope addressed to the addressee, and then seals the envelope. The original provided by the customer is shredded. That sealed envelope is transported, without being opened, through the multi-hop nationwide Postal Service routes.
Unlike the flimsy envelopes used by customers, which are prone to splitting open and spilling their contents, these sealed envelopes are virtually indestructible even in the worst conditions. At the destination post office, the steps are reversed: the envelope is opened, the contents are decrypted by a machine and then printed onto a document. The human-readable printed document is then placed in an envelope addressed to the addressee, and sealed. The mail carrier delivers this envelope to the addressee.

For each of the following scenarios, state whether the US Postal Service is violating the end-to-end principle or not. In each case, explain your reasoning.

A. Caitlyn wants to order a new pair of shoes from FakeNike. She fills out a paper order form with the name of the shoes, her address, credit card details, etc. She then sends the filled-out form through the mail service described above. Later that day, she sees that the shoes have been further reduced in price, and decides to buy a second identical pair. She therefore places an identical order via the mail service. At the end of the day, when the receiving post office processes all its incoming mail, it notices two identical letters to the same addressee. In the interests of efficiency, it discards one of them. Think of the carbon footprint it has reduced by avoiding transport and delivery of the duplicate order! FakeNike only receives one order, and Caitlyn only receives one pair of shoes.

Question 5 (15 points)

A new social media app, Outstagram, created by CMU Qatar students, uses a centralized server located in Qatar for all of its users and data. It allows users to post and share pictures with each other via RPC to the server. Upon receiving an uploaded image, the server stores it in a database. Although initially designed for users local to Qatar, it has quickly gained popularity at CMU Pittsburgh and built a large user base there as well.
Unfortunately, these CMU Pittsburgh students are experiencing frequent long loading times while using Outstagram. They sometimes see an error screen saying "TCP timeout" when they try to upload pictures.

A. Give 3 plausible and distinct reasons for the slow loading times.

B. It turns out that even Qatar users are experiencing poor performance, despite excellent nationwide LAN and WLAN connectivity within Qatar. Which of your 3 reasons from answer (A) are still plausible now? Explain your answer.
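The delay questions above all combine the same ingredients: per-hop propagation latency, transmission time (message size divided by link bandwidth), and any fixed processing time such as marshaling or inference. A hedged Java sketch of that arithmetic follows; the class and method names are our own, and the 8 KB board size in the Question 1B example is a placeholder (the real size comes from part A's struct, which is not reproduced here).

```java
// Sketch of the per-hop delay arithmetic used in the questions above.
// All hop parameters come from the problem statements; names are our own.
public class DelayMath {
    // Time to push `bits` onto a link of `bps` bits/second, in milliseconds.
    static double transmitMs(double bits, double bps) {
        return bits / bps * 1000.0;
    }

    // One-way delay across a single store-and-forward hop:
    // propagation latency plus transmission time.
    static double hopMs(double latencyMs, double bits, double bps) {
        return latencyMs + transmitMs(bits, bps);
    }

    public static void main(String[] args) {
        // Question 1B shape: 5 ms marshal + one hop + 5 ms unmarshal.
        // The 8 KB board size is a placeholder, not the real struct size.
        double boardBits = 8 * 1024 * 8;
        double q1bStyle = 5 + hopMs(10, boardBits, 240_000) + 5;
        System.out.printf("Q1B-shaped delay: %.1f ms%n", q1bStyle);

        // Question 3A shape: sum the hops in each direction, add service time.
        double queryBits = 10 * 1000 * 8;  // 10 KB query, taking 1 KB = 1000 bytes
        System.out.printf("Wi-Fi hop: %.2f ms%n", hopMs(10, queryBits, 10_000_000));
        System.out.printf("Ethernet hop: %.2f ms%n", hopMs(1, queryBits, 100_000_000));
    }
}
```

For a full pipeline, sum `hopMs` over every hop for the query, add the server's service time, and repeat the hop sum for the response size on the way back.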


[SOLVED] 15.094 project 3- implementation and performance tuning of a scalable web service spring 2025 p0

Project 3: Implementation and Performance Tuning of a Scalable Web Service

Important Dates:

Submission Limits: 10 Autolab submissions per checkpoint without penalty (5 additional with increasing penalty)

You will learn to:
1. Identify potential bottlenecks in a distributed system.
2. Devise experiments to confirm the nature of the current bottleneck.
3. Devise techniques to alleviate the current bottleneck.
4. Understand resource versus performance tradeoffs.
5. Identify scaling signals.
6. Experience multidimensional optimization with multiple parameters.
7. Cope with the nondeterminism that affects even very simple distributed systems.

Introduction

A critical advantage of cloud-hosted services is elasticity: the ability to rapidly scale out a service without needing to purchase and install physical hardware. Cloud providers allow tenant services to add virtual servers on demand, enabling them to meet changes in load (the rate of arriving client requests). In this project, you will implement various techniques to scale out a simulated, cloud-hosted, multi-tier web service (a web storefront). You will then evaluate the system to understand which bottlenecks manifest at different loads, and use this information to decide when to scale out which components in order to improve performance. This is a critical skill to develop, as even a very simple distributed system (as implemented in this project) can have very complex performance behavior.

This project requires you to consider two types of scaling to meet client demand. First, you will look at scaling out a service by running multiple servers. Next, you will split the service into multiple tiers to improve performance. These tiers can themselves be scaled out independently. A critical question is: when should one scale out a tier by adding servers? This depends very much on the characteristics of the application itself.
For example, at a given load, if you determine that the bottleneck limiting performance is the middle tier, then adding more middle-tier servers will help. However, by doing this, you will likely shift the bottleneck elsewhere in the system, so adding even more middle-tier servers will not help. Perhaps the bottleneck is now in the front end, so you should consider adding more front-end servers. Or perhaps the bottleneck is now the database; in this case, adding more servers to either the front or middle tier will not help.

We will provide a simulated environment for your service to run in. It is implemented in Java, and your service must also be implemented in Java (see below for details). You are free to use any language you like to automate and analyze your performance experiments. You should turn in all of the code you write. This project only requires a little actual implementation work; expect to spend the majority of your time on testing and on developing tools to find bottlenecks and sources of inefficiency.

Your service is, notionally, an online store. Your code receives two types of requests from clients: browse requests, asking for information on available items and categories of items, and purchase requests, asking to buy an item. We provide a ServerLib class that handles the details of request processing; the code you must write is the service's main loop. A serial (one client at a time) version of this loop would cycle among three operations:
1. Waiting for a client to connect (ServerLib.acceptConnection method).
2. Reading a request from the client (ServerLib.parseRequest).
3. Processing the request and sending the response (ServerLib.processRequest).

Because of the way HTTP works, each client connection can transmit only one request. To make a sequence of requests (e.g., browse followed by purchase), the client must connect multiple times. processRequest takes care of closing the client's socket after it sends the response to that request.
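The three steps above can be sketched as a serial loop. This is a hedged, self-contained sketch: the real ServerLib is provided by the handout, so a tiny stub stands in for it here (the stub's types and signatures are not the real API), but the shape of the loop is the same.

```java
// Hedged sketch of the serial main loop described in the text.
// StubServerLib is a stand-in for the handout's ServerLib so the
// sketch compiles on its own; the real signatures differ.
import java.util.ArrayDeque;
import java.util.Queue;

public class SerialLoopSketch {
    static class StubServerLib {
        private final Queue<String> pending = new ArrayDeque<>();
        StubServerLib(String... requests) { for (String r : requests) pending.add(r); }
        String acceptConnection() { return pending.poll(); }  // null when no clients remain
        String parseRequest(String handle) { return "parsed:" + handle; }
        void processRequest(String request) { System.out.println("served " + request); }
    }

    public static void main(String[] args) {
        StubServerLib slib = new StubServerLib("browse", "purchase");
        String handle;
        // Serial loop: one client at a time, exactly the three steps above.
        while ((handle = slib.acceptConnection()) != null) {
            String req = slib.parseRequest(handle);  // slow in the real simulator
            slib.processRequest(req);                // slow; also closes the socket
        }
    }
}
```

As the handout stresses, this serial form cannot keep up with the simulated load; it is only the baseline that the checkpoints ask you to parallelize and scale out.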
The simulator generates requests from (simulated) clients following a semi-randomized pattern (see the discussion of rand_spec under "How to Use the Supplied Classes"). Each client makes a series of connections, and sends one or more "browse" requests, possibly followed by a "purchase" request. If any client request is dropped, takes too long, or results in an error, that client will give up on further requests, and we say it has "left the store unhappy." Your goal is to minimize the number of "unhappy" clients while simultaneously minimizing the amount of cloud resources consumed. To achieve this, you cannot use a simple serial main loop; you must scale to meet the load.

The simulated environment provides a simple "cloud services" API, with methods to start, stop, and get the status of simulated virtual machines (VMs). At the beginning of the simulation, two VMs will be running: the simulator itself (VM id #0) and the initial instance of your server (VM id #1). Your server can start more VMs, each of which will also run your server's code. The environment also provides a "back-end" database, which you will not have to interact with directly, and a load balancer that receives requests from the simulated clients and spreads them across all of your front-end VMs.

Unlike the previous two projects, it is possible to run the simulator on any computer with a Java runtime. We still recommend that you do your work on the Andrew compute cluster (unix.andrew.cmu.edu). Since the load on your service is simulated, it is fine to run experiments on the Andrew cluster even when many other students are using it. In the past, students have successfully completed this project by working on a native installation of GNU/Linux on one of their own computers instead of the Andrew cluster. However, working within some other operating system (e.g., macOS, Windows, or Linux in a hosted virtual machine) tends to make your performance measurements disagree with Autolab's measurements.
Remember that Autolab has the final say regarding your code's performance, and that course staff can only help you with problems that can be reproduced on the Andrew cluster or within Autolab.

Requirements and Deliverables

We will provide:
● The simulated cloud service, which provides a mechanism for starting and stopping "VMs" (these are actually implemented as processes), a load balancer, a database, and a Java RMI registry.
● The simulated workload (generation of requests from clients).
● The simulated responses to client requests.
● A Java class, called ServerLib, which provides methods for accessing the simulated cloud services, registering with the simulated load balancer, and receiving and processing client requests.

You will create:
● The server main loop. The simulator expects this to take the form of a Java class called Server, which provides the standard Java program entry point (public static void main(String[] args)). In this main function, you should instantiate ServerLib and use it to get and process requests from clients. Each checkpoint requires you to implement the main loop somewhat differently; see below for details.
● Benchmarking experiments, to determine the most efficient way to scale your server to the simulated load in each phase of the project.
● Reports on your design and the results of your benchmarks.

Your server should do the following:
● As described above, you must provide a Server class with a program entry point method (main). The simulator will invoke this entry point (in a separate process from the simulator itself) once at the beginning of the simulation, and again (in another separate process) each time you call ServerLib.startVM. Every invocation of main will receive the same set of arguments. The VM id will be different for each instance, but you cannot, for instance, supply additional arguments to main when you call startVM.
● Your server should automatically start additional VMs as needed to handle the load.
Your goal, as described above, is to ensure that most clients don't time out. Clients will expect browse requests to be serviced within 1 second, and purchases within 2 seconds. To simulate the work that needs to be done in the front and middle tiers of a real web application, both parseRequest and processRequest are very slow. It is not possible to handle the simulated load using only a single VM. Starting new Server instances with startVM is also slow, to simulate the cost of booting an entire virtual machine.
● Your server should also automatically stop VMs that are not currently needed, to conserve resources. You can either have the Server main loop shut itself down, or have another instance use ServerLib.endVM to terminate extra instances. The ServerLib API has been designed to make it easy to tell whether a VM is idle. VMs that are waiting for acceptConnection to return are idle. VMs that are waiting for another VM to call an RMI remote object method are also idle. VMs that are doing simulated work inside parseRequest or processRequest, on the other hand, are busy.

Submission and grading:
● You will be graded on the correctness and performance of your system. The number of clients that time out or are explicitly dropped will be assessed, along with the total cloud cost (total VM time).
● This project will use an autograder to test your code. See below on how to submit.
● You need to submit a plot (PDF) of benchmarking results with Checkpoint 1. See below.
● You need to submit a short (1 or 2 pages) document detailing your design with Checkpoint 3. See below.
● The late policy will be as specified on the course website, and will apply to the checkpoints as well as the final submission.
● Coding style should follow the guidelines specified on the course website.
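One way to act on the start-VM/stop-VM requirements above is a simple threshold rule on the request backlog. The sketch below is a hedged illustration: the class name and the "4 requests per VM" figure are made up (your real thresholds should come from benchmarking), and in the real system the inputs and actions would come from ServerLib.getQueueLength, startVM, and endVM.

```java
// Hedged sketch of a scale-out/scale-in decision rule.
// Thresholds are invented for illustration; tune them by benchmarking.
public class ScalePolicy {
    // Desired number of VMs given the current request backlog:
    // one VM per `perVm` queued requests, never fewer than one VM.
    static int desiredVMs(int queueLength, int perVm) {
        return Math.max(1, (queueLength + perVm - 1) / perVm);  // ceiling division
    }

    // +k means start k more VMs, -k means retire k idle VMs, 0 means hold steady.
    static int adjustment(int runningVMs, int queueLength, int perVm) {
        return desiredVMs(queueLength, perVm) - runningVMs;
    }

    public static void main(String[] args) {
        System.out.println(adjustment(1, 9, 4));  // backlog of 9, 4 per VM: start 2 more
        System.out.println(adjustment(4, 2, 4));  // mostly idle: retire 3
    }
}
```

Because booting a VM is slow, a real policy would also want hysteresis (e.g., separate thresholds for scaling out and in) so it does not thrash when the queue hovers near a boundary.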
In checkpoint 1, you will implement basic horizontal scaling-out of the "web server." The simulated load will be light enough that you won't need the three-tiered architecture shown in the figure at the top of "Requirements and Deliverables." Instead, your Server class can implement all three steps of the service main loop within a single "VM." However, you will need to be able to launch additional VMs as required to meet client demand; a serial implementation, as described in the introduction, will not be enough. You will be graded on the number of unhappy clients (those whose requests are dropped, take too long, or incur other errors), as well as the total resources (VM time) consumed by your system. (12%)

As performance testing is an important aspect of designing a scaling system, you will also be graded on a simple benchmark. Run your system on a trace described by a rand_spec of "c1000-sss" (see "How to Use the Supplied Classes," below), where sss is any random seed you wish. First run with just 1 server and measure the number of unhappy clients. Then repeat for 2, 3, 4, … servers. Create a plot with the number of unhappy clients on the Y-axis and the number of servers on the X-axis (make sure to label the axes of your plot and ensure that it is readable). Submit to Autolab a PDF that contains this plot and a one-paragraph description of the results. Be sure to note the shape of the curve, any "knee" or bend in the curve, and explain what the plot suggests is a good number of servers for this load. (5%)

[Figure: Diurnal load curve]

Checkpoint 2 increases the challenge in two ways. First, the load will no longer be predictable based on the time of day, although it will remain at the same rate over any one test run. Therefore, you will need to implement dynamic scaling to adapt to the load you observe in each test run.
Second, the load will be high enough that you will need the three-tier architecture shown in the figure at the top of "Requirements and Deliverables." Specifically, because both parseRequest and processRequest are very slow, it will be more efficient to have a "tier" of VMs that call acceptConnection and parseRequest but not processRequest, and a second tier of servers that only call processRequest. You will need to use RMI to pass requests from the first tier to the second tier, and you will need to scale out each tier to meet the load experienced by that tier.

Remember that you should not use Autolab as an oracle. You should test your code with your own performance experiments, and only submit to Autolab when you are confident that your code can handle anything the autograder might throw at it, within the limits described above. As with checkpoint 1, you will be graded on the number of unhappy clients (those whose requests are dropped, take too long, or incur other errors), as well as the total resources (VM time) consumed by your system.

Checkpoint 3 increases the challenge again: not only will the autograder use average loads even higher than in checkpoint 2, but the load will no longer remain at the same level over any one test run. Your scaling logic will need to react "on the fly" to changing conditions. The average client arrival rate might go either up or down (or even oscillate!), so you will need to be able to start and stop VMs to match the observed load. Again, you will be graded on the number of unhappy clients (those whose requests are dropped, take too long, or incur other errors), as well as the total resources (VM time) consumed by your system.
(28%)

In your final submission for this checkpoint, you will also need to write and submit a 1–2 page document describing the major design aspects of your project, including how you coordinate the roles of the different server instances, how you decide how many servers in each tier to run, and when you decide to add or remove servers. You are encouraged to include any plots from your benchmarking that indicate how many servers are needed for different arrival rates. Discuss what you have learned about scaling a service by adding tiers and by scaling out the tiers. Highlight any other design decisions you would like us to be aware of. Please include this as a PDF in your final tarball. (10%)

The code in your final submission will be graded for clarity and style. (10%)

Submission Process and Autograding

We will be using the Autolab system to evaluate your code. Please adhere to the following guidelines to make sure your code is compatible with the autograding system. First, untar the provided project 3 handout into a private directory not readable by anyone else (e.g., ~/private in your AFS space):

cd ~/private; tar xvzf ~/15440-p3.tgz

This will create a 15440-p3 directory with the needed libraries, classes, and test tools. In the doc subdirectory, you will find detailed documentation in HTML format for the provided Java classes. Create a subdirectory of the 15440-p3 directory; this will be your working directory. You can name it anything you like, but you must do your work in a subdirectory; the autograder will not be able to build your code if you try to work in the top level of 15440-p3. Write your code and a Makefile in your working directory. You must use a makefile to build your project. See the included sample code for an example.
You will need to add the absolute path of your working directory and the absolute path of the lib directory to the CLASSPATH environment variable, e.g., from your working directory:

export CLASSPATH=$PWD:$PWD/../lib

The autograder will expect to be able to build your server and all support classes by simply running make in your working directory. Make sure the .java and generated .class files are in your working directory (not a deeper subdirectory). Furthermore, you must not place your classes in a Java package; leave them in the default package. Finally, as discussed above, your server class must be named Server and it must implement main. This naming convention and these relative file locations are critical for the grading system to build and run your code.

To hand in your code, from your working directory, create a gzipped tar file that contains your makefile and sources. It should unpack into the current directory, not a subdirectory. For example:

tar cvzf ../mysolution.tgz Makefile Server.java HelperClass.java …

Make sure that you include all of the .java files you have written, and any other files that are relevant for this checkpoint. However, do not include .class files or any other files that are generated by running make. Also, do not include any files from the handout; the autograder already has those files. Remember to include your .pdf writeup for checkpoints 1 and 3. You might find it useful to add a goal to your Makefile that creates this tar file.

You can then log in to https://autolab.andrew.cmu.edu using your Andrew credentials. Submit your tarball (mysolution.tgz in the example above) to the Autolab site. Alternatively, if you are working on the Andrew cluster, there is a command-line tool named autolab that you can use to submit your tarball directly. Each checkpoint shows up as a separate assessment on Autolab. You are encouraged to use version control to record your progress.
However, if you back up your VCS history to a public "forge" (e.g., GitHub), make sure to use a private repository to avoid any possibility of AIVs.

How to Use the Supplied Classes

Cloud class

The entry point for the simulator is the Java class called Cloud, provided by us. To run the simulator, ensure your CLASSPATH is set correctly (including both the lib directory and the directory with your Server class), then execute:

java Cloud <port> <db_file> <time_of_day> [<duration>] <rand_spec>

Here, <port> specifies the port that the Java RMI registry should use. The <db_file> parameter specifies a text file that will be loaded as the contents of the backend database; a sample file (db1.txt) is provided in the lib directory. The <time_of_day> parameter is the time of day to be reported by the simulation (0–23). The optional <duration> parameter indicates the number of seconds to run the experiment; the default is 30 seconds. Finally, <rand_spec> indicates the arrival pattern for the simulated clients. This is a code string, which can take several different forms:

c-xxx-sss: Constant arrival rate with interarrival time xxx ms and random seed value sss.

u-aaa-bbb-sss: Uniformly random interarrival times, between aaa ms and bbb ms; random seed value sss.

e-aaa-sss: Exponentially distributed random interarrival times, mean aaa ms, random seed value sss.

spec1,duration1,spec2,duration2,…,specN: Use multiple specifications, one after another. spec1, spec2, etc. must each be one of the first three forms. spec1 will be used for duration1 seconds, followed by spec2 for duration2 seconds, etc., and finally specN for whatever time is left in the simulation (notice that this final spec does not have a duration specified). The sum of all the durations should be less than the overall <duration> argument.

The simulator will create a Java RMI registry, a database, and a load balancer, and then it will invoke your Server class in a separate "VM" (actually just a separate process). Then it will run the simulation for the specified duration.
Finally, it will print out a report including each client's status, the total revenue of the store, and the total VM time used. Possible client results include: failure to connect, timeout, explicitly dropped, "ok" (successful browsing, no purchase attempted), successful purchase, and bad purchase (client timed out but the purchase went through anyway). Your goal is to maximize the number of clients that report either "ok" or a successful purchase, while minimizing the total VM time used.

ServerLib class

The ServerLib class is your interface to all of the simulated cloud services, and it also provides methods for receiving and handling requests. You must construct an instance of this class in each of your server VMs. Its constructor requires the IP address and port of the Java RMI registry created by the simulator; these are provided to your Server class as command-line arguments. The APIs provided by ServerLib are outlined below. For more detail, see the JavaDoc files in the doc subdirectory of the handout.

ServerLib methods for simulated cloud services:

int startVM()
Launch another "VM." In that VM, your Server class's main method will be called, with the same RMI registry address and port, but a unique VM id. Returns the ID of the new VM. Keep in mind that the new VM will not be available immediately; booting takes time.

void endVM(int id)
Forcibly stop the VM indicated by id. You should only use this method to stop a different VM than the one making the call. To stop the VM that would make the call, unexport all of its remote objects and return from main.

Cloud.CloudOps.VMStatus getStatusVM(int id)
Get the status of the VM indicated by id. Returns a code from the Cloud.CloudOps.VMStatus enumeration; possible values are NonExistent, Booting, Running, or Ended.

float getTime()
Returns the simulation time of day, in hours and fractional hours (range: [0, 24)).

There is intentionally no method to get the VM id of the currently running VM.
This value is passed as the third argument to your main method when it is started.

ServerLib methods for front-end server operations:

boolean registerFrontend()
boolean unregisterFrontend()

ServerLib.Handle acceptConnection()
Get the next incoming connection from the load balancer. Returns a handle to the connection; this is an opaque object, which you can think of as a socket descriptor.

Cloud.FrontEndOps.Request parseRequest(ServerLib.Handle h)
Read and parse a request from a client connection. Takes ownership of the Handle passed in. This method takes a long time to complete.

int getQueueLength()
void dropHead()

void dropConnection(ServerLib.Handle h)
Drop (intentionally discard) a client connection that has been returned by acceptConnection but not yet passed to parseRequest.

ServerLib methods for middle-tier (aka application server) operations:

void processRequest(Cloud.FrontEndOps.Request r)
Process request r, send back the reply, and close the connection. This method takes a long time to complete.

void dropRequest(Cloud.FrontEndOps.Request r)
Drop (intentionally discard) the request r.

Notes and Hints

1. Remember that the focus of this project is performance tuning and analysis. You should not have to write a lot of code for your Server implementation. Instead, expect to spend the majority of your time on testing and on developing tools to find bottlenecks and sources of wasted VM time.
2. Remember that a client will go away unhappy if its request was dropped, took too long to get a reply, or encountered some error. Happy clients are those that end up with status "purchased" or "ok"; all others are unhappy clients. You need to try to minimize unhappy clients.
3. Repeat your experiments (especially for the randomized-arrival workloads).
4. You should try running your system at various loads (adjust the rand_spec value to change the arrival rate of clients) to determine how many clients you can handle with different numbers of servers.
5.
You should use Java RMI to communicate between different instances of your Server class. You do not need to create your own registry, though; you can use the one set up by the Cloud class.
6. VMs that have instantiated objects with RMI interfaces (extending UnicastRemoteObject) will not stop upon returning from main. They will continue running until forcibly stopped (by endVM) or until all of the RMI interfaces have been unexported. One way to deal with this is to include a "shutdown" method in each RMI interface, like this:

public void shutdown() {
    UnicastRemoteObject.unexportObject(this, true);
}

8. The drop methods exist because you might get into a situation where there is no way to handle all the pending clients before they time out. Explicitly shedding load allows you to avoid doing work for clients that you know you cannot serve fast enough.
9. The arrival rate in the diurnal load curve, and as specified in the rand_spec parameter to the Cloud class, refers to arrivals of new clients. This is not the same as the request arrival rate.
10. The time-of-day parameter to the Cloud class specifies the simulated time of day (just the hour, from 0 to 23). This is the value that getTime will return. It is not used in any other way; it is useful only to let your code know which part of the diurnal load curve to use to select the number of servers. When you test locally, you must provide a rand_spec value to set the client arrival rate; it will not be set automatically based on the time-of-day parameter.
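Hints 5 and 6 above describe the RMI plumbing that checkpoint 2's two-tier design needs. A hedged sketch of the front-tier to middle-tier hand-off follows; the interface and class names are our own, and the real request type (Cloud.FrontEndOps.Request) is stubbed as a String so the sketch stands alone.

```java
// Hedged sketch of the front-tier -> middle-tier RMI hand-off.
// Names are our own; the real request type is Cloud.FrontEndOps.Request.
import java.rmi.Remote;
import java.rmi.RemoteException;

public class TierSketch {
    // Remote interface the middle tier would export via the simulator's registry.
    interface MiddleTier extends Remote {
        void handle(String request) throws RemoteException;  // real code: processRequest(r)
    }

    static class MiddleTierImpl implements MiddleTier {
        int handled = 0;
        public void handle(String request) {
            handled++;  // real code would call slib.processRequest(r) here
        }
    }

    public static void main(String[] args) {
        // In the real system the middle tier exports itself (UnicastRemoteObject)
        // and binds a name in the Cloud-provided registry; the front tier then does
        // something like: MiddleTier mt = (MiddleTier) registry.lookup(name);
        // Here we call the implementation directly to show the hand-off shape.
        MiddleTierImpl mt = new MiddleTierImpl();
        mt.handle("browse");  // front tier: acceptConnection + parseRequest, then forward
        System.out.println("forwarded " + mt.handled + " request(s)");
    }
}
```

Per hint 6, a middle-tier instance that exports such an object must unexport it (e.g., via a shutdown method) before returning from main, or the process will keep running.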


[SOLVED] 15.094 project 4- two-phase commit for group photo spring 2025 p0

Project 4: Two-phase Commit for Group Photo Collage

Important Dates:

Submission Limits: 10 Autolab submissions per checkpoint without penalty (5 additional with increasing penalty)

You will learn to:
1. Implement a distributed transaction using two-phase commit.
2. Deal with lost and delayed messages.
3. Handle and recover from node crashes and network failures.
4. Utilize logging to persistent storage for failure recovery.

Introduction

At the dawn of photography, in the late 19th and early 20th centuries, group photos were a way of capturing the spirit of an important occasion in people's lives. The technology was primitive by today's standards: cameras were large and bulky, it took many seconds or tens of seconds to capture a picture, the chemical processes were slow and messy, and so on. At the same time, you could be sure that a group photo accurately captured reality, since it was very difficult to forge the content of photographs. A group photograph was thus a true record of reality. A group photo of the Solvay conference in 1927 (Figure 1) shows Albert Einstein, Marie Curie, Paul Dirac, Werner Heisenberg, and many other familiar names from your physics textbooks.

[Figure 1: Solvay conference. Figure 2: LinuxWorld award]

While mobile computing and imaging technology has changed beyond recognition, people have not changed much. They still love to record important and fun occasions in their lives, spent in the company of their friends. A group photo in the 21st century looks very different (Figure 2), but is no less important in people's lives.

In this project, you will build part of a system that generates and publishes group collages assembled from multiple images contributed by multiple individuals. The system consists of several UserNodes, each of which represents a single person's smartphone or laptop, and a single Server, which coordinates the publication of collages. The process for publishing a collage follows these steps:
1.
Someone constructs a candidate collage made from images shared by the UserNodes, and posts it to the Server.
2. The Server initiates a two-phase commit procedure, letting the users that contributed images to the collage examine it to see if they are happy with the result.
3. UserNodes either approve or disapprove of it.
4. Only if all UserNodes that contributed an image to the collage approve (i.e., the two-phase commit succeeds) is the collage published (written to the Server's working directory).
5. A successful commit must also produce particular side effects on the UserNodes: a UserNode is required to stop sharing (i.e., remove from its working directory) any of its images that appear in a published collage.

This system needs to be robust to lost messages and to node failures and reboots. Furthermore, your system needs to be able to process multiple collage commits concurrently.

Requirements/Deliverables

We will provide:
● Several components for this project. These require that the project be implemented in Java, in a Unix/Linux environment, e.g., the Andrew Unix servers.
● A Java class (Project4) that sets up the working environment and launches the system components (Server and UserNodes) as separate processes.
● A Java class (ProjectLib) that provides a simple UDP-like datagram messaging service, a mechanism to get the next candidate collage (at the Server), and a method to ask a user whether they approve an image (at a UserNode).

You will create:
● A class called "Server" that implements a main() method. The Server will coordinate and perform two-phase commits on candidate collage images.
● A class called "UserNode" that implements a main() method. The UserNodes will participate in the two-phase commits started by the Server.

Your code should do the following:
● Your Server class and UserNode class will implement main() methods.
A single instance of Server and multiple instances of UserNode will be run by the Project4 class as separate processes at the beginning of a test.
● Your Server will be provided a single command-line argument: <port>. This specifies the port used by the Project4 class, and needs to be provided to the constructor of ProjectLib.
● Your UserNode will be provided two command-line arguments: <port> and <id>. <port> specifies the port used by the Project4 class. <id> is the "address" of this UserNode instance. Both of these are passed to the constructor of ProjectLib.
● Your Server class, or another class it uses, needs to implement the CommitServing interface. This interface defines a single method (startCommit) that will be called to inform your Server that a new candidate collage has been posted, and to indicate which source images contributed to it. This should cause a new two-phase commit operation to begin.
● Both the Server and UserNode should instantiate ProjectLib. The Server should supply its CommitServing implementation when doing so.
● The Server and UserNodes must use ProjectLib's send and receive methods to communicate. You must not use any other communication methods (e.g., Java RMI or sockets). You should not use the filesystem as a means to communicate or to perform actions on behalf of other entities.
● Your code should handle lost messages. You can assume that a message will take no more than 3 seconds to be delivered if it is not lost.
● A new candidate collage will be posted to your Server using the startCommit method. The arguments to this will provide the filename and contents of the collage, and an array of strings indicating the source images. Each string is of the form "address:filename", where address is the id of the UserNode that owns the image, which has the given filename at that UserNode.
● A UserNode can ask the "user" if a candidate collage is acceptable using the askUser method of ProjectLib. If the return value is true, then the user is happy with the collage.
● A UserNode should ensure that each of its images can appear in no more than one committed collage.
● If a collage is committed, then it should be written to the Server's working directory with the given filename. All UserNodes should remove the source images that contributed to the committed collage from their working directories.

Submission and grading:
● You will be graded on the correctness of your system. The test cases will verify that the right set of collages is added to the Server working directory, and that the corresponding source images are removed from the UserNode working directories.
● Although performance is not critical, you should limit the number of messages sent. Sending too many messages will also fail the test cases.
● This project will use an autograder to test your code. See below on how to submit.
● You need to submit a short (1-2 pages) document detailing your design. See below.
● The late policy will be as specified on the course website, and will apply to the checkpoints as well as the final submission.
● Coding style should follow the guidelines specified on the course website.

Checkpoint 1 requires you to implement a two-phase commit to publish the collages. Your code should be able to handle both successful and unsuccessful commit operations. It should allow multiple candidate collages to be processed in parallel. Committed collages should be written to the Server's working directory, and any images used in committed collages should be removed from the UserNode directories. Node failures or message failures will not be tested.

The final submission needs to handle failures. It should be able to handle the loss of any messages. The Server or any of the UserNodes can be killed and restarted at any time. These components should recover in the manner described in lectures and this writeup.
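The two-phase commit round that startCommit must kick off can be sketched independently of the course library. The following is a minimal single-process simulation; the Participant interface, the Coordinator class, and the method names are illustrative stand-ins, not part of the provided ProjectLib API. A real implementation exchanges ProjectLib messages (and tolerates their loss) rather than calling participants directly.

```java
import java.util.*;

// Minimal in-memory sketch of one two-phase commit round.
// "Participant" stands in for a UserNode; real code would exchange
// ProjectLib messages instead of making direct method calls.
interface Participant {
    boolean prepare(String collage);   // phase 1: vote yes/no (cf. askUser)
    void commit(String collage);       // phase 2: make side effects durable
    void abort(String collage);        // phase 2: discard tentative state
}

class Coordinator {
    // Returns true if the collage was committed, false if aborted.
    static boolean twoPhaseCommit(String collage, List<Participant> nodes) {
        boolean allYes = true;
        for (Participant p : nodes) {            // phase 1: collect votes
            if (!p.prepare(collage)) { allYes = false; break; }
        }
        for (Participant p : nodes) {            // phase 2: broadcast decision
            if (allYes) p.commit(collage); else p.abort(collage);
        }
        return allYes;
    }
}

public class TwoPCDemo {
    public static void main(String[] args) {
        Participant yes = new Participant() {
            public boolean prepare(String c) { return true; }
            public void commit(String c) { System.out.println("commit " + c); }
            public void abort(String c)  { System.out.println("abort " + c); }
        };
        Participant no = new Participant() {
            public boolean prepare(String c) { return false; }
            public void commit(String c) { }
            public void abort(String c)  { System.out.println("abort " + c); }
        };
        System.out.println(Coordinator.twoPhaseCommit("c1.jpg", List.of(yes)));
        System.out.println(Coordinator.twoPhaseCommit("c2.jpg", List.of(yes, no)));
    }
}
```

Phase 1 is where a UserNode would call askUser; the phase-2 decision must be delivered to every participant, including ones that voted no.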
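Recovering after a kill/restart requires each node to record its two-phase commit decisions on stable storage before acting on them. Below is a minimal write-ahead-log sketch; the class name and the one-record-per-line format are invented for illustration, and a real node would rely on the fsync behavior of the project environment rather than raw FileDescriptor.sync().

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

// Minimal write-ahead log sketch: one decision record per line.
// The record format ("COMMIT <filename>" / "ABORT <filename>") is
// illustrative, not something the handout prescribes.
public class DecisionLog {
    private final Path path;

    DecisionLog(Path path) { this.path = path; }

    // Append a record and force it to stable storage before returning.
    void record(String decision, String collage) throws IOException {
        try (FileOutputStream out = new FileOutputStream(path.toFile(), true)) {
            out.write((decision + " " + collage + "\n").getBytes());
            out.getFD().sync();            // like fsync(): survive a crash
        }
    }

    // On restart, replay the log to rebuild the outcome of each commit.
    Map<String, String> replay() throws IOException {
        Map<String, String> outcomes = new HashMap<>();
        if (!Files.exists(path)) return outcomes;
        for (String line : Files.readAllLines(path)) {
            String[] parts = line.split(" ", 2);
            if (parts.length == 2) outcomes.put(parts[1], parts[0]);
        }
        return outcomes;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("2pc", ".log");
        DecisionLog log = new DecisionLog(p);
        log.record("COMMIT", "collage1.jpg");
        log.record("ABORT", "collage2.jpg");
        System.out.println(log.replay().get("collage1.jpg"));
        System.out.println(log.replay().get("collage2.jpg"));
    }
}
```

On restart, replaying such a log tells the node which collages were decided, so it can redo incomplete side effects (writing the collage, deleting source images) without re-running the vote.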
(40%)

You will also need to write and submit a 1-2 page document describing the major design aspects of your project, including your protocol between Server and UserNodes, timeout thresholds, how you handle lost messages, and how you recover from node failures. Highlight any other design decisions you would like us to be aware of. Please include this as a PDF in your final tarball. (10%)

Your final source code will also be graded on clarity and style. (10%)

Submission Process and Autograding

We will be using the Autolab system to evaluate your code. Please adhere to the following guidelines to make sure your code is compatible with the autograding system.

First, untar the provided project 4 handout into a private directory not readable by anyone else (e.g., ~/private in your AFS space): cd ~/private; tar xvzf ~/15440-p4.tgz This will create a 15440-p4 folder with the needed libraries, classes, and test tools. You should create your working directory in the 15440-p4 directory. It is important that, from your working directory, the provided Java classes are available at ../lib.

Write your code and Makefile in your working directory. You must use a makefile to build your project. See the included sample code for an example. You will need to add the absolute path of your working directory and the absolute path of the lib directory to the CLASSPATH environment variable, e.g., from your working directory: export CLASSPATH=$PWD:$PWD/../lib Ensure that by simply running "make" in your working directory, your Server, UserNode, and support classes are built. Please use the names "Server" and "UserNode" for the two required classes. Make sure the .java and generated .class files are in your working directory (i.e., not in a subdirectory). Your Server and UserNode classes should implement main. Do not place your classes in a Java package! Leave them in the default package.
This naming convention and these relative file locations are critical for the grading system to build and run your programs.

To hand in your code, from your working directory, create a gzipped tar file that contains your makefile and sources. E.g., tar cvzf ../mysolution.tgz Makefile Server.java UserNode.java … Of course, replace these with your actual files, and add everything you need to compile your code. If you use subdirectories and/or multiple sources, add these. Do not add any files generated during compilation (e.g., the .class files) — just the clean sources. Also, do not add the class files that we have provided — these will be installed automatically when grading. To work correctly with Autolab, when extracted, your tarball should put the Makefile and sources in the current working directory.

You can then log in to https://autolab.andrew.cmu.edu using your Andrew credentials. Submit your tarball (mysolution.tgz in the example above) to the Autolab site. Note that each checkpoint shows up as a separate assessment on the Autolab course page. For your final submission, include your write-up as a PDF file in your tarball.

Recovering From Failures and Lost Messages

How to Use the Supplied Classes and Files

ProjectLib Class

We will provide a class called ProjectLib. This is the main library that you will use for this project. It provides several methods that emulate OS functions needed for communication and for performing transactions locally.

Communications are based on a unidirectional, unreliable datagram service, conceptually similar to UDP. ProjectLib.Message is a simple message class that includes an address (String) and a message body (byte array). When sending a message, the address should be set to the destination node's id; when receiving, it contains the sender's node id. ProjectLib provides a sendMessage() method for sending a message to another node. No information about success / failure is provided by this method.
The getMessage() method is a blocking call that pulls the next message out of the node's receive queue. The receive queue is FIFO, but since messages can be delayed or dropped before being put in the queue, the order of receipt is not guaranteed. Messages are never corrupted — only delayed or lost.

Instead of using the receive queue and the blocking getMessage call, your code can use a callback mechanism to receive messages. To do this, you need a class that implements the ProjectLib.MessageHandling interface, which defines the callback function prototype. To register the callback, simply supply a reference to this class as an additional parameter to the ProjectLib constructor (see below).

Project4 Class

The Project4 class is used to start the test environment, launch your Server and UserNodes, and run through a test scenario. To run the program, ensure your CLASSPATH is set correctly (and includes absolute paths to both the lib directory and the directory with your compiled classes), then execute: java Project4 <port> <script file> Here, <port> specifies the port that Project4 will use internally to communicate between components. The <script file> contains a set of commands used in a particular test, indicating the set of UserNodes to start, the candidate images to commit, which messages are delayed or dropped, and which nodes, if any, fail or are restarted. Project4 will launch your Server and UserNodes as separate processes, running in separate working directories. It expects that these directories have already been created and pre-populated with the image files at each node. Project4 will produce a unified stdout/stderr from all of the processes launched.

Test.tar

We provide a set of test images and scripts in test.tar. Please untar this to create a directory called test. This will contain a set of subdirectories that will contain images and will be the working directories for your UserNodes. There will be an empty directory for the Server.
In addition, a set of candidate collage images will be provided, along with a set of scripts for running Project4. To run one of the scripts, cd into the test directory and run Project4 (see above), supplying the appropriate script file on the command line. Running Project4 will clutter the test directory — your Server and UserNodes will create and delete files, and the fsync() operation will create backup copies of file state. To run another test, we recommend that you recursively delete the entire test directory and extract a clean copy from test.tar.

Custom Script Files

We provide a few example scripts in test.tar to get you started. We encourage you to construct your own scenarios and write corresponding script files for testing your system. The script file is a simple text file that lists a sequence of commands that the Project4 class will execute. The class uses a global notion of time (specified in milliseconds). Lines starting with '#' are comments and are ignored. Likewise, blank lines are ignored. The test ends when the end of the script file is reached. All other lines must be one of the following commands:
● start <node1> <node2> … Starts / restarts the node(s) named in the list. You should have one node named "Server", which will run your Server code. Other nodes can be named anything; these will run your UserNode code. The nodes will run in subdirectories of the same name. When restarting a node, the state of the directory will be set to what it was when the last call to fsync was made.
● kill <node1> <node2> … Immediately kills the node(s) in the list.
● restart <node1> <node2> … Equivalent to kill followed by start.
● commit <collage> <source1> <source2> … Tells the Server to start a 2PC for a new collage. <collage> is of the form "name=path" or "path". Name is optional, and indicates the filename of the collage output image that will be written to the server's directory on successful commit. Path is the actual image corresponding to the collage. <source1>, <source2>, etc. are the list of contributing source images.
Each <source> is of the form "id:filename", indicating the node id and the name of the image file in that node's directory.
● setDelay <src> <dst> <delay> Sets the one-way message delay from <src> to <dst> to <delay> milliseconds. Note that this is not symmetric: if you want both directions to be changed, you need to set each direction separately. The node ids can also be "*", which indicates any node. E.g., to set the default delay (i.e., any node to any node) to 100 ms, use: setDelay * * 100 To indicate that messages are to be dropped, set <delay> to -1. E.g., to start dropping messages from B to the Server, use: setDelay B Server -1 These commands take effect immediately and are applied to any matching messages sent afterwards (not those in flight).
● wait <ms> Pauses the script for <ms> milliseconds, allowing the Server and UserNodes to run / make progress. Without the addition of waits, commands are executed back to back with no delay (effectively, they happen all at once). You should put wait commands after start commands to give nodes a chance to start running. Likewise, remember to put a wait at the end of the script to let execution complete. At the end of the script, Project4 terminates, immediately killing all of the nodes as well.

It is recommended that you set a default message delay right at the beginning of your script. Remember to add waits between commands that should be spaced in time, and to add a wait at the end to allow execution to complete.

Notes / Hints

● You should not use any form of communication between your nodes except through the ProjectLib messaging services. This means no RMI, no sockets, no sharing files or peeking into the working directories of other processes, etc.
● You can use the asynchronous callback mechanism and the blocking getMessage() to receive messages at the same time. When a message arrives, your callback will be called.
If it does not want / know what to do with the message, it can return false, and the message will be put into the receive queue, where the blocking getMessage method can retrieve it. ● Recall that two-phase commit does not provide guarantees on termination. To avoid blocking forever, you will need to implement timeouts for various parts of your two-phase commits. You will need to determine reasonable timeout intervals for your system. You can assume that a message that is not lost is guaranteed to arrive within 3 seconds of being sent. ● Because your code is expected to write and delete files to a set of directories, it is recommended that you extract a fresh copy of the test.tar file for each test run.
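Putting the script commands above together, a small test scenario might look like the following (node names, image paths, and times are invented for illustration):

```text
# start the server and two user nodes, then let them come up
setDelay * * 100
start Server A B
wait 500

# ask the server to commit a collage built from A's and B's images
commit collage1.jpg=img/cand1.jpg A:photo1.jpg B:photo2.jpg
wait 3000

# drop messages from B to the Server and try another commit
setDelay B Server -1
commit collage2.jpg=img/cand2.jpg B:photo3.jpg
wait 8000
```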
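The 3-second delivery bound in the hints above suggests concrete timeout math: a request plus its reply can take up to 6 seconds when neither message is lost, so waiting longer than that before resending (or deciding to abort) is safe. Below is a hedged sketch of such a resend loop, with a BlockingQueue standing in for the receive queue; all names here are illustrative, not part of ProjectLib.

```java
import java.util.concurrent.*;

// Sketch of a resend-until-acknowledged loop. The queue stands in for
// ProjectLib's receive queue; a real node would call sendMessage() and
// getMessage() instead. ROUND_TRIP_MS reflects the 3-second one-way
// delivery bound stated in the handout (2 * 3000 ms).
public class RetrySketch {
    static final long ROUND_TRIP_MS = 6000;

    // Returns true if an ack arrived before totalDeadlineMs elapsed.
    static boolean sendWithRetry(Runnable send, BlockingQueue<String> acks,
                                 long totalDeadlineMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + totalDeadlineMs;
        while (System.currentTimeMillis() < deadline) {
            send.run();                                   // (re)send the request
            long wait = Math.min(ROUND_TRIP_MS,
                                 deadline - System.currentTimeMillis());
            String ack = acks.poll(wait, TimeUnit.MILLISECONDS);
            if (ack != null) return true;                 // reply arrived in time
        }
        return false;                                     // give up, e.g. abort the 2PC
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> acks = new LinkedBlockingQueue<>();
        acks.put("ack");                                  // an ack is already waiting
        System.out.println(sendWithRetry(() -> {}, acks, 100));
        System.out.println(sendWithRetry(() -> {}, acks, 100));
    }
}
```

Since two-phase commit cannot guarantee termination, a loop like this must eventually give up and fall back to the abort path rather than block forever.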


Project 2: File-Caching Proxy Important Dates: Submission Limits: 15 Autolab submissions per checkpoint without penalty (5 additional with increasing penalty)Important Guidance This project will need to be done in Java 8, in a 64-bit Linux environment, e.g., Andrew Unix servers. It will not run on Windows or Mac !!!Keep your AFS space secured. You will design and implement a server that lets a remote client read, modify, and delete files in your file space with no security provisions! Please be careful, and do not leave your servers running.This project is about design. There are many things unspecified. You will have to make wise choices that balance implementation complexity and performance, while meeting all constraints that are specified. What you will learn from this project You will learn how to: ● Design a caching protocol ○ Which robustly handles multiple concurrent clients ○ Which ensures open-close session semantics on concurrent file access ○ Which uses LRU to manage a fixed size cache containing variable sized files ● Design and implement a distributed system ○ For whole file caching ○ Using Java RMI, Java threading, and concurrency management techniques ○ Which emulates C file operations on locally-cached files1. Introduction Caching is a great technique for improving the performance of a distributed system. It can help reduce data transfers, and improve the latency of operations. This project extends the simple remote file operations system we built in Project 1 to now perform caching as well. This project will continue to use existing binary tools (e.g., 440read) and interpose on their C library calls. We will provide a working interposition library. However, instead of connecting directly to a server, this library will connect to a caching proxy, which you will write. This proxy will in turn connect to the server, which you will write as well.The proxy will handle the RPC file operations from the client interposition library. 
It will fetch whole files from the server, and cache them locally. You need to define the protocol used between the proxy and the server, and how the cache will be managed. Remember, this protocol should offer open-close session semantics on whole files. Therefore, the protocol should not be implemented at the level of individual read, write, and seek operations.In addition to the client interposition library, we will provide a Java class to handle the low-level RPC serialization with the client library. It will call your code to actually perform the operations. You will write a class that will implement an interface providing methods for open, read, write, … operations that behave similar to the C functions. Your proxy code will be responsible for fetching and caching needed files from the server.2. Requirements/Deliverables We will provide: ● We provide several components for this project. These require that the project be implemented in Java, in a 64-bit Linux environment, e.g., Andrew Unix servers ● We will provide you a set of binary tools that perform file I/O using C library calls ● We will provide a client library to interpose on C library file operations, and transform them into RPCs to the proxy ● We will provide you a Java class (RPCreceiver) that implements the stubs for the RPCs from the client library. This class will perform data serialization and deserialization, transform data into a more Java-friendly form, and call methods you provide to actually execute the operations. You will create: ● You will create a server that stores and provides access to a set of files. Your server should operate at the granularity of files (not individual operations), and handle concurrent connections from multiple proxies. ● You will create a proxy that sits between the client and the server. It should use the RPCreceiver class we provide to communicate with the clients. 
It will handle the file operation RPCs from one or more concurrent clients, fetch needed files from the server, and push modifications back to the server. It will provide C file operation semantics (open, read, write, …, use of file descriptors, maintaining the current position in the file, etc.) to the clients, but talk to the server at whole-file granularity. It will also perform caching to reduce latency and total data transfers.

Your code should do the following:
● Your proxy needs to emulate the following C library calls (with some simplifications and modifications, see below): open, close, read, write, lseek, unlink
● You are free to design your own protocol between proxy and server. However, you must implement whole file caching, some method of ensuring caches are not stale, and an LRU eviction/replacement policy. Document your design choices.
● Your code should allow concurrent read and write access to the same file by different clients, but should ensure that while a client has a file open, it will see a stable version (i.e., it will not be affected by modifications or deletions caused by other clients).
● The semantics of your system should not change depending on whether concurrent clients are connected to the same proxy or to different ones.
● You are required to use Java RMI for the RPCs between your proxy and server.
● Your proxy will be provided four command line arguments: <serverip>, <port>, <cachedir>, and <cachesize>. <serverip> and <port> are the server's IP address and the port it uses (i.e., the Java RMI registry's port). <cachedir> is a directory your proxy must use to store its cache data. The actual files and formats are totally up to you, but the total contents should not exceed <cachesize>, the size of the cache in bytes.
● Your proxy should implement an LRU cache replacement policy.

Submission and grading:
● You will be graded primarily on correctness of the file operations and caching. Performance is secondary, but will also be tested.
● This project will use an autograder to test your code. See below on how to submit.
● You need to submit a short (1-2 pages) document detailing your design. See below.
● The late policy will be as specified on the course website, and will apply to the checkpoints as well as the final submission.
● Coding style should follow the guidelines specified on the course website.

Checkpoint 1 requires that the client-facing aspects of your proxy are working properly. In particular, all of the file operations need to work properly, providing the expected outputs and error codes. Just for this checkpoint, there is no server. Instead of fetching files from the server, the proxy will serve the files in its working directory (imagine this is a prepopulated / warmed cache). Thus, you will not implement a server for this checkpoint, and the proxy does not need any command line arguments. You will not need to worry about cache size limits. However, your proxy must be able to handle concurrent clients.

Checkpoint 2 requires you to implement the server and design your caching protocol. Your system should be able to perform basic read caching of whole files and be able to push modifications to the server. However, interactions between reads and writes, and invalidation of cache entries, will not be tested. Your proxy should accept the command line parameters specified in the requirements, but cache size limits will not need to be implemented for this checkpoint. Your server should be able to handle multiple concurrent proxies.

You will also need to write and submit a 1-2 page document, describing the major design aspects of your project, including the protocol between proxy and server, the consistency model as seen by the clients, how you implemented LRU replacement, techniques used to ensure cache freshness, and how these affect your system's performance. Highlight any other design decisions you would like us to be aware of. Please include this as a PDF in your final submission tarball. (10%)

Your final source code will also be graded on clarity and style. (10%)

3.
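LRU replacement over variable-sized files, as required above, maps naturally onto Java's LinkedHashMap in access order. This is a sketch, not the required design: class and method names are invented, the cache here lives in memory rather than in <cachedir>, and a real proxy may also need to exempt currently open files from eviction.

```java
import java.util.*;

// Sketch of an LRU cache for variable-sized entries, keyed by path.
// LinkedHashMap with accessOrder=true moves an entry to the back on
// every get(), so the front of the iteration order is least recent.
public class LruBytes {
    private final long capacity;
    private long used = 0;
    private final LinkedHashMap<String, byte[]> map =
            new LinkedHashMap<>(16, 0.75f, /*accessOrder=*/true);

    LruBytes(long capacity) { this.capacity = capacity; }

    byte[] get(String path) { return map.get(path); }   // refreshes recency

    void put(String path, byte[] data) {
        byte[] old = map.remove(path);
        if (old != null) used -= old.length;
        map.put(path, data);
        used += data.length;
        // Evict least-recently-used entries until we fit the byte budget.
        Iterator<Map.Entry<String, byte[]>> it = map.entrySet().iterator();
        while (used > capacity && it.hasNext()) {
            Map.Entry<String, byte[]> lru = it.next();
            used -= lru.getValue().length;
            it.remove();
        }
    }

    Set<String> paths() { return map.keySet(); }

    public static void main(String[] args) {
        LruBytes cache = new LruBytes(10);
        cache.put("a", new byte[4]);
        cache.put("b", new byte[4]);
        cache.get("a");                 // "a" is now most recently used
        cache.put("c", new byte[4]);    // over budget: evicts "b", not "a"
        System.out.println(cache.paths());
    }
}
```

Note that eviction is by total bytes, not entry count, which is what a fixed-size cache of variable-sized files needs.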
Submission Process and AutogradingWe will be using the Autolab system to evaluate your code. Please adhere to the following guidelines to make sure your code is compatible with the autograding system.First, untar the provided project 2 handout into a private directory not readable by anyone else (e.g., ~/private in your AFS space): cd ~/private; tar xvzf ~/15440-p2.tgz This will create a 15440-p2 folder with needed libraries, classes, and test tools. You should create your working directory in the 15440-p2 directory. It is important that from your working directory, the provided library and java classes should be available at ../lib.Write your code and Makefile in your working directory. You must use a makefile to build your project. See the included sample code for an example. You will need to add the absolute path of your working directory and the absolute path of the lib directory to the CLASSPATH environment variable, e.g., from your working directory: export CLASSPATH=$PWD:$PWD/../lib Ensure that by simply running “make” in your working directory, both your proxy and server classes are built. Please name the class implementing your proxy “Proxy” and the class for your server “Server”. Make sure the .java and generated .class files are in your working directory (i.e., not in a subdirectory). Both of these classes should implement main. Do not place your classes in a java package! Leave them in the default package. This naming convention and relative file locations are critical for the grading system to build and run your programs.To hand in your code, from your working directory, create a gzipped tar file that contains your make file and sources. E.g., tar cvzf ../mysolution.tgz Makefile Proxy.java Server.java Of course, replace these with your actual files, and add everything you need to compile your code. If you use subdirectories and/or multiple sources, add these. Do not add any files generated during compilation (e.g. 
the .class files) — just the clean sources. Also, do not add the class files, .so file, or binary tools that we have provided — these will be installed automatically when grading. To work correctly with Autolab, when extracted, your tarball should put the Makefile and sources in the current working directory.You can then log in to https://autolab.andrew.cmu.edu using your Andrew credentials. Submit your tarball (mysolution.tgz in the example above) to the autolab site. Note that each checkpoint shows up as a separate assessment on the Autolab course page. For your final submission, include your write up as a PDF file in your tarball.4. How to Run the ProjectWe provide a set of binary tools that make use of low level file operations using the standard C library. These are a subset of the tools we used in Project 1. You should be able to run these on the local file system — under the same conditions, using your caching remote file system should provide the same results. You can (and are encouraged to) create additional test programs to really test the corner cases to make sure you will pass the hardest tests we can think of.To run these tools on the remote file system, we interpose on the C library calls using LD_PRELOAD. We have provided the interposition library, lib440lib.so; this is used just like mylib.so in Project 1: LD_PRELOAD=../lib/lib440lib.so ../tools/440read foo The lib440lib.so will connect to the proxy. You should set the environment variable proxyport15440 to indicate which port to use. We have provided a Java class, RPCreceiver, that implements the proxy’s side of the client RPCs. It, too, expects the proxyport15440 environment variable to be set. Both use a shared secret pin to authenticate connections. 
Set the pin15440 environment variable to a secret 9-digit pin of your choice on both the server and the client.Arguments to your proxy will be provided like in this example: java Proxy 127.0.0.1 11122 /tmp/cache 100000 with a server address of 127.0.0.1, port 11122, cache directory /tmp/cache, and 10^5 byte cache size limit. Server arguments will be provided like in this example: java Server 11122 fileroot with a server port of 11122 and serving files in fileroot.5. Classes You Need to WriteYou will write several classes to implement your proxy. You should provide a class named Proxy that implements main(). This takes 4 command line arguments specified in the requirements. To use the RPCreceiver class, instantiate a new RPCreceiver, and call run(). Since RPCreceiver implements Runnable, you can also start it as a separate thread. RPCreceiver listens on the proxy port, launching additional threads to handle clients.You need to create a class that implements the FileHandling interface we have defined. This is a set of Java methods that correspond to the file operations you need to support. RPCreceiver will call your class methods to actually do the work for the RPCs. One additional method, clientdone() is used to tell your code that the client has gone away (so you can clean up any state).RPCreceiver will need an instance of your FileHandling class for each client. You need to provide a “factory” class to the constructor of RPCreceiver. The sole purpose of your factory class, which must implement the FileHandlingMaking interface, is to return new instances of your FileHandling class. For each client that connects, RPCreceiver will call newclient() on your factory to get a new instance of your FileHandling class, and then spawn a new thread to handle the client. All RPC operations from a particular client will invoke the FileHandling methods on the associated class instance. 
Your code needs to be thread safe and handle multiple clients concurrently.For your server, you need to create a class called Server that implements main(). It should take two command line parameters, a port and the root directory of the initial set of files to serve. You are free to define the protocol between your proxy and server, but it should be at the level of whole files, not individual operations, and make use of Java RMI for RPCs. You should instantiate a Java RMI registry that listens on the specified port. Your server must handle multiple proxies concurrently. Your server will not use any of the supplied libraries or classes.6. FileHandling interfaceThe FileHandling interface we provide describes the set of Java methods you will need to implement to service the client file operation RPCs. The six main methods correspond directly to the C library file operations open, close, etc., with some minor modifications. First, the interface is simplified to use Java constructs. Instead of null-terminated character arrays, all paths are specified as Strings. For operations requiring a (void*) buffer and a length, the corresponding Java method will take a byte array as an in/out parameter.Secondly, the options for open have been simplified. Instead of the set of option flags (some combinations of which are not meaningful), the open method will take only one of four options: READ (read-only), WRITE (read/write), CREATE (read/write, create if needed), and CREATE_NEW (read/write, but file must not already exist). These are defined in the OpenOption enum. Furthermore, the mode_t bits are not used for this project.Ensure your implementation adheres to the operation semantics in the man pages, except for the modifications noted above. 
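The RMI plumbing described above (a registry on a given port at the server, a lookup from the proxy) can be exercised end to end in a single JVM. The FileServer interface and fetchFile method below are illustrative placeholders, not a required protocol, and the port is arbitrary.

```java
import java.rmi.*;
import java.rmi.registry.*;
import java.rmi.server.UnicastRemoteObject;

// Minimal Java RMI round trip in one process: the "server" creates a
// registry and binds a remote object; the "proxy" looks it up and makes
// an RPC. Interface and method names are illustrative placeholders.
interface FileServer extends Remote {
    byte[] fetchFile(String path) throws RemoteException;
}

class FileServerImpl implements FileServer {
    public byte[] fetchFile(String path) throws RemoteException {
        return ("contents of " + path).getBytes();   // stand-in for real file I/O
    }
}

public class RmiDemo {
    public static void main(String[] args) throws Exception {
        int port = 45440;   // arbitrary; real code takes it from the command line
        // Server side: create a registry on the port and bind a stub.
        Registry reg = LocateRegistry.createRegistry(port);
        FileServer stub = (FileServer)
                UnicastRemoteObject.exportObject(new FileServerImpl(), 0);
        reg.rebind("FileServer", stub);

        // Proxy side: look up the stub and make an RPC.
        Registry client = LocateRegistry.getRegistry("127.0.0.1", port);
        FileServer remote = (FileServer) client.lookup("FileServer");
        System.out.println(new String(remote.fetchFile("foo")));
        System.exit(0);   // RMI threads are non-daemon; exit explicitly
    }
}
```

The same pattern splits across two processes: the server calls createRegistry and rebind; the proxy calls getRegistry with the server address and port arguments, then lookup.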
Remember that read and write should return the actual number of bytes read/written on success, while lseek returns the new position in the file.Finally, the clientdone method is not an actual file operation, but is a way for RPCreceiver to let your class know when a client has left, so you can clean up any state.7. Notes / Hints● Do not hardcode the server IP address / port in your proxy or server. Instead, use the IP address and port specified as command line parameters (see requirements). Remember to set the environment variable proxyport15440 so lib440lib.so can find your proxy, and the RPCreceiver class can listen on the right port. ● Remember that with whole file caching, once a reader opens a file, it should see the same consistent view of the file, even if it is deleted or changed elsewhere. However, the updated version should be visible to clients that open the file subsequently. For example, if client A has the file open while client B writes to the file, then A must continue to see the old version. However, another client C, which opens the file after B writes and closes it, must see B’s modifications. (Note: this is slightly different from AFS, where clients on the same machine directly access and share a copy of the cached file and can see modifications of others on the same physical machine). ● Remember to test cases where concurrent clients connect to the same proxy and where they connect to different proxies. Ensure that the behavior remains exactly the same. (Again this is different from the behavior of AFS.) ● You have some flexibility in how you want to handle concurrent writes to the same file. Hint: AFS-style concurrency or write locks / leases are acceptable, as long as readers are never blocked. ● Make sure that (a) caching works, and (b) your protocol between proxy and server is efficient. In other words, be careful not to defeat the benefits of caching by having a very chatty protocol that increases latency!
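The file-descriptor bookkeeping the proxy owes its clients (small integer fds, a per-descriptor file position, C-style return values) can be prototyped on RandomAccessFile, which already tracks a current position the way a C file descriptor does. This is a sketch with invented names; it does not implement the provided FileHandling interface, handle the OpenOption modes, or map errors to errno-style codes.

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

// Sketch of per-client descriptor bookkeeping: each open() returns a
// small integer fd mapped to a RandomAccessFile. Method names mirror
// the C calls but are not the provided FileHandling interface.
public class FdTable {
    private final Map<Integer, RandomAccessFile> open = new HashMap<>();
    private int nextFd = 3;   // 0-2 conventionally reserved

    int open(String path, String mode) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(path, mode);
        open.put(nextFd, raf);
        return nextFd++;
    }

    int read(int fd, byte[] buf) throws IOException {
        return open.get(fd).read(buf);        // bytes read, or -1 at EOF
    }

    long lseek(int fd, long pos) throws IOException {
        open.get(fd).seek(pos);
        return open.get(fd).getFilePointer(); // new position, like lseek()
    }

    int close(int fd) throws IOException {
        RandomAccessFile raf = open.remove(fd);
        if (raf == null) return -1;           // bad descriptor
        raf.close();
        return 0;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("fdtab", ".txt");
        Files.write(tmp, "hello world".getBytes());
        FdTable t = new FdTable();
        int fd = t.open(tmp.toString(), "r");
        t.lseek(fd, 6);
        byte[] buf = new byte[5];
        int n = t.read(fd, buf);
        System.out.println(n + " " + new String(buf, 0, n));
        System.out.println(t.close(fd));
    }
}
```

In the real proxy, one such table per client (one FileHandling instance per connection) keeps descriptors from leaking between clients.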


Project 1: Transparent Remote File Operations Important Dates: Submission Limits: 15 Autolab submissions per checkpoint without penalty (5 additional with increasing penalty) Important Guidance This project will need to be done in C, in a 64-bit Linux environment, e.g., Andrew Unix server. It will not run on Windows or Mac !!! You will implement a server that lets a remote client read, modify, and delete files in your file space with no security provisions! Please be careful, and do not leave your servers running. Introduction In distributed computing systems, it is often necessary to make use of resources located on remote machines. We can write our distributed applications to use the network to communicate between different components running on different machines. However, it becomes very tedious and inelegant to insert ad-hoc networking code every place our software needs to access remote resources. Instead, we can use layering to hide the network complexities, and provide a clean abstraction for remote resource access. In particular, in this project we will use remote procedure calls (RPCs) to provide access to remote services, but with an interface identical to local services. We will then use this to let an unmodified program access the remote service by interposing the remote procedure calls in place of the local service routines. In this project, you will build an RPC system to allow remote file operations (open, read, write, …). This will include a server process and a client stub library. To test your remote file access system, you will use existing programs (440cat,440ls, …), but interpose your RPC stubs in place of the C library functions that handle file operations. So it is critical to make your RPC abstraction look as close to local file operations as possible! 
At the end of this project, you should be able to execute a command like "440cat foo", but instead of opening and printing the contents of a local file, it will access the contents of file foo on the remote server machine.

Requirements/Deliverables

● You will create a server process to provide the remote file services.
● You will create a client stub library to perform RPCs.
● Your code needs to handle the following standard C library calls: open, close, read, write, lseek, stat, unlink, getdirentries.
● Your code will also need to handle the non-standard getdirtree and freedirtree calls. We will provide the header and shared library implementing the local version of these functions (see below for a description).
● You will need to write code to marshall and unmarshall parameters and return values, and to maintain any state. You will not be using an IDL or stub generator.
● You are free to design your own protocol for the messages between client and server.
● We will provide sample code for the interposition to get you started.
● We will provide simple applications to test local and RPC file operations (440cat, 440ls, 440read, 440write, 440tree).
● This project will use an autograder to test your code. See below on how to submit.
● You will need to write a short (1 page) document detailing your design. See below.
● The late policy will be as specified on the course website, and will apply to the checkpoints as well as the final submission.
● Coding style should follow the guidelines specified on the course website.

Checkpoint 1 is to make sure you get a good start on the basic interposition and networking aspects of this project. For this, start with the example interposition code provided, and extend it to cover all of the required functions. This version will simply "pass through" the calls locally to the standard library. You will then extend this further to log all of the interposed calls to a remote server.
You will write a simple TCP server to print out the names of the functions that were called to standard out. So, running (assuming bash shell syntax):

LD_PRELOAD=./mylib.so ../tools/440cat foo

should produce output at your server like this:

open
read
read
close

For checkpoint 1, your server should not produce any other output, as this output will be autograded. This means no debugging output, no extra white space or newlines, etc. If you are producing debugging output, use stderr, not stdout. Likewise, do not print anything to stdout from your interposition library, as it will interfere with the client program output. No actual RPCs need to be implemented for this checkpoint, and your server does not need to handle concurrent clients, etc.

Fully implement all of the standard C file operations as well as getdirtree and freedirtree (see the description in "New Operations" below). All of these must be implemented as RPCs. Your server will need to handle multiple concurrent clients, and handle clients that use more than one file at a time. Your code should report errors properly (e.g., file not found) to the client programs. (40%) You will also need to write and submit a 1-page document describing the serialization protocol you used to transfer data structures between server and client. Highlight any other design decisions you would like us to be aware of. Please submit this as a PDF. (10%) Your final source code will also be graded on clarity and style. (10%)

Submission Process and Autograding

We will be using the Autolab system to evaluate your code. Please adhere to the following guidelines to make sure your code is compatible with the autograding system. First, untar the provided project 1 handout into a private directory not readable by anyone else (e.g., ~/private in your AFS space):

cd ~/private; tar xvzf ~/15440-p1.tgz

Your Makefile should build your interposition library "mylib.so" and your server "server"; make sure both are in your working directory (i.e., not in a subdirectory).
This naming convention and relative file locations are critical for the grading system to build and run your programs. To hand in your code, from within your working directory, create a gzipped tar file that contains your Makefile and sources. E.g.:

tar cvzf ../mysolution.tgz Makefile mylib.c server.c

Of course, replace these with your actual files, and add everything you need to compile your code (other than the files we provide). If you use subdirectories and/or multiple sources, add these. Do not add any intermediate files (e.g., .o files) or binaries generated during compilation — just the clean sources. Also, do not add the header, .so file, or binary tools that we have provided — these will be installed automatically when grading. Your Makefile should expect the header in ../include, and the .so in ../lib. To work correctly with Autolab, when extracted, your tarball should put the Makefile and sources in the current working directory. Then, log in to https://autolab.andrew.cmu.edu using your Andrew credentials. Submit your tarball (mysolution.tgz in the example above) to the Autolab site. Note that each checkpoint shows up as a separate assessment on the Autolab course page. For your final submission, include your write-up as a PDF file in your tarball.

Background: Unix file operations

To access files using the low-level Unix/POSIX interfaces, you first open the file, e.g.:

fd = open("foo", O_RDONLY); // open file foo with flags indicating read only

flags indicates options: read-only, read-write, create, zero out (truncate) before writing, etc. The return value is a file descriptor or indicates an error. This descriptor is used in further accesses, e.g.:

rv = read(fd, buf, 1024);   // read up to 1024 bytes into buf
rv = write(fd, buf2, 1024); // write 1024 bytes from buf2

The return value is the actual bytes read / written or an indicator of error. The system keeps track of position in the file, which can be changed with lseek.
Finally, the file is closed:

close(fd);

All of these operations return a negative value to indicate error. The actual error code is then available in errno. Please see the man pages for open(2), … for more details.

New Operations

We have created a new, nonstandard library function, getdirtree. This function recursively descends a directory hierarchy, and constructs a tree data structure that represents the directory tree. The tree node data structure and prototype for this function are defined as:

struct dirtreenode {
    const char *name;             // name of the directory
    int num_subdirs;              // number of subdirectories
    struct dirtreenode **subdirs; // pointer to array of dirtreenode
};                                // pointers, one for each subdirectory

struct dirtreenode* getdirtree( char *path );

The getdirtree function allocates the dirtreenode structures representing the directory hierarchy rooted at path, and returns a pointer to the root of the tree. If there was an error, NULL is returned, and an error code placed in errno. We have also implemented a freedirtree function that recursively frees the memory allocated for the tree data structures. We provide these functions as the libdirtree.so shared library and the prototypes in dirtree.h. The supplied tool, 440tree, uses this function and shared library. Your final RPC mechanism needs to encapsulate all of these operations on the server, and provide stubs to access these in the client library. You do not need to handle more advanced features such as fcntl, locking, dup, handling forks, etc. However, your server needs to be able to handle file accesses from multiple clients at once, and both the server and client library need to be able to handle multiple open files.

Tutorial: C Networking

Note that the following sample code snippets hardcode both the IP address and port number. Do not do this in your code. Instead, read the IP address from the environment variable "server15440" and port number from "serverport15440" (see starter code for examples).
TCP server

This project requires you to write a multi-client network server. The networking functions in C follow the same template as the file operations, but are based around the concept of a socket, rather than a file, and involve more steps and magic incantations. To create a socket:

sockfd = socket(AF_INET, SOCK_STREAM, 0); // TCP/IP socket

Then, we need to "bind" our socket to a port — our server will use this port exclusively. To do this, we first populate an ugly address structure and call bind:

struct sockaddr_in srv;                  // address structure
memset(&srv, 0, sizeof(srv));            // zero it out
srv.sin_family = AF_INET;                // will be IP address and port
srv.sin_addr.s_addr = htonl(INADDR_ANY); // don't care
srv.sin_port = htons(15440);             // port 15440
rv = bind(sockfd, (struct sockaddr *) &srv,
          sizeof(struct sockaddr));      // bind to the specified port

Check the return value of bind, because it can fail for many reasons (e.g., another program is using the port, left-over connection state from previous runs needs to time out, etc.). Then, we set up a queue to listen for incoming clients:

listen(sockfd, 5); // listen for clients, queue up to 5

Then, we get the connections from clients:

struct sockaddr_in client;
socklen_t sa_size = sizeof(struct sockaddr_in);
sessfd = accept(sockfd, (struct sockaddr *) &client, &sa_size);

The accept call blocks until a client has connected. On success, it returns a new session socket used to communicate with that client. You call it again to get the next client, and another new session socket. The client address information is provided in the address structure. We can then communicate with the client using the session socket:

rv = send(sessfd, buf, 1024, 0);  // send 1024 bytes from buf
rv = recv(sessfd, buf2, 1024, 0); // receive up to 1024 bytes into buf2

The return values indicate error or actual bytes transferred. Error codes will indicate if the client dropped or disconnected. The recv call is blocking.
When we are done with this client:

close(sessfd); // done with this client

Note that this is the same close function used for file I/O! In fact, read and write will generally work on the session sockets. Of course, lseek does not make sense here. When completely done, we should close the original server socket:

close(sockfd); // server is done

Your server needs to be able to handle more than one client at once. This gets tricky, because both the accept call to get new clients and the recv calls to get data from existing clients are blocking. There are many ways to handle this issue. One simple but inefficient way is to fork a child process to handle each client session. The fork call creates an exact duplicate of the calling process, except the original gets the process id of the copy as a return value, while the copy gets a return value of 0:

sessfd = accept(sockfd, (struct sockaddr *) &client, &sa_size);
rv = fork();
if (rv == 0) {        // child process
    close(sockfd);    // child does not need this
    do_stuff(sessfd); // handle client session
    close(sessfd);    // then close client session
    exit(0);          // then exit
}
// parent process
close(sessfd);        // parent does not need this
// should loop back to accept next client

This simple structure will have a single server process that listens for new clients, then launches additional processes to handle each client separately.

TCP client

The client code is a bit simpler.
First, create a socket:

sockfd = socket(AF_INET, SOCK_STREAM, 0); // TCP/IP socket

Then, we connect to the server, by populating an address structure and calling connect:

struct sockaddr_in srv;                       // address structure
memset(&srv, 0, sizeof(srv));                 // zero it out
srv.sin_family = AF_INET;                     // will be IP address and port
srv.sin_addr.s_addr = inet_addr("127.0.0.1"); // server IP
srv.sin_port = htons(15440);                  // port 15440
rv = connect(sockfd, (struct sockaddr *) &srv,
             sizeof(struct sockaddr));        // connect to server

Check the return value to make sure the connection succeeds. Now we can use send, recv, and close as above:

rv = send(sockfd, buf, 1024, 0);  // send 1024 bytes from buf
rv = recv(sockfd, buf2, 1024, 0); // receive up to 1024 bytes into buf2
close(sockfd);                    // client is done

Tutorial: Library call interposition

Most programs in modern systems use dynamic or shared libraries — the library functions are not part of the executable binary, and instead are loaded at run time. This gives us the opportunity to interpose on standard library calls from existing programs. In Linux and other Unix-like systems, we do this by creating our own shared library implementing functions we want to interpose, and telling the system to load our library first when executing a program. For example, suppose we want to track the memory allocations performed by some program myprog. When we normally run myprog, the system will load the libraries it needs, and dynamically link in the library functions used. In particular, standard C functions, such as malloc, will be loaded from libc.so. Instead of making use of the libc functions, we can create our own version of malloc() and free() that will log all calls. We then compile them into a shared library (".so" file in Linux).
We use the "LD_PRELOAD" environment variable to tell the system to load our library first when executing myprog, e.g., assuming bash shell:

LD_PRELOAD=./mylib.so myprog

Now, when myprog is run, the system links in our functions first, before loading any other libraries. The unmodified myprog program will now use our version of malloc and free, instead of the ones from the standard C library. You will have to create a shared library that implements versions of the file operations (open, read, …) that communicate with your server to perform the operations at the remote machine rather than locally, and interpose these into existing binary programs that make local calls.

If we can interpose on any dynamically-linked binary, why are custom 440* tools needed? The 440 tools are written to directly use open, close, read, etc. In contrast, most of the standard, small Unix tools do not actually use the open, close, … file APIs; rather they use the richer buffered file API that includes fopen, fprintf, etc. This family of functions is much larger and more complex. Internally, these functions do call the equivalent of open, close, read, etc., but use functions internal to the C library, or perhaps direct OS system calls. The linker-level interposition we use here cannot catch such calls made from within the C library.

Notes / Hints

● Do not print anything to stdout from your interposition library, or extra output (extra white space or newlines) from your server; otherwise your code will fail in the autograder. Use stderr if you are printing any debugging output.
● Check the man pages for the file operations to make sure you are implementing the right behaviors and returning the right values and error codes (e.g., man 2 read).
● Watch out for stat. On some versions of Linux (more specifically, those with older versions of the glibc library), program binaries actually link to __xstat, not stat.
So if you are having trouble interposing on stat, check to see if the system uses __xstat, which has an additional parameter for version. The standard Andrew Linux machines and Autolab should use stat. If you interpose on both, your code should work on any Linux version.
● Remember that strings you send over the network may not include the terminating null byte; you need to add the null terminator at the receiving end, or ensure it gets sent as well.
● Do not hardcode the server IP address or port in your client library or server. Instead, read the IP address from the environment variable "server15440" (as in the example client.c). Read the port number from "serverport15440" (for both the client and server).
● You will need to add the absolute path of the directory containing libdirtree.so to your LD_LIBRARY_PATH environment variable in order to run the 440tree program. Otherwise, you will get an error that it could not load the shared library.
● Make sure you develop your code in a private directory. By default, AFS home directories are readable by anyone. Please place your coursework in a private directory (e.g., ~/private), or change the permissions on your home directory.
● It is recommended you use send and recv for networking from your interposition library. Although read and write can be used for networking, you will have complications as you have interposed on these calls!
● On a related note, the read, write, and close operations can also be used for other I/O
● Does freedirtree really need to be an RPC?
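To make the interposition mechanism from the tutorial concrete, here is a minimal sketch of an interposed open() that logs the call name to stderr and forwards to the real C library implementation via dlsym(RTLD_NEXT, ...). The starter code provides a similar example; the details here (the logging format, the varargs handling) are illustrative, not the required design:

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>

#ifndef RTLD_NEXT               /* glibc defines this under _GNU_SOURCE */
#define RTLD_NEXT ((void *) -1l)
#endif

/* Interposed open(): note the varargs handling, since open() takes an
 * optional third mode argument when O_CREAT is set in flags. */
int open(const char *pathname, int flags, ...) {
    int mode = 0;
    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, int);
        va_end(ap);
    }
    /* look up the next (i.e., the real libc) definition of open */
    int (*orig_open)(const char *, int, ...) =
        (int (*)(const char *, int, ...)) dlsym(RTLD_NEXT, "open");
    fprintf(stderr, "open\n"); /* debugging goes to stderr, never stdout */
    return orig_open(pathname, flags, mode);
}
```

Compiled with something like gcc -shared -fPIC mylib.c -o mylib.so -ldl, this library can then be injected with LD_PRELOAD as shown in the tutorial above.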


[SOLVED] CSE340 Fall 2025 project 1 - a simple compiler!

I will start with a high-level description of the project and its tasks, and in subsequent sections I will give a detailed description of how to achieve these tasks. The goal of this project is to implement a simple compiler for a simple programming language. To implement this simple compiler, you will write a recursive-descent parser and use some simple data structures to implement semantic checking and execute the input program. The input to your compiler has four parts: a TASKS section, a POLY section, an EXECUTE section, and an INPUTS section. Your compiler will parse the input and produce a syntax error message if there is a syntax error. If there is no syntax error, your compiler will check for semantic errors. If there are no syntax or semantic errors, your compiler will perform other semantic analyses if so specified by the task numbers in the TASKS section. If required, it will also execute the EXECUTE section and produce the output that should be produced by the OUTPUT statements.

The remainder of this document is organized as follows. Note: Nothing in this project is inherently hard, but it is larger than other projects that you have done in the past for other classes. The size of the project can make it feel unwieldy. To deal with the size of the project, it is important to have a good idea of what the requirements are. To do so, you should read this document a couple of times. Then, you should have an implementation plan. I make the task easier by providing an implementation guide that addresses some issues that you might encounter in implementing a solution. Once you have a good understanding and a good plan, you can start coding.

The input of your program is specified by the following context-free grammar:

The code that we provided has a class LexicalAnalyzer with methods GetToken() and peek(). Also, an expect() function is provided. Your parser will use the functions provided to peek() at tokens or expect() tokens as needed. You must not change these provided functions; you just use them as provided.
In fact, when you submit the code, you should not submit the files inputbuf.cc, inputbuf.h, lexer.cc or lexer.h on Gradescope; when you submit the code, the submission site will automatically provide these files, so it is important not to modify these files in your implementation. To use the provided methods, you should first instantiate a lexer object of the class LexicalAnalyzer and call the methods on this instance. You should only instantiate one lexer object. If you try to instantiate more than one, this will result in errors. The definition of the tokens is given below for completeness (you can ignore it for the most part if you want).

What you need to do is write a parser to parse the input according to the grammar and produce a syntax error message if there is a syntax error. Your program will also check for semantic errors and, depending on the tasks list, will execute more semantic tasks. To achieve that, your parser will store the program in appropriate data structures that facilitate semantic analysis and allow your compiler to execute the statement list in the execute_section. For now, do not worry about how that is achieved. I will explain that in detail, partly in this document and more fully in the implementation guide document.

The following are examples of input (to your compiler) with corresponding outputs. The output will be explained in more detail in later sections. Each of these examples has task numbers 1 and 2 listed in the tasks_section. They have the following meanings:

TASKS
1 2
POLY
F = x^2 + 1;
G = x + 1;
EXECUTE
X = F(4); Y = G(2);
OUTPUT X;
OUTPUT Y;
INPUTS
1 2 3 18 19

This example shows two polynomial declarations and an EXECUTE section in which the polynomials are evaluated with arguments 4 and 2, respectively.
The output of the program will be
17
3
The sequence of numbers at the end (in the input_section) is ignored because there are no INPUT statements.

TASKS
1 2
POLY
F = x^2 + 1;
G = x + 1;
EXECUTE
INPUT X; INPUT Y;
X = F(X); Y = G(Y);
OUTPUT X;
INPUTS
1 2 3 18 19

This is similar to the previous example, but here we have two INPUT statements. The first INPUT statement reads a value for X from the sequence of numbers, and X gets the value 1. The second INPUT statement reads a value for Y, which gets the value 2. Here the output will be
2
Note that the values 3, 18 and 19 are not read and do not affect the execution of the program.

1:  TASKS
2:      1 2
3:  POLY
4:      F = x^2 + 1
5:      G = x + 1;
6:  EXECUTE
7:      INPUT X;
8:      INPUT Y;
9:      X = F(X);
10:     Y = G(Y);
11:     OUTPUT X;
12: INPUTS
13:     1 2 3 18 19

Note that there are line numbers added to this example. These line numbers are not part of the input and are added only to refer to specific lines of the program. In this example, which looks almost the same as the previous example, there is a syntax error because there is a missing semicolon on line 4. The output of the program should be
SYNTAX ERROR !!!!!&%!!

1:  TASKS
2:      1 2
3:  POLY
4:      F = x^2 + 1;
5:      G(X,Y) = X Y^2 + X Y;
6:  EXECUTE
7:      INPUT Z;
8:      INPUT W;
9:      X = F(Z);
10:     Y = G(Z,W);
11:     OUTPUT X;
12:     OUTPUT Y;
13: INPUTS
14:     1 2 3 18 19

In this example, the polynomial G has two variables which are given explicitly (in the absence of explicitly named variables, the variable is lower case x by default).
The output is
2
6

1:  TASKS
2:      1 2
3:  POLY
4:      F = x^2 + 1;
5:      G(X,Y) = X Y^2 + X Z;
6:  EXECUTE
7:      INPUT Z;
8:      INPUT W;
9:      X = F(Z);
10:     Y = G(Z,W);
11:     OUTPUT X;
12:     OUTPUT Y;
13: INPUTS
14:     1 2 3 18 19

This example is similar to the previous one but it has a problem. The polynomial G is declared with two variables X and Y, but its equation (called poly_body in the grammar) has Z, which is different from X and Y. The output captures this error (see below for error codes and their format):
Semantic Error Code 2: 5

The task numbers specify what your program should do with the input program. Task 1 is one of the larger tasks, but it is not graded as one big task. Task 1 has the following functionalities:

The other tasks, 2, 3, 4, 5 and 6, have the following functionalities:

Detailed descriptions of these tasks and what the output should be for each of them are given in the sections that follow. The remainder of this section explains what the output of your program should be when multiple task numbers are listed in the tasks_section. If task 1 is listed in the tasks_section, then task 1 should be executed. Remember that task 1 performs syntax error checking and semantic error checking. If the execution of task 1 results in an error, and task 1 is listed in the tasks_section, then your program should only output the error messages (as described below) and exit. If task 1 results in an error (syntax or semantic), no other tasks will be executed even if they are listed in the tasks_section. If task 1 is listed in the tasks_section and does not result in an error message, then task 1 produces no output. In that case, the outputs of the other tasks that are listed in the tasks_section should be produced by the program. The order of these outputs should be according to the task numbers.
So, first the output of task 2 is produced (if task 2 is listed in tasks_section), then the output of task 3 is produced (if task 3 is listed in tasks_section), and so on. If task 1 is not listed in the tasks_section, task 1 still needs to be executed. If task 1's execution results in an error, then your program should output nothing in this case. If task 1 is not listed and task 1's execution does not result in an error, then the outputs of the other tasks that are listed in tasks_section should be produced by the program. The order of these outputs should be according to the task numbers. So, first the output of task 2 is produced, then the output of task 3 is produced (if task 3 is listed in tasks_section), and so on.

You should keep in mind that tasks are not necessarily listed in order in the tasks_section and they can even be repeated. For instance, we can have the following TASKS section:

TASKS
1 3 4 1 2 3

In this example, some tasks are listed more than once. Later occurrences are ignored. So, the tasks_section above is equivalent to

TASKS
1 2 3 4

In the implementation guide, I explain a simple way to read the list and sort the task numbers using a boolean array.

For task 1, your solution should detect syntax and semantic errors in the input program as specified in this section. If the input is not correct syntactically, your program should output
SYNTAX ERROR !!!!!&%!!
If there is a syntax error, the output of your program should exactly match the output given above. No other output should be produced in this case, and your program should exit after producing the syntax error message. The provided parser.* skeleton files already have a function that produces the message above and exits the program.

Semantic checking also checks for invalid input. Unlike syntax checking, semantic checking requires knowledge of the specific lexemes and does not simply look at the input as a sequence of tokens (token types). I start by explaining the rules for semantic checking.
I also provide some examples to illustrate these rules.

Semantic Error Code 1 is for a polynomial_name that is declared more than once. The output in this case should be of the form
Semantic Error Code 1: <line no 1> <line no 2> ... <line no k>
where <line no 1> through <line no k> are the numbers of each of the lines in which a duplicate polynomial_name appears in a polynomial header. The numbers should be sorted from smallest to largest. For example, if the input is (recall that line numbers are not part of the input and are just for reference):

1:  TASKS
2:      1 3 4
3:  POLY
4:      F1 =
5:          x^2 + 1;
6:      F2 = x^2 + 1;
7:      F1 = x^2 + 1;
8:      F3 = x^2 + 1;
9:      G = x^2 + 1;
10:     F1 = x^2 + 1;
11:     G(X,Y) = X Y^2 + X Y;
12: EXECUTE
13:     INPUT Z;
14:     INPUT W;
15:     X = F1(Z);
16:     Y = G(W);
17:     OUTPUT X;
18:     OUTPUT Y;
19: INPUTS
20:     1 2 3 18 19

then the output should be
Semantic Error Code 1: 7 10 11
because on each of these lines the name of the polynomial in question has a duplicate declaration. Note that only the line numbers for the duplicates are listed. The line number for the first occurrence of a name is not listed.

Invalid monomial names are reported as
Semantic Error Code 2: <line no 1> ... <line no k>
where <line no 1> through <line no k> are the numbers of lines in which an invalid monomial name appears, with one number printed per occurrence of an invalid monomial name. If there are multiple occurrences of an invalid monomial name on a line, the line number should be printed multiple times. The line numbers should be sorted from smallest to largest.

Evaluations of undeclared polynomials are reported as
Semantic Error Code 3: <line no 1> ... <line no k>
where <line no 1> through <line no k> are the numbers of each of the lines in which a polynomial_name appears in a polynomial_evaluation but for which there is no polynomial_declaration with the same name. The line numbers should be listed from the smallest to the largest.
For example, if the input is:

1:  TASKS
2:      1 3 4
3:  POLY
4:      F1 = x^2 + 1;
5:      F2 = x^2 + 1;
6:      F3 = x^2 + 1;
7:      F4 = x^2 + 1;
8:      G1 = x^2 + 1;
9:      F5 = x^2 + 1;
10:     G2(X,Y) = X Y^2 + X Y;
11: EXECUTE
12:     INPUT Z;
13:     INPUT W;
14:     X = G(Z);
15:     Y = G2(Z,W);
16:     X = F(Z);
17:     Y = G2(Z,W);
18: INPUTS
19:     1 2 3 18 19

then the output should be
Semantic Error Code 3: 14 16
because on line 14 there is an evaluation of polynomial G but there is no declaration for polynomial G, and on line 16 there is an evaluation of polynomial F but there is no declaration of polynomial F.

Argument-count mismatches are reported as
Semantic Error Code 4: <line no 1> ... <line no k>
where <line no 1> through <line no k> are the numbers of each of the lines in which a polynomial_name appears in a polynomial_evaluation but the number of arguments in the polynomial evaluation is different from the number of parameters in the corresponding polynomial declaration. The line numbers should be listed from the smallest to the largest. For example, if the input is:

1:  TASKS
2:      1 3 4
3:  POLY
4:      F1 = x^2 + 1;
5:      F2 = x^2 + 1;
6:      F3 = x^2 + 1;
7:      F4 = x^2 + 1;
8:      G1 = x^2 + 1;
9:      F5 = x^2 + 1;
10:     G2(X,Y) = X Y^2 + X Y;
11: EXECUTE
12:     INPUT Z;
13:     INPUT W;
14:     X = G2(X,Y, Z);
15:     Y = G2(Z,W);
16:     X = F1(Z);
17:     Y = F5(Z,Z);
18:     Y = F5(Z,Z,W);
19: INPUTS
20:     1 2 3 18 19

then the output should be
Semantic Error Code 4: 14 17 18
You can assume that an input program will have only one kind of semantic error.
So, for example, if a test case has Semantic Error Code 2, it will not have any other kind of semantic errors.

For task 2, your program should output the results of all the polynomial evaluations in the program. In this section I give a precise definition of the meaning of the input and the output that your compiler should generate. In a separate document that I will upload a little later, I will give an implementation guide that will help you plan your solution. You do not need to wait for the implementation guide to write the parser!

The program uses names to refer to variables in the EXECUTE section. For each variable name, we associate a unique location that will hold the value of the variable. This association between a variable name and its location is assumed to be implemented with a function location that takes a string as input and returns an integer value. We assume that there is a variable mem which is an array with each entry corresponding to one variable. All variables should be initialized to 0 (zero).

To allocate mem entries to variables, you can have a simple table or map (which I will call the location table) that associates a variable name with a location. As your parser parses the input program, if it encounters a variable name in an input_statement, it needs to determine if this name has been previously encountered or not by looking it up in the location table. If the name is a new variable name, a new location needs to be associated with it, and the mapping from the variable name to the location needs to be added to the location table. To associate a location with a variable, you can simply keep a counter that tells you how many locations have been used (associated with variable names). Initially, the counter is 0. The first variable will have location 0 associated with it (will be stored in mem[0]), and the counter is incremented to become 1.
The next variable will have location 1 associated with it (will be stored in mem[1]), and the counter is incremented to become 2, and so on. For example, if the input program is

1:  TASKS
2:      1 2
3:  POLY
4:      F1 = x^2 + 1;
5:      F2(x,y,z) = x^2 + y + z + 1;
6:      F3(y) = y^2 + 1;
7:      F4(x,y) = x^2 + y^2;
8:      G1 = x^2 + 1;
9:      F5 = x^2 + 1;
10:     G2(X,Y,Z,W) = X Y^2 + X Z + W + 1;
11: EXECUTE
12:     INPUT X;
13:     INPUT Z;
14:     Y = F1(Z);
15:     W = F2(X,Z,Z);
16:     OUTPUT W;
17:     OUTPUT Y;
18:     INPUT X;
19:     INPUT Y;
20:     INPUT Z;
21:     Y = F3(X);
22:     W = F4(X,Y);
23:     OUTPUT W;
24:     OUTPUT Y;
25:     INPUT X;
26:     INPUT Z;
27:     INPUT W;
28:     W = G2(X,Z,W,
29:             Z);
30: INPUTS
31:     1 2 3 18 19 22 33 12 11 16

then the locations of variables will be
X 0
Z 1
Y 2
W 3

We explain the semantics of the four kinds of statements in the program. Input statements get their input from the sequence of inputs. We refer to the i'th value that appears in inputs as the i'th input. The i'th input statement in the program, of the form INPUT X, is equivalent to:
mem[location("X")] = i'th input
Output statements have the form OUTPUT ID, where the lexeme of the token ID is a variable name. This is the output variable of the output statement. Output statements print the values of their output variables.
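The location table and counter described above can be sketched as a simple lookup-or-allocate function (the fixed-size arrays and the 32-character name limit here are just for illustration):

```c
#include <string.h>

#define MAX_VARS 100

static char names[MAX_VARS][32]; /* location table: index = location */
static int  next_location = 0;   /* how many locations are in use */
int mem[MAX_VARS];               /* variable storage, zero-initialized */

/* Return the location of name, allocating a fresh one on first use. */
int location(const char *name) {
    for (int i = 0; i < next_location; i++)
        if (strcmp(names[i], name) == 0)
            return i;                       /* seen before: reuse */
    strcpy(names[next_location], name);     /* new name: next free slot */
    return next_location++;
}
```

With the EXECUTE section above, location("X") returns 0, location("Z") returns 1, location("Y") returns 2, and location("W") returns 3, matching the table of locations shown.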
If the output statement has the form OUTPUT X;, its effect is equivalent to:

    cout << mem[location("X")] << endl;

Running the program as

    $ ./a.out < input_data.txt > output_file.txt

will read standard input from input_data.txt and produce standard output to output_file.txt.

Now that we know how to use standard IO redirection, we are ready to test the program with test cases.

For a given input to your program, there is an expected output, which is the correct output that should be produced for that input. So, a test case is represented by two files: the input is given in test_name.txt and the expected output is given in test_name.txt.expected.

To test a program against a single test case, first we execute the program with the test input data:

    $ ./a.out < test_name.txt > program_output.txt

With this command, the output generated by the program is stored in program_output.txt. To see if the program generated the correct expected output, we need to compare program_output.txt and test_name.txt.expected. We do that using the diff command, which determines the differences between two files:

    $ diff -Bw program_output.txt test_name.txt.expected

The options -Bw tell diff to ignore whitespace differences between the two files. If the files are the same (ignoring whitespace differences), we should see no output from diff; otherwise, diff will produce a report showing the differences between the two files.

We consider that the test passed if diff could not find any differences; otherwise we consider that the test failed.

Our grading system uses this method to test your submissions against multiple test cases. To avoid having to type the commands shown above manually for each test case, we provide you with a script that automates this process. The script name is test1.sh.
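The run-and-diff steps above can be automated with a small shell loop. The sketch below is our own illustration of the idea and is not the contents of the provided test1.sh; it assumes the compiled program is ./a.out and the test cases live in a tests/ directory.

```shell
#!/bin/sh
# Run every test case in tests/ and report PASS/FAIL per test.
run_tests() {
    for input in tests/*.txt; do
        expected="$input.expected"
        # Skip anything without a matching expected-output file.
        [ -f "$expected" ] || continue
        # Run the program with redirected standard input and output.
        ./a.out < "$input" > program_output.txt
        # -Bw: ignore whitespace differences, as in the grading setup.
        if diff -Bw program_output.txt "$expected" > /dev/null; then
            echo "PASS: $input"
        else
            echo "FAIL: $input"
        fi
    done
}

run_tests
```

This mirrors the manual procedure exactly: one redirection-based run per input file, followed by a whitespace-insensitive diff against the expected output.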
test1.sh will make your life easier by allowing you to test your code against multiple test cases with one command.

Here is how to use test1.sh to test your program. This will create a directory called tests. The output of the script should be self-explanatory. To test your code after you make changes, you will just perform the last two steps (compile and run test1.sh).

[1] Programs have access to another standard stream, called standard error (e.g. std::cerr in C++). Any such output is still displayed on the terminal screen. It is possible to redirect standard error to a file as well, but we will not discuss that here.
