Assignment Chef

Browse assignments

Assignment catalog

33,401 assignments available

[SOLVED] CS771 assignment 3

Assignment 3

1 Question 1

The first part of the solution uses a linear model, Elastic Net regression, which combines the Ridge regression penalty (L2 loss) and the Lasso regression penalty (L1 loss). The objective/loss function of the model is

C(w) = argmin_w [ ||y - Xw||_2^2 + α ( λ1 ||w||_2^2 + λ2 ||w||_1 ) ]    (1)

where
• w = the weights applied to each of the features in the dataset used to construct the regression model,
• α = constant that multiplies both the lasso and ridge penalties,
• λ1 = regularization parameter for the Ridge regression penalty,
• λ2 = regularization parameter for the Lasso regression penalty.

During regularization, the lasso (L1) part of the penalty produces a sparse model, whereas the quadratic (ridge) part makes the regularization path more stable and promotes the grouping effect, reducing the number of variables that need to be selected. The grouping effect allows correlated variables to be identified together, which improves the sampling procedure. The method first finds the Ridge regression coefficients and then, in a second step, applies a Lasso-style shrinkage to those coefficients.

In order to determine the best-performing linear model, randomized search cross-validation has been applied for efficient hyperparameter selection over the following sets of hyperparameters. Here grid is a dictionary whose keys are the hyperparameters of the Elastic Net regression model and whose values are the candidate settings:

    grid = dict()
    grid['alpha'] = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 0.0, 0.2, 1.0]
    grid['l1_ratio'] = np.arange(0, 1, 0.01)
    grid['selection'] = ['cyclic', 'random']
    grid['max_iter'] = [2, 10, 100, 1000]
    grid['tol'] = [1e-4, 1e-3, 1e-5, 1e-2, 1e-1, 0.0]

The randomized search cross-validation returns the best estimator with the following hyperparameters:

    ElasticNet(alpha=0.1, l1_ratio=0.62, selection='random', tol=0.001, warm_start=True)

The corresponding MAE scores on the training data are 5.6248 (for the O3 levels) and 6.5136 (for the NO2 levels).
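As a point of reference, the search described above could be set up roughly as follows with scikit-learn; this is only a minimal sketch, and the variable names X_train and y_train (the training features and the O3 or NO2 targets), as well as the scoring and cross-validation choices, are assumptions rather than part of the original submission.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import RandomizedSearchCV

# Hyperparameter sets taken from the grid above.
grid = {
    'alpha': [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 0.0, 0.2, 1.0],
    'l1_ratio': np.arange(0, 1, 0.01),
    'selection': ['cyclic', 'random'],
    'max_iter': [2, 10, 100, 1000],
    'tol': [1e-4, 1e-3, 1e-5, 1e-2, 1e-1, 0.0],
}

# X_train, y_train are assumed to hold the weather features and the O3 (or NO2) targets.
search = RandomizedSearchCV(ElasticNet(warm_start=True), grid, n_iter=100,
                            scoring='neg_mean_absolute_error', cv=5, random_state=0)
# search.fit(X_train, y_train)
# print(search.best_estimator_)   # e.g. ElasticNet(alpha=0.1, l1_ratio=0.62, ...)
```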
2 Question 2

A suitable non-linear model which can effectively fit the given dataset is the K Nearest Neighbours (KNN) algorithm. KNN can be used both as a classification and a regression model; it predicts an unknown data point from its k nearest data points under a chosen distance metric, typically the Euclidean, Minkowski, or Manhattan distance. One strength of the KNN regression algorithm worth highlighting is its ability to model non-linear relationships. For regression tasks, the algorithm selects the k nearest points according to the distance metric and predicts the new point as approximately the average of those neighbours. A distance-weighted k-nearest-neighbour rule can also be used, in which a neighbour's weight decreases as the sample-to-neighbour distance increases, for example

w_i = 1 / d(x, x_i)    (2)

The optimal value of k has been determined by plotting the train MAE loss and test MAE loss for both the O3 and NO2 sensor values. The MAE loss variation for different k values is shown below.

Figure 1: MAE loss variation for k values

We find that the KNN regressor provides the minimum mean absolute error for a k value of 5:

    model = KNeighborsRegressor(n_neighbors=5, p=1, algorithm="auto", n_jobs=-1)

Some other models tested for the given regression problem include XGBoost, Multi-layer Perceptron, and Random Forest regressors.

• XGBoost Regressor: XGBoost uses a combination of two techniques, gradient boosting and tree boosting, to improve the accuracy of the model. Gradient boosting minimizes the loss function (here, mean absolute error) by iteratively adding weak learners (e.g., decision trees) to the model. Tree boosting, on the other hand, optimizes the split points of the decision trees by minimizing the loss function.
• Random Forest Regressor: The algorithm works by creating a large number of decision trees, each trained on a subset of the data and a subset of the features. The final prediction is then made by averaging the predictions of all the trees in the forest. Random forest regression also includes several regularization controls to prevent overfitting and improve generalization, such as maximum depth, minimum samples per leaf, and maximum features.
• Artificial Neural Network: The network consists of layers of 64, 32, and 8 units. The optimizer used is Adam, the kernels are initialized with He uniform, and the loss function is mean absolute error. The network is trained with backpropagation for around 100 epochs on the training data with a batch size of 64.

| Non-linear models | O3 MAE loss | NO2 MAE loss |
|---|---|---|
| K Neighbours Regressor | 3.3284 | 2.3228 |
| XGBoost Regressor | 5.1433 | 4.7230 |
| Random Forest Regressor | 6.0784 | 5.3145 |
| MultiLayer Perceptron | 4.8807 | 4.9178 |

Table 1: Stats of different non-linear models

3 Question 3

The code is available in the submit.py file. The code snippet is shown below:

    import numpy as np
    import pickle as pkl

    # Define your prediction method here
    # df is a dataframe containing timestamps, weather data and potentials
    def my_predict(df):
        X = np.array(df.iloc[:, 1:])
        # Load your model file
        model = pkl.load(open("knn_model.pkl", "rb"))
        # Make two sets of predictions, one for O3 and another for NO2
        test_preds = np.transpose(model.predict(X))
        pred_o3, pred_no2 = test_preds[0], test_preds[1]
        # Return both sets of predictions
        return (pred_o3, pred_no2)
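For completeness, the knn_model.pkl file loaded in my_predict could be produced with a short training script along the following lines; this is only a sketch, and the names X_train and Y_train (features and stacked O3/NO2 targets) are assumptions rather than part of the original submission.

```python
import pickle as pkl
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# X_train: training features; Y_train: array of shape (n_samples, 2) with O3 and NO2 targets.
X_train = np.random.rand(100, 5)        # placeholder data for illustration only
Y_train = np.random.rand(100, 2)

model = KNeighborsRegressor(n_neighbors=5, p=1, algorithm="auto", n_jobs=-1)
model.fit(X_train, Y_train)             # multi-output regression: predicts O3 and NO2 together

with open("knn_model.pkl", "wb") as f:
    pkl.dump(model, f)
```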

$25.00 View

[SOLVED] CS771 assignment 2

1.1 Theory

Figure 1: Illustration of a working ID3 algorithm

We have used the Iterative Dichotomizer (ID3) decision tree algorithm, one of the most widely used algorithms for supervised machine learning classification tasks. The algorithm works by recursively partitioning the training data based on their attributes until the decision tree produces the purest splits (referred to as the leaves). ID3 uses a top-down greedy approach to build a decision tree: the tree is constructed from the top, and the greedy approach means that at each iteration the best feature is selected to split the node.

1.2 Our approach

The decision tree constructed to solve the given Wordle-Solvr problem consists of the following components:

1.2.1 process_node

• For the process_node function, every non-root node of the constructed ID3 tree is passed through the get_entropy function, which finds the index into the vocabulary that minimizes the entropy the most, or equivalently maximizes the information gain. This index is used to access the most appropriate query to ask at that step.
• All words are extracted from the all_words list, and the reveal function uncovers the mask between the query and the required word, which is stored in split_dict as the key. The value in split_dict is an array of the indices that return the given mask when queried.

get_entropy

• The get_entropy function iterates over all of the words present in that node (the possible candidate queries) and passes the array to the function calc_entropy, which returns the entropy of the current node.
• Another dictionary is maintained that maps each mask to the number of queries which, when queried with the word, produce that particular mask. These values are used to calculate the weighted sum of entropies after splitting the node.
• The difference between the initial entropy of the parent node and the weighted sum of entropies after splitting the node gives the information gain, which is used to produce the best split out of all words:

Gain(S, A) = E(S) - Σ_{v in V} ( |S_v| / |S| ) · E(S_v)    (1)

where V = the possible values of attribute A, S = the set of examples X, and S_v = the subset where X_A = v.

• Information gain indirectly describes the mutual information between the attribute and the class labels of S.

calc_entropy

• This function simply calculates the Shannon entropy

E(S) = - Σ_i p_i · log2(p_i)    (2)

where p_i is the probability of each class label that can be produced by the corresponding split based on its attribute.

2 Solution 2

The entire algorithm has been implemented in the submit.py file.
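To make the entropy and information-gain computations above concrete, here is a minimal sketch; the function names mirror calc_entropy and the gain computation from the write-up, but the exact signatures used in submit.py are not shown in this description, so treat the interfaces and the toy data as illustrative only.

```python
import math
from collections import Counter

def calc_entropy(labels):
    """Shannon entropy E(S) = -sum_i p_i * log2(p_i) over the class labels in a node."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(labels, split_dict):
    """Gain(S, A) = E(S) - sum_v (|S_v| / |S|) * E(S_v), where split_dict maps each
    mask (attribute value) to the subset of labels that produce it."""
    total = len(labels)
    weighted = sum((len(subset) / total) * calc_entropy(subset)
                   for subset in split_dict.values())
    return calc_entropy(labels) - weighted

# Toy example: labels grouped by the mask a candidate query would produce.
labels = ['a', 'a', 'b', 'b', 'b', 'c']
split = {'mask1': ['a', 'a'], 'mask2': ['b', 'b', 'b'], 'mask3': ['c']}
print(information_gain(labels, split))   # a higher gain means a better query
```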

$25.00 View

[SOLVED] CSC4005 project 1 - embarrassingly parallel programming

This project weighs 12.5% of your final grade (4 projects for 50%).

Prologue

As the first programming project, students are required to solve an embarrassingly parallel problem with six different parallel programming languages to get an intuitive understanding and hands-on experience of how the simplest parallel programming works. A very popular and representative application of embarrassingly parallel problems is image processing, since the computation of each pixel is completely or nearly independent of the others. This programming project consists of two parts:

Part-A: RGB to Grayscale

Note: You do not need to modify the codes in this part. Just compile and execute them on the cluster to get the experiment results, and include those in your report.

In this part, students are provided with ready-to-use source programs in a properly configured CMake project. Students need to download the source programs, compile them, and execute them on the cluster to get the experiment results. During the process, they need to gain a brief understanding of how each parallel programming model is designed and implemented to do computation in parallel (for example, doing computations on multiple data with one instruction, multiple processes with message passing in between, or multiple threads with shared memory).

Problem Description

What is an RGB Image?

An RGB image can be viewed as three different images (a red scale image, a green scale image and a blue scale image) stacked on top of each other; when fed into the red, green and blue inputs of a color monitor, it produces a color image on the screen.

Reference: https://www.geeksforgeeks.org/matlab-rgb-image-representation/

What is a Grayscale Image?

A grayscale (or graylevel) image is simply one in which the only colors are shades of gray. The reason for differentiating such images from any other sort of color image is that less information needs to be provided for each pixel. In fact, a "gray" color is one in which the red, green and blue components all have equal intensity in RGB space, so it is only necessary to specify a single intensity value for each pixel, as opposed to the three intensities needed to specify each pixel in a full color image.

RGB to Grayscale as a Point Operation

Transferring an image from RGB to grayscale is a point operation, which means a function is applied to every pixel in an image or in a selection. The key point is that the function operates only on the pixel's current value, which makes it completely embarrassingly parallel. In this project, we use the NTSC formula as the function applied to the RGB image:

Gray = 0.299 * Red + 0.587 * Green + 0.114 * Blue

Reference: https://support.ptc.com/help/mathcad/r9.0/en/index.html#page/PTC_Mathcad_Help/example_grayscale_and_color_in_images.html

Example

Convert the Lena JPEG image (256×256) from RGB to grayscale (Lena RGB / Lena Gray).
Convert a 4K JPEG image (3840×2599) from RGB to grayscale.
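As a point of comparison for the provided C programs in Part-A, the per-pixel NTSC point operation looks like the following; this is only an illustrative NumPy sketch (the array names are made up), not the CMake project code supplied with the assignment.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Apply the NTSC point operation Gray = 0.299*R + 0.587*G + 0.114*B to every pixel.
    rgb is a (height, width, 3) uint8 array; each output pixel depends only on itself,
    which is what makes the conversion embarrassingly parallel."""
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return gray.astype(np.uint8)

# Example with a random "image"; a real run would load the Lena or 4K JPEG instead.
image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
print(rgb_to_gray(image).shape)   # (256, 256)
```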
Part-B: Image Filtering (Soften with Equal Weight Filter)

Problem Description

Image filtering involves applying a function to every pixel in an image or selection, but the function uses not only the pixel's current value but also the values of neighboring pixels. Some filtering functions are listed below; the famous convolutional kernel computation is also a kind of image filtering:

blur
sharpen
soften
distort

Two images below demonstrate in detail how the image filtering is done. Basically, we have a filter matrix of a given size (3, for example), and we slide that filter matrix across the image to compute the filtered value by element-wise multiplication and summation.

How to do image filtering with a filter matrix / An example of image filtering of size 3

In this project, students are required to apply the simplest size-3 low-pass filter with equal weights to smooth the input JPEG image, shown below. Note that your program should also work for other filter matrices of size 3 with different weights; that means you should not do optimizations specific to the 1/9 weight, like replacing multiplication with addition.

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Reminders of Implementation

1. The pixels on the boundary of the image do not have all 8 neighbor pixels. For these pixels, you can either use padding (set the value of the missing neighbors to 0) or simply ignore them, which means you handle only the (width - 2) * (height - 2) inner image. In this way, all the pixels you process have all 8 neighbors.
2. Check the correctness of your program with the Lena RGB image. The 4K image has high resolution and the effect of the smooth operation is hard to tell.

Examples

Lena RGB / Lena Smooth: original and smoothed images, from left to right.
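To make the sliding-window computation concrete, here is a minimal sequential NumPy sketch of the size-3 filter over the inner (width - 2) × (height - 2) region; it is illustrative only (your actual submission must use the six parallel programming models listed below), and it deliberately keeps the generic multiply-and-sum so that any size-3 kernel works, not just the 1/9 weights.

```python
import numpy as np

def filter3x3(image, kernel):
    """Slide a 3x3 kernel over the inner region of a (H, W, C) image: each output pixel is
    the element-wise product-and-sum of the kernel with its 3x3 neighborhood.
    Boundary pixels are skipped, as allowed by reminder 1 above."""
    h, w, c = image.shape
    out = image.astype(np.float32).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            for ch in range(c):
                window = image[y - 1:y + 2, x - 1:x + 2, ch].astype(np.float32)
                out[y, x, ch] = np.sum(window * kernel)
    return np.clip(out, 0, 255).astype(np.uint8)

# The equal-weight low-pass filter from the project description.
kernel = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
smoothed = filter3x3(image, kernel)
print(smoothed.shape)   # (64, 64, 3)
```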
Benchmark Image

The image used for performance evaluation is a 20K JPEG image with around 250 million pixels (19200 x 12995), obtained by upsampling the 4K image; the image has been uploaded to BlackBoard. Please download that image to your docker container or onto the cluster. Do not use Lena or the 4K image for the performance evaluation in your report, because the problem size is too small to show the parallel speedup.

Requirements

Six parallel programming implementations for Part-B (60%)

SIMD (10%)
MPI (10%)
Pthread (10%)
OpenMP (10%)
CUDA (10%)
OpenACC (10%)

As long as your programs compile and execute to produce the expected output image with the command you give in the report, you get full marks.

Performance of Your Program (30%)

Try your best to optimize your parallel programs for higher speedup. If your programs show performance similar to the sample solutions provided by the teaching staff, you get full marks. Points will be deducted if your parallel programs perform poorly and no justification can be found in the report. (Target performance will be released soon.) Some hints to optimize your program are listed below:

Try to avoid nested for loops, which often lead to bad parallelism.
Change the way the image data or filter matrix are stored for more efficient memory access.
Try to avoid expensive arithmetic operations (for example, double-precision floating-point division is very expensive and takes a few dozen cycles to finish).
Partition your data for computation in a proper way for a balanced workload when doing parallelism.

One Report in PDF (10%, No Page Limit)

The report does not have to be very long or beautiful to get a good grade, but you need to include what you have done and what you have learned in this project. The following components should be included in the report:

How to compile and execute your program to get the expected output on the cluster.
Briefly explain how each parallel programming model does computation in parallel. What are the similarities and differences between them? Explain these with what you have learned from the lectures (different types of parallelism, ILP, DLP, TLP, etc.).
What kinds of optimizations have you tried to speed up your parallel program for Part-B, and how do they work?
Show the experiment results you get for both Part-A and Part-B and do some numerical analysis, such as calculating the speedup and efficiency, demonstrated with tables and figures.

The Extra Credit Policy

A combination of multiple parallel programming models, like combining MPI and OpenMP together.
Try to bind the program to a specific CPU core for better performance. Refer to: https://slurm.schedmd.com/mc_support.html
For SIMD, maybe you can have a try with a different ISA (Instruction Set Architecture) to do ILP (Instruction-Level Parallelism).

How to execute the sample programs in Part-A?

Dependency Installation

Libjpeg

Libjpeg is the tool that we use to manipulate JPEG images. You need to install its packages with yum in your docker container instead of your host OS (Windows or MacOS). This package has already been installed on the cluster, so feel free to use it there.

```bash
# Check the libjpeg packages that are going to be installed
yum list libjpeg*
# Install libjpeg-turbo-devel.x86_64 with yum
yum install libjpeg-turbo-devel.x86_64 -y
# Check that you have installed the libjpeg packages correctly
yum list libjpeg*
```

The terminal output of yum list libjpeg* after the installation should be as follows:

```bash
[root@cf49d1025aff bin]# yum list libjpeg*
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * epel: ftp.riken.jp
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Installed Packages
libjpeg-turbo.x86_64            1.2.90-8.el7    @base
libjpeg-turbo-devel.x86_64      1.2.90-8.el7    @base
Available Packages
libjpeg-turbo.i686              1.2.90-8.el7    base
libjpeg-turbo-devel.i686        1.2.90-8.el7    base
libjpeg-turbo-static.i686       1.2.90-8.el7    base
libjpeg-turbo-static.x86_64     1.2.90-8.el7    base
libjpeg-turbo-utils.x86_64      1.2.90-8.el7    base
```

Upgrade to CMake3 and GCC-7 (in docker container only)

The programs need cmake3 and gcc-7 for compilation and execution. These upgrades have been done on the cluster, which means you can compile and execute the programs directly on the cluster with no problem. If you need to develop programs in your docker container, for example for the Part-B implementation, you need to upgrade your cmake and gcc yourself.

```bash
# Install cmake3 with yum
yum install cmake3 -y
cmake --version    # output should be 3.17.5
# Install gcc/g++-7 with yum
yum install -y centos-release-scl
yum install -y devtoolset-7-gcc
scl -l
scl enable devtoolset-7 bash
gcc -v             # output should be 7.3.1
```

How to compile the programs?

```bash
cd /path/to/project1
mkdir build && cd build
# Change to -DCMAKE_BUILD_TYPE=Debug for debug build error message logging
# Here, use cmake on the cluster and cmake3 in your docker container
cmake ..
make -j4
```

How to execute the programs?
In Your Docker Container

```bash
cd /path/to/project1/build
# Sequential
./src/cpu/sequential_PartA /path/to/input.jpg /path/to/output.jpg
# MPI
mpirun -np {Num of Processes} ./src/cpu/mpi_PartA /path/to/input.jpg /path/to/output.jpg
# Pthread
./src/cpu/pthread_PartA /path/to/input.jpg /path/to/output.jpg {Num of Threads}
# OpenMP
./src/cpu/openmp_PartA /path/to/input.jpg /path/to/output.jpg
# CUDA
./src/gpu/cuda_PartA /path/to/input.jpg /path/to/output.jpg
# OpenACC
./src/gpu/openacc_PartA /path/to/input.jpg /path/to/output.jpg
```

On the Cluster

Important: Change the directory of the output file in sbatch.sh first.

```bash
# Use sbatch
cd /path/to/project1
sbatch ./src/scripts/sbatch_PartA.sh
```

Performance Evaluation

Part-A: RGB to Grayscale

Experiment Setup

On the cluster, allocated with 32 cores
Experiment on a 20K JPEG image (19200 x 12995 = 250 million pixels)
sbatch file for Part-A
Performance measured as execution time in milliseconds

| Number of Processes / Cores | Sequential | SIMD (AVX2) | MPI | Pthread | OpenMP | CUDA | OpenACC |
|-----------------------------|------------|-------------|-----|---------|--------|------|---------|
| 1                           | 632        | 416         | 665 | 704     | 475    | 27   | 28      |
| 2                           | N/A        | N/A         | 767 | 638     | 471    | N/A  | N/A     |
| 4                           | N/A        | N/A         | 490 | 358     | 448    | N/A  | N/A     |
| 8                           | N/A        | N/A         | 361 | 178     | 288    | N/A  | N/A     |
| 16                          | N/A        | N/A         | 288 | 116     | 158    | N/A  | N/A     |
| 32                          | N/A        | N/A         | 257 | 62      | 126    | N/A  | N/A     |

Performance Evaluation of Part-A (numbers refer to execution time in milliseconds)

Appendix

Appendix A: GCC Optimization Options

You can list all the supported optimization options for gcc either from the terminal or through the online documentation.

```bash
# Execute on your docker container or on the cluster
gcc --help=optimizers
```

Online documentation: Options That Control Optimization for gcc-7.3

You can find a lot of useful options to let the gcc compiler do optimization for your program, like tree vectorization:

-ftree-vectorize: Perform vectorization on trees. This flag enables -ftree-loop-vectorize and -ftree-slp-vectorize if not explicitly specified.
-ftree-loop-vectorize: Perform loop vectorization on trees. This flag is enabled by default at -O3 and when -ftree-vectorize is enabled.
-ftree-slp-vectorize: Perform basic block vectorization on trees. This flag is enabled by default at -O3 and when -ftree-vectorize is enabled.

Appendix B: Tutorials of the Six Parallel Programming Languages

SIMD
https://users.ece.cmu.edu/~franzf/teaching/slides-18-645-simd.pdf
MPI
https://mpitutorial.com/tutorials/
OpenMP
https://engineering.purdue.edu/~smidkiff/ece563/files/ECE563OpenMPTutorial.pdf
Pthread
https://www.cs.cmu.edu/afs/cs/academic/class/15492-f07/www/pthreads.html
https://hpc-tutorials.llnl.gov/posix/
CUDA
https://newfrontiers.illinois.edu/news-and-events/introduction-to-parallel-programming-with-cuda/
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html
OpenACC
https://ulhpc-tutorials.readthedocs.io/en/latest/gpu/openacc/basics/
https://www.openacc.org/sites/default/files/inline-files/OpenACC_Programming_Guide_0_0.pdf
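Since the report asks for speedup and efficiency numbers, here is a small sketch of how they could be derived from execution times like the Part-A table above; the formulas (speedup = T_sequential / T_parallel, efficiency = speedup / cores) are standard, and the sample values below are simply the MPI column of that table.

```python
# Speedup and efficiency from measured execution times (milliseconds).
t_sequential = 632                       # 1-core sequential time from the Part-A table
mpi_times = {1: 665, 2: 767, 4: 490, 8: 361, 16: 288, 32: 257}

for cores, t_parallel in mpi_times.items():
    speedup = t_sequential / t_parallel
    efficiency = speedup / cores
    print(f"{cores:>2} cores: speedup = {speedup:.2f}, efficiency = {efficiency:.2%}")
```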

$25.00 View

[SOLVED] COMP9021 coding quiz 1

Coding Quiz 1

Description

You are provided with a stub in which you need to insert your code where indicated, without making any changes to the existing code, to complete the task. The current code generates a mapping (that is, a dictionary) based on a seed and an upper bound provided by the user. Your task is to process the list of cycles based on the generated mapping and the reversed dictionary, as described below.

Marking

Make sure not to change the filename quiz_1.py when submitting by clicking on the [Mark] button in Ed. It is your responsibility to check that your submission went through properly using the Submissions link in Ed; otherwise your mark will be zero for Quiz 1.

Test Cases

$ python quiz_1.py
Enter two integers: 0 4
The generated mapping is: {2: 3, 4: 1}
The keys are, from smallest to largest: [2, 4]
Properly ordered, the cycles given by the mapping are: []
The (triply ordered) reversed dictionary per lengths is: {1: {1: [4], 3: [2]}}

$ python quiz_1.py
Enter two integers: 0 6
The generated mapping is: {1: 1, 3: 3, 5: 6, 6: 6}
The keys are, from smallest to largest: [1, 3, 5, 6]
Properly ordered, the cycles given by the mapping are: [[1], [3], [6]]
The (triply ordered) reversed dictionary per lengths is: {1: {1: [1], 3: [3]}, 2: {6: [5, 6]}}

$ python quiz_1.py
Enter two integers: 0 11
The generated mapping is: {2: 7, 3: 11, 4: 10, 5: 10, 7: 2, 9: 5, 10: 10, 11: 5}
The keys are, from smallest to largest: [2, 3, 4, 5, 7, 9, 10, 11]
Properly ordered, the cycles given by the mapping are: [[2, 7], [10]]
The (triply ordered) reversed dictionary per lengths is: {1: {2: [7], 7: [2], 11: [3]}, 2: {5: [9, 11]}, 3: {10: [4, 5, 10]}}

$ python quiz_1.py
Enter two integers: 10 9
The generated mapping is: {1: 5, 2: 6, 3: 5, 4: 5, 5: 6, 6: 7, 7: 1, 9: 6}
The keys are, from smallest to largest: [1, 2, 3, 4, 5, 6, 7, 9]
Properly ordered, the cycles given by the mapping are: [[1, 5, 6, 7]]
The (triply ordered) reversed dictionary per lengths is: {1: {1: [7], 7: [6]}, 3: {5: [1, 3, 4], 6: [2, 5, 9]}}

$ python quiz_1.py
Enter two integers: 20 11
The generated mapping is: {2: 4, 3: 9, 4: 4, 5: 8, 6: 2, 7: 5, 8: 11, 9: 1, 10: 10, 11: 5}
The keys are, from smallest to largest: [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
Properly ordered, the cycles given by the mapping are: [[4], [5, 8, 11], [10]]
The (triply ordered) reversed dictionary per lengths is: {1: {1: [9], 2: [6], 8: [5], 9: [3], 10: [10], 11: [8]}, 2: {4: [2, 4], 5: [7, 11]}}

$ python quiz_1.py
Enter two integers: 50 15
The generated mapping is: {1: 5, 2: 14, 3: 15, 4: 3, 5: 5, 6: 5, 7: 15, 8: 6, 9: 10, 10: 15, 11: 12, 12: 15, 13: 14, 14: 8, 15: 9}
The keys are, from smallest to largest: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
Properly ordered, the cycles given by the mapping are: [[5], [9, 10, 15]]
The (triply ordered) reversed dictionary per lengths is: {1: {3: [4], 6: [8], 8: [14], 9: [15], 10: [9], 12: [11]}, 2: {14: [2, 13]}, 3: {5: [1, 5, 6]}, 4: {15: [3, 7, 10, 12]}}

$ python quiz_1.py
Enter two integers: 12 38
The generated mapping is: {1: 11, 2: 13, 3: 38, 4: 38, 5: 6, 6: 36, 7: 9, 8: 37, 9: 4, 10: 9, 11: 36, 12: 6, 13: 3, 15: 29, 16: 8, 17: 13, 19: 22, 20: 3, 21: 38, 22: 33, 24: 12, 25: 4, 27: 11, 28: 23, 29: 22, 30: 3, 31: 11, 32: 17, 33: 9, 34: 26, 35: 30, 36: 31, 37: 22, 38: 37}
The keys are, from smallest to largest: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 16, 17, 19, 20, 21, 22, 24, 25, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38]
Properly ordered, the cycles given by the mapping are: [[4, 38, 37, 22, 33, 9], [11, 36, 31]]
The (triply ordered) reversed dictionary per lengths is: {1: {8: [16], 12: [24], 17: [32], 23: [28], 26: [34], 29: [15], 30: [35], 31: [36], 33: [22]}, 2: {4: [9, 25], 6: [5, 12], 13: [2, 17], 36: [6, 11], 37: [8, 38]}, 3: {3: [13, 20, 30], 9: [7, 10, 33], 11: [1, 27, 31], 22: [19, 29, 37], 38: [3, 4, 21]}}

$ python quiz_1.py
Enter two integers: 34 56
The generated mapping is: {1: 34, 2: 8, 3: 35, 4: 11, 5: 28, 6: 47, 7: 24, 9: 27, 10: 38, 11: 4, 12: 38, 15: 4, 16: 55, 17: 39, 19: 35, 20: 55, 23: 22, 24: 33, 25: 2, 26: 12, 27: 35, 28: 13, 29: 1, 30: 53, 31: 38, 32: 2, 33: 29, 34: 12, 35: 1, 36: 8, 37: 48, 38: 55, 39: 33, 40: 42, 41: 41, 43: 25, 44: 50, 45: 56, 47: 6, 48: 35, 49: 52, 50: 4, 51: 1, 52: 40, 53: 43, 54: 17, 55: 48, 56: 41}
The keys are, from smallest to largest: [1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 15, 16, 17, 19, 20, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56]
Properly ordered, the cycles given by the mapping are: [[1, 34, 12, 38, 55, 48, 35], [4, 11], [6, 47], [41]]
The (triply ordered) reversed dictionary per lengths is: {1: {6: [47], 11: [4], 13: [28], 17: [54], 22: [23], 24: [7], 25: [43], 27: [9], 28: [5], 29: [33], 34: [1], 39: [17], 40: [52], 42: [40], 43: [53], 47: [6], 50: [44], 52: [49], 53: [30], 56: [45]}, 2: {2: [25, 32], 8: [2, 36], 12: [26, 34], 33: [24, 39], 41: [41, 56], 48: [37, 55]}, 3: {1: [29, 35, 51], 4: [11, 15, 50], 38: [10, 12, 31], 55: [16, 20, 38]}, 4: {35: [3, 19, 27, 48]}}

Hints

(1) The cycles

A cycle is a path of length at least 1 that starts and ends with the same key and in which no other key appears more than once. Concretely, a cycle is a list of keys [k1, k2, k3, ..., kn] where the value of the last key kn is the first key k1; that is, the following key: value pairs must exist in the mapping (dictionary): k1: k2, k2: k3, ..., kn-1: kn, and kn: k1.

For instance, in the example with 10 9 as input, there is one cycle, [1, 5, 6, 7], since the following key: value pairs are in the mapping: {1: 5}, {5: 6}, {6: 7}, and {7: 1}.

When recording a cycle, do not repeat the first key at the end; that is, the cycle 1 -> 5 -> 6 -> 7 -> 1 ({1: 5} {5: 6} {6: 7} {7: 1}) should be recorded as [1, 5, 6, 7], not [1, 5, 6, 7, 1].

Please also note that the keys within a cycle are not necessarily ordered. The only requirement is that the first elements of the cycles are in order (not the elements within each cycle), as shown in the example with 12 38 as input: [[4, 38, 37, 22, 33, 9], [11, 36, 31]]. The two cycles above are not internally ordered. However, looking at the first elements of the cycles only, the two cycles are ordered, since 4 is smaller than 11.

(2) The (triply ordered) reversed dictionary per lengths

For instance, in the example with 0 4 as input, the generated mapping is {2: 3, 4: 1}. First generate the reversed dictionary {1: [4], 3: [2]}; the final result is then {1: {1: [4], 3: [2]}}.

In the example with 0 6 as input, the generated mapping is {1: 1, 3: 3, 5: 6, 6: 6}. First generate the reversed dictionary {1: [1], 3: [3], 6: [5, 6]}; the final result is then {1: {1: [1], 3: [3]}, 2: {6: [5, 6]}}.

It is triply ordered because there are three levels of sorting:
• level 1: per length, which is the key of the outer dictionary;
• level 2: per original value, which is the key of the inner dictionary;
• level 3: the values of the inner dictionary, which are lists, are sorted.
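One possible way to compute the two required structures from the generated mapping is sketched below; this is not the official solution, just an illustration of the cycle detection and the triply ordered reversed dictionary described in the hints.

```python
def find_cycles(mapping):
    """Return the cycles of the mapping. Each cycle is rotated to start at its smallest
    key (and that key is not repeated at the end); cycles are sorted by first element."""
    cycles, seen = [], set()
    for start in mapping:
        path, key = [], start
        while key in mapping and key not in path and key not in seen:
            path.append(key)
            key = mapping[key]
        if key in path:                                   # the walk closed on itself: a cycle
            cycle = path[path.index(key):]
            smallest = cycle.index(min(cycle))
            cycles.append(cycle[smallest:] + cycle[:smallest])
        seen.update(path)
    return sorted(cycles)

def reversed_per_lengths(mapping):
    """Group keys by their value, then group those lists by length (three sorted levels)."""
    by_value = {}
    for key in sorted(mapping):
        by_value.setdefault(mapping[key], []).append(key)
    by_length = {}
    for value, keys in by_value.items():
        by_length.setdefault(len(keys), {})[value] = keys
    return {length: dict(sorted(inner.items())) for length, inner in sorted(by_length.items())}

mapping = {1: 5, 2: 6, 3: 5, 4: 5, 5: 6, 6: 7, 7: 1, 9: 6}      # the "10 9" test case
print(find_cycles(mapping))           # [[1, 5, 6, 7]]
print(reversed_per_lengths(mapping))  # {1: {1: [7], 7: [6]}, 3: {5: [1, 3, 4], 6: [2, 5, 9]}}
```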

$25.00 View

[SOLVED] CIT 593 module 10 assignment – C strings 2025

In lecture you have learned that in C an array of characters with a NULL termination is considered a string, whereas an array of characters without a NULL termination is simply an array of characters. The standard library available with most C compilers includes several functions designed to work with and manipulate strings in C. Strings are so common in C that knowing how to use these functions is considered a basic skill.

The goal of this assignment is to give you experience with strings in C and with the library of functions designed to work with strings, give you an appreciation of how those functions work, and also continue to help you work with pointers and arrays (in the context of C strings).

We have provided a basic framework and several function definitions that you must implement:

• my_string.h contains the function declarations you must implement. Aside from adding the required declarations for my_strrev, my_strccase, and optionally my_strtok, do not modify this file.
• my_string.c contains empty implementations for the functions defined in my_string.h. We have provided the implementations of my_strlen using array notation and pointer arithmetic for you. You will provide the remaining implementations in this file.
• The test environment program is for your use only. We will not review it or even look at it, and it will not be used for grading. You are free to write any code necessary to test your implementations.

In lecture, we discussed a commonly used function, strlen, that is part of the string.h library. Its job is to take in a C string, count the number of characters up to (but not including) the NULL character, and return the string's length. As an example, if our string was

    char my_string[100] = "Tom";

strlen(my_string) would return 3. Even though there are 100 bytes allocated on the stack for the string, since there are only 3 characters (followed by a NULL), the length of the string is indeed 3.

In lecture, we presented two versions of a strlen-like function. One function uses array notation and the other uses pointer notation. Ultimately they perform the same operation.

my_strlen_array treats the incoming argument (char* string) as if it is an array, using array notation (i.e. with square brackets [ and ]):

    size_t my_strlen_array(const char *str) {
        int len = 0;
        while (str[len] != '\0') {   /* count characters until the NULL terminator */
            len++;
        }
        return (len);
    }

my_strlen_pointer treats the incoming argument as the pointer it truly is, using pointer arithmetic to determine the string's length:

    size_t my_strlen_pointer(const char *str) {
        const char *s;
        for (s = str; *s; ++s)
            ;                        /* advance s to the NULL terminator */
        return (s - str);            /* pointer difference gives the length */
    }

Note: size_t is not an actual C type; it is typedef'ed, that is, it is a shortcut for unsigned long.

Your task for this assignment is to implement your own library of string functions to mimic the standard C library string functions. In Codio, we have provided a header file called my_string.h. In that header file, we have declared several functions: my_strlen_array, my_strcpy_pointer, etc. In my_string.c, we have implemented only two of the many functions: my_strlen_array and my_strlen_pointer, as described above. You will implement the remaining functions. In a third file, program1.c, we have provided some basic code that calls the functions in your my_string library and compares their output to the functions in the standard C library string.h. This is one way to quickly check if your output is correct. Look carefully at these three files before continuing.
For this problem, your task is to implement your own library of string functions to mimic the standard C library string functions.

    int main (int argc, char** argv);

    int main (int argc, char** argv) {
        printf("# of arguments passed: %d\n", argc);
        for (int i = 0; i < argc; i++) {
            printf("argv[%d] = %s\n", i, argv[i]);
        }
        return (0);
    }

    ./program3 arg1 2 arg3 4 arg5

There is a single "submission check" test that runs once you upload your code to Gradescope. This test checks that you have submitted all required files and that your program and any autograder code compile successfully. It does not run your program or provide any input on whether it works or not; this check just ensures that all the required components exist. This test is performed after uploading to Gradescope. Ensure that you are passing this check before closing Gradescope. If you are not passing this check, please reach out to the TAs for troubleshooting assistance.

The autograder will also show the results of six tests. The remaining tests will be hidden until after grades are published.

You will submit this assignment to Gradescope in the assignment entitled Assignment 10: Strings in C. Download the required .c source and .h header files (as well as any additional helper files required) and your Makefile from Codio to your computer, then upload all of these files to the Gradescope assignment. We expect my_string.c, my_string.h, program3.c, and makefile. Do not submit program1.c, program2.c, or program4.c. Do not submit intermediate files (anything .o).

You have unlimited submissions until the deadline, after which late penalties apply as noted in the syllabus. We will only grade the last submission uploaded. Do not mark your Codio workspace complete; only the submission in Gradescope will be used for grading purposes. There is no page matching and no academic integrity submission for autograder assignments.

This assignment is worth 127.5 points, normalized to 100% for gradebook purposes.

Problem 1 (standard functions) is worth 72 points (each function is 9 points, with equally weighted sub-tests per function).
Problem 2 (custom functions) is worth 18 points (each function is 9 points, with equally weighted sub-tests per function).
Problem 3 (parsing strings) is worth 10 points.

Problems 1, 2, and the Extra Credit are tested with unit testing. We will run different scenarios for each function to validate the functionality (partial credit based on which tests fail). Problem 3 checks the final output produced by your program and compares it to the expected output. It must match exactly for credit: double check that your program does not have any extra output. We will only grade the last submission, regardless of the results of any previous submission. We will not provide partial credit for autograder tests. You may ask for feedback by submitting a regrade request using the Miscellaneous Adjustments rubric item.

The Extra Credit is worth 6 percentage points, so the highest grade on the assignment is 106%. Your extra credit must not break functionality for the non-extra-credit requirements. There is no partial credit; it must work completely for any credit. We will not give guidance on how to do this since it is designed to be a challenge problem.
Hints from previous semesters

strlen reference: https://www.tutorialspoint.com/c_standard_library/c_function_strlen.htm
strcpy reference: https://www.tutorialspoint.com/c_standard_library/c_function_strcpy.htm
strchr reference: https://www.tutorialspoint.com/c_standard_library/c_function_strchr.htm
strcat reference: https://www.tutorialspoint.com/c_standard_library/c_function_strcat.htm
strcmp reference: https://www.tutorialspoint.com/c_standard_library/c_function_strcmp.htm
sscanf reference: https://www.tutorialspoint.com/c_standard_library/c_function_sscanf.htm
sprintf reference: https://www.tutorialspoint.com/c_standard_library/c_function_sprintf.htm
strtok reference: https://www.tutorialspoint.com/c_standard_library/c_function_strtok.htm
strtok reference (Linux manual pages): https://man7.org/linux/man-pages/man3/strtok_r.3.html
The const modifier: http://www.geeksforgeeks.org/const-qualifier-in-c/

$25.00 View

[SOLVED] CS7638 AI for Robotics – Indiana Drones project (SLAM) spring 2025

Hello Indiana Drones! We uncovered the location of an invaluable piece of ancient treasure – the likes of which we have never seen before. Unfortunately, the treasure is located in a dense and dangerous jungle, making a typical safari impossible. That's where you come in! As a drone navigation and extraction expert, your mission, should you choose to accept it, is to:

Part A SLAM (worth 60 points): Estimate The Locations Of Trees In The Jungle Environment And The Drone Given Pre-Scripted Movements and Measurements

Complete the SLAM class in the indiana_drones.py file.

To test your SLAM module, testing_suite_indiana_drones.py initiates a drone at (0,0) in a jungle with a lot of trees for each test case. The location, size and number of the trees are initially unknown to you. The drone moves through the jungle environment in a series of pre-scripted movements. At each time step, the drone's sensors report measurements that are passed through your process_measurements function, and the drone then makes movements that are passed through your process_movement function. The goal of these functions is to update your belief of the locations of the drone and trees in your environment given the measurement and movement inputs. Those estimates will be read using your get_coordinates function and compared against the ground truth.

The drone's sensors report the distance (m), bearing (rad) and size (m) of trees (within the sensor's horizon) relative to the drone's location and orientation (see Figure 1 – Measurement). Note: since you only see trees within the sensor's horizon, trees may appear and disappear in your measurements as you move through the environment and get closer to or further from previously unseen/seen trees. The drone's controller turns the drone by the pre-scripted steering angle (rad), followed by a movement in a straight line by the pre-scripted distance (m) (see Figure 1 – Movement). Both the measurement and movement have Gaussian noise in their distance and bearing/steering.

In each test case, 30 points are for accurately estimating (within a 0.25 meter radius) the position of your drone and 30 points are for accurately estimating (within a 0.25 meter radius) the location of each of the trees. Points are deducted for each inaccuracy.

Figure 1: Drone Diagram

Part B Navigation (worth 40 points): Navigate To The Treasure While Avoiding Trees In Your Path and Extract It

Complete the IndianaDronesPlanner class in the indiana_drones.py file.

To test your planner, testing_suite_indiana_drones.py initiates a drone at (0,0) in a jungle with a lot of trees for each test case. The location, size and number of the trees are initially unknown to you. There is a piece of treasure in the environment whose location is known to you. The goal of your planner should be to move towards the treasure and extract it while avoiding crashes with trees on the way.

At each time step, the drone's sensors report their measurements, and this is provided as input to the next_move function along with the location of the drone. The output of the function is used to move the drone through the jungle environment or extract the treasure. The output of your navigation algorithm can be one of two actions in the next_move function: move and extract. The move action moves the drone by the steering angle and distance you prescribe. Your drone has a maximum turning angle [in radians] and a maximum distance [in meters] that it can move each timestep [both passed using a parameter].
Movement commands that exceed these values will be ignored and cause the drone not to move. The extract action extracts the treasure at your location. The treasure will only be extracted if it is within the defined radius (0.25 meters); if not, there will be a time penalty for extracting dirt.

You should specify the movement as follows: move 1 1.57 [command distance steering], which means the drone will first turn counterclockwise 90 degrees [1.57 radians] and then move a distance of 1 meter. When you issue your extract action you should supply 3 arguments in total, including the treasure type (*) and the current estimated location (x, y) of the drone, as follows: extract * 1.5 -2.1 [command treasure_type x y].

Whenever the drone enters within a particular radius of a tree's center (i.e. the canopy of a tree), it is deemed to have crashed. In this project we assume the drone is a point (even though in the visualization it occupies some area). Note: the drone moves on a straight-line path, so even if the starting and ending points of your movement aren't inside the tree's canopy, the path could still intersect the tree, which would result in a penalty. The line_circle_intersect function in testing_suite_indiana_drones.py may be helpful.

40 points are for extracting the treasure within the time limit, of which 10 points are deducted for each tree crash (up to a maximum of 20 points). For example, if the drone extracted the treasure within the time limit but crashed into one tree and one tree only, you will receive 30 points.

We are using the Gradescope autograder system, which allows you to upload and grade your assignment with a remote/online autograder. You must submit your indiana_drones.py file (only) to Gradescope to receive credit. Do not archive (zip, tar, etc.) it. Your code must be valid Python code, and you may use external modules. We encourage you to keep any testing code in a separate file that you do not submit. Your code should also NOT display a GUI or visualization when we import or call your function under test.

We have provided a testing suite and test cases similar to the ones we'll be using for grading the project, which you can use to help ensure your code is working correctly. These testing suites are NOT complete, and you will need to develop other, more complicated test cases to fully validate your code. We also recommend making your own simple/trivial test cases to unit test your algorithm as you code it. We encourage you to share your test cases (only) with other students on Ed Discussions.

The testing_suite_indiana_drones.py will run all cases from both Part A and Part B 10 times, remove the lowest score for each and average the rest to calculate your score. It will do this automatically (i.e. you do not need to loop your code). Since the score is stochastic and may have small variations, feel free to run your code on Gradescope multiple times.

Ensure that your code consistently succeeds on each of the given test cases as well as on a wide range of other test cases of your own design. For each test case, your code must complete execution within the prescribed time limit (10 seconds) or it will receive no credit. Note that the grading machine is relatively low powered, so you may want to set your local time limit to 5 seconds to ensure that you don't exceed the CPU limit. Note that if VERBOSE is on in the testing_suite_indiana_drones.py file, printing will take a lot of time and slow down your execution.
So please feel free to increase the time limit while debugging with VERBOSE on, but when you submit your code, it should run within the 10 second time limit on Gradescope.

Usage: python testing_suite_indiana_drones.py

A visualization file has been provided to aid in debugging. The visualization will plot 6 pieces of data: the real location of the drone, the estimated location of the drone, the real locations of the trees, the estimated locations of the trees, the types of the trees ('A', 'B', ...etc) and the location of the treasure present in the environment. The real location of the drone will be a drone with 4 rotors. The estimated location of the drone will be a small blue dot. The real locations of the trees will be represented by circles of varying radii. The trees that are visible to the drone's sensors are green in color, and the trees that are too far away for the sensor to detect are gray. The estimated location of a tree will be a small black dot. The type of tree/treasure will be next to the real location. The treasure is represented by a red triangle.

The estimated points to plot need to be returned from next_move as a 2nd (optional) value in the form of a dictionary. This is needed to show your SLAM system's estimates of drone and landmark locations in the visualization. The keys should be the landmark ids and the values should be their x, y coordinates. The key representing the drone's estimated location will be 'self'. For example: {'self': (.2, 1.5), landmark_id_1: (.4, 1.9)}

Usage: python visualize.py [-h] [--part {A,B}] [--case {1,2,3,4,5}]
Example to run the visualization: python visualize.py --part B --case 3

The visualize.py and testing_suite_indiana_drones.py have a VERBOSE flag. If the flag is True, it will print helpful outputs in the terminal for debugging. In addition, there is a NOISE_FLAG in the testing_suite_indiana_drones.py. Ensure that your code works with no noise first before you test against a noisy environment.

Q: I'm confused. We are given so many files. What exactly should we do again and in which file?
A: The main file you are concerned with is indiana_drones.py. This is what you fill in and submit to Gradescope. It contains two classes (SLAM and IndianaDronesPlanner) whose methods are used by the testing_suite_indiana_drones.py to run SLAM and Navigation respectively in various test cases to generate your score. drone.py contains helper classes and methods that you are free to use in your implementation. visualize.py is provided to help you debug your code with a visualization.

Q: Where does the drone start? Which way is it facing?
A: Although the drone starts in different places in different test cases, you can assume that it starts at (0,0) for each test case and report your outputs accordingly. Your drone will always have a bearing of zero degrees when it starts (i.e. facing east).

Q: What are the (x,y) return values from the get_coordinates function relative to?
A: They should return your best guess for the position of the drone and trees relative to the drone's starting location (0,0).

Q: How can I uniquely identify trees in the environment?
A: Each tree will have a unique landmark id. Although there may be more than one of the same type of tree in the area, each will have a unique id.
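As a quick reference, the sketch below restates the interfaces described above in code form; the exact method signatures in indiana_drones.py are not reproduced in this description, so the names, arguments and numbers here are illustrative only.

```python
import math

# A tree measurement is reported relative to the drone's pose, so an estimated tree
# position can be recovered from the drone's estimated (x, y, heading):
#   tree_x = x + distance * cos(heading + bearing)
#   tree_y = y + distance * sin(heading + bearing)
x, y, heading = 0.0, 0.0, 0.0          # drone starts at (0,0) facing east (bearing 0)
distance, bearing = 3.0, 0.5           # example sensor reading for one tree
tree_estimate = (x + distance * math.cos(heading + bearing),
                 y + distance * math.sin(heading + bearing))

# Action strings returned by the planner (the drone turns first, then moves straight):
move_action = 'move 1 1.57'            # command, distance (m), steering (rad)
extract_action = 'extract * 1.5 -2.1'  # command, treasure type, estimated drone x, y

# Optional second return value from next_move: estimates for the visualization,
# keyed by landmark id, with 'self' for the drone itself.
estimates = {'self': (0.2, 1.5), 'landmark_id_1': (0.4, 1.9)}
print(tree_estimate, move_action, extract_action, estimates)
```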

$25.00 View

[SOLVED] ECSE444 lab 2 - GPIO and DAC

Abstract

In this lab you will (a) learn how to take and react to a digital GPIO input, in the form of a button press, (b) generate output for a digital-to-analog converter, (c) learn how to use the debugging interface to plot variables as they change with time, and (d) learn how to take input from an analog-to-digital converter.

Deliverables for demonstration

● C implementation of LED lighting on button press.
● C implementation of triangle, saw, and sine signals.
● C implementation scaling ADC outputs to calculate CPU temperature in degrees C.
● C implementation of temperature-dependent output to the speaker.
● C implementation of a final application that integrates the above deliverables.

Grading

● LED lighting on button press
○ 10%
● C implementation of signals
○ 10% triangle
○ 10% saw
○ 10% sine
● C implementation scaling ADC outputs to calculate CPU temperature in degrees C
○ 20%
● C implementation of temperature-dependent output to the speaker
○ 20%
● C implementation of final application
○ 20%

Changelog

Overview

In this lab, we'll take a more in-depth look at GPIO, and introduce both digital-to-analog and analog-to-digital conversion. Unlike the previous two labs, this time you won't be walked through each step of the process, but instead be directed to reference material. First, we'll use an on-board button to control an LED. Second, we'll configure and drive an on-chip digital-to-analog converter (DAC) with a periodic signal using trigonometric functions available from CMSIS-DSP. Then, we'll configure ADCs to read the internal reference voltage used for analog-to-digital conversion and the internal temperature sensor, and subsequently use these values to determine the temperature of the processor in degrees Celsius. Finally, we'll use the push-button to control an application that can switch between two modes: one where a fixed output is sent to the speaker, and another where the output depends on CPU temperature.

Resources

ARM Cortex-M4 Programming Manual
B-L475E-IOT01A User Manual / B-L4S5I-IOT01A User Manual
HAL Driver User Manual
STM32L475VG Datasheet / STM32L4S5VI Datasheet
STM32L47xxx Reference Manual / STM32L4+ Reference Manual

Part 1: GPIO in Digital Mode, with Buttons and LEDs

Configuring the Board

Initialization

As before, start a new project, reviewing instructions in Lab 0 and Lab 1 if necessary. These basic steps (as well as some additional ones) will be repeated at the beginning of each of the rest of the labs.

1. Check the clock configuration to ensure that HCLK is 80 MHz. (See Lab 0.)
2. Clear the pinout. (See Lab 0.)
3. Set up PB3 and PA13 to support use of the ITM for debugging. (See Lab 1.)

LED and Push-button Configuration

For the first part of this lab, we'll use the push-button and LED. Start by configuring the LED. (See Lab 0.) Next, to configure the button we need to first determine which pin is associated with the blue button on the development board; the black button is always configured to reset the processor. Looking at the table of contents of the B-L475E-IOT01A User Manual, observe that LEDs and buttons are covered in Section 7.14. There, in Table 2, we observe that the blue button is referred to as B2; however, unlike for the LEDs, no pinout information is provided. (For the L4S5I variant, this information is in Section 6.12, Table 4 of the manual.) Referring once again to the table of contents, observe that schematics are available in Appendix B. At the beginning of the appendix, we observe that peripherals are covered in Figure 31.
Figure 31 indicates the name of the signal that the blue button is ultimately connected to, BUTTON_EXTI13. (Interestingly, there is no such schematic for the L4S5I variant, but the signal name is the same. How would you figure this out on your own? Who knows?!) Finally, referring to Appendix A (I/O assignment), we can search Table 11 for this signal (Signal or Label) and thereby determine the pin associated with this signal (Pin Name), as well as the option to select when configuring this pin (Feature / Comment). Once you've identified the appropriate pin, configure it. As in Lab 0, it is recommended that you add labels for the pins used for the LED and push-button, as this makes it easier to refer to them in your source code.

Lighting the LED when the Button is Pressed

Now, write code that turns on the LED when the button is pressed. The simplest code to implement this is a while loop that continuously checks (or polls) the status of the button. If the button is pressed, the LED is set to on. Otherwise, the LED is set to off. Section 31.2.4 of the HAL Driver User Manual lists the functions that you will need. Note: recall that if you've labeled pins, you can use these names rather than those detailed in the specifications for these functions. You can confirm the names of the pins, and see the code generated to set them up, by inspecting the function MX_GPIO_Init(...) in main.c. See the STM32L47xxx Reference Manual (same chapter for the L4S5I variant) for more information.

Part 2: GPIO in Analog Mode, with DAC

Configuring the Board

The second part of this lab requires that we configure a few more GPIO pins, this time for analog output. A DAC converts digital register values (i.e., integers) into analog values (i.e., voltages), e.g., to drive a speaker with an oscillating signal. We're going to drive two different signals on two different DAC output channels: a saw wave and a triangle wave, with as similar a frequency as possible.

There are two basic paths to discovering which pins must be configured for the on-board DAC. The first is through manuals. Unlike for the push-button, the signals we're looking for don't appear in the figures of Appendix A. This time, we need Chapter 4 of the STM32L47xxx Datasheet, which describes the pinout of the chip. (This is also found in Chapter 4 of the STM32L4Sxx Datasheet.) Your chip is using the LQFP100 package. Looking through this manual, we observe that the chip has a single DAC with two channels, DAC1_OUT1 and DAC1_OUT2. Table 16, starting on page 60 (Table 15, page 77), lists all of the pins for the device, and therefore those which correspond to these two outputs. Select each of these pins (according to their Pin name) in STM32CubeIDE, and choose DAC1_OUT1 and DAC1_OUT2 as their corresponding mode. The second path is looking in the Pinout & Configuration tab, which summarizes many of the features of the chip on the left hand side under categories such as System Core, Analog, Timers, etc. Under Analog, choose DAC1. Enabling OUT1 and OUT2 will automatically enable the correct pins in the appropriate mode.

In order to configure the DAC, we need to find DAC1 under Analog in the list of features under Pinout & Configuration. If you haven't already, in DAC1 Mode and Configuration, enable OUT1 and OUT2 in Connected to external pin only mode. Then, verify the DAC Out1 Settings and DAC Out2 Settings:

● Output Buffer (Enable)
● Trigger (None)
● User Trimming (Factory trimming)
● Sample And Hold (Sampleandhold Disable)

Re-generate your code and return to the IDE.
Hopefully you remembered to write your code within USER CODE BEGIN and USER CODE END, and it's all still there!

Making Signals

Previously, we read the state of a button and wrote the state of an LED. Now we need to initialize and write the state of the DAC to generate signals in an audible frequency range (so we can verify the system with a small speaker). Assign each signal to a different DAC output channel. To initialize the DAC and write data to it, you'll need more HAL functions. Sections 16.2.3 and 16.2.4 of the HAL Driver User Manual list the functions you will need; they are detailed in Section 16.2.7. Also note that HAL_Delay(...) or a for loop can be used to insert a delay between operations in your code. The details of the usage of HAL_Delay can be found in the HAL Driver User Manual.

Before you test your code with a speaker, it is worthwhile to use the ITM interface to verify that it is working as intended. Ensure the Serial Wire Viewer (SWV) is enabled and configured appropriately in the debugger configuration. (See Lab 1.) Since we'll use the ITM's data trace functionality this time, no code modifications are required (e.g., to timestamp events as in Lab 1). Start the debugger. Once it pauses execution at the first line of main, ensure that the SWV Data Trace Timeline Graph is visible; find it under the Window > Show View > SWV pull-down menu. Before resuming execution, we need to configure (wrench) and then start recording (red button), just like in Lab 1. However, this time the configuration of the Serial Wire Viewer looks a little different. Enable Comparator 0 and Comparator 1, and write the names of the variables you wish to monitor in Var/Addr. In my case, the variables that hold the current signal values are triangle and saw. You will notice that LED1 blinks when this part of your project is running. Can you figure out why?

Making Sounds

Now that we've verified that the waveforms look about right, it's time to verify the signal with an oscilloscope. If the signal appears as expected, wire the speaker to the DAC. Note that (1) an Arduino Uno is pictured; your board and the Arduino Uno have the same external interface (A0-A5 and D0-D15, plus assorted other pins). (2) The speaker that is pictured is different from yours; yours will, however, fit perfectly into the breadboard with the indicated spacing. (3) The resistor is between ground and the speaker in order to both limit the current at the GPIO and limit the power at the speaker, protecting both devices.

Making Better Sounds

How do the triangle and saw waves sound? Not great. Do they have the desired frequency? Not really, though we can't really fix this without using timers and interrupts (later!). Next, generate a signal with approximately the same period as above but using the arm_sin_f32() function in the DSP library. (See Lab 1.) As before, trace the values with SWV and check with an oscilloscope before driving the speaker.
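To give a feel for the sample values involved, the sketch below computes one period of the three waveforms scaled to a 12-bit DAC range; it is only an illustration of the arithmetic (the lab itself is written in C with the HAL and CMSIS-DSP arm_sin_f32, and the sample count and amplitude here are arbitrary choices, not lab requirements).

```python
import math

N = 64                 # samples per period (arbitrary, for illustration)
FULL_SCALE = 4095      # full range of a 12-bit DAC

# Saw: a linear ramp from 0 to full scale over one period.
saw = [int(FULL_SCALE * i / (N - 1)) for i in range(N)]

# Triangle: ramp up for the first half of the period, then back down.
triangle = [int(FULL_SCALE * (2 * i / N if i < N / 2 else 2 - 2 * i / N)) for i in range(N)]

# Sine: offset and scaled so it stays within 0..FULL_SCALE.
sine = [int((FULL_SCALE / 2) * (1 + math.sin(2 * math.pi * i / N))) for i in range(N)]

print(max(saw), max(triangle), max(sine))   # all within the 12-bit range
```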
Note (2) under Reading the temperature, and find the appropriate sampling time for reading the temperature sensor in the STM32L475VG Datasheet (STM32L4S5VI Datasheet.) Now read Section 18.4.16 in the STM32L47xxx Reference Manual. (Section 21.4.16 in the STM32L4+ Reference Manual.) Note the assumptions about (a) sampling time, and (b) clock frequency. When configuring the board, we’ll have to pick a clock frequency for the processor and for the ADC, and choose the sampling cycle for the ADC. For an example of how ADC sampling cycle relates to sampling time, see Table 67 in the STM32L475VG Datasheet. (Table 73 in the STM32L4S5VI Datasheet.) Between the sampling time requirements for the sensor, and configuration options for the ADC, you should now have enough information to configure your board properly. ADC Output Scaling ADCs convert voltages into a digital representation by comparing the analog input with an internal reference voltage. Sensors have a range of analog output; under ideal circumstances, the sensor and ADC are calibrated such that this range maps onto the full range of the ADC’s digital output. This calibration occurs under an assumption about the internal reference voltage used by the ADC; understanding calibration conditions, and how they differ from operating conditions, is essential for writing software that correctly interprets sensor values (binary numbers scaled according to what is sensed relative to an internal reference voltage) as physical quantities (e.g., temperature). For example, the processor temperature sensor in your development board has been factory calibrated at two temperatures. This means that the behavior of your sensor has been characterized under controlled conditions: if the reference voltage is X, and the temperature is Y, the ADC outputs Z. Given the ADC output at these temperatures, and the assumption that sensor output voltage varies linearly with temperature, we can interpolate to determine operating temperature. Note that these ADC outputs are taken assuming an ADC resolution of 12 bits. In order to properly scale the ADC output for the temperature sensor, we need (a) these constants, and (b) an equation to relate them. The equation you’ve seen; it’s in Section 18.4.32 (Section 21.4.32) of the STM32L47xxx Reference Manual (STM32L4+ Reference Manual), noted above. The equation also lists the various names of constants that you’ll need. Note: the calibration output ADC values are stored in the memory of your processor. The STM32L475VG Datasheet (STM32L4S5VI Datasheet) has more information. Additional notes: ● You’ll need to use the internal voltage reference sensor, too, which has also been calibrated as described above; repeat the above process for the internal reference voltage. Section 18.4.34 in the STM32L47xxx Reference Manual (Section 21.4.34 in the STM32L4+ Reference Manual) is a good place to start. With information on calibration conditions, and where to find calibrated ADC outputs, you should now have the information you need to write code to scale the ADC output for the temperature sensor. ADC Configuration You’ll need to configure two ADCs: one to read the internal temperature sensor, and another to read the internal voltage reference. You’ll find the ADCs on the left side of the Pinout & Configuration tab, under Analog; the STM32L47xxx variant has three, ADC1-ADC3. Each ADC has a number of channels that are available (under Mode); IN1-INX should be disabled in each case. 
Enable the temperature sensor channel in one ADC, and the Vrefint channel in another. The STM32L4+ variant only has two ADCs, and Vts and Vrefint are both connected to ADC1. Fortunately, there is built-in support for the ADC to iteratively sample each on successive calls to the appropriate functions. To get the ADC configured for this mode of operation, first enable both the temperature sensor and internal VREF channels for ADC1 (under Mode). Then, set the number of conversions to 2 (under Configuration and Parameters Settings). Now enable Discontinuous Conversion and set the number of discontinuous conversions to 1. Set the number of conversions to 2, and confirm that one channel (Vrefint or Vts) is associated with each rank. Finally, the End of Conversion Selection should be set to End of single conversion. (With thanks to Jacoby Roy.) Resolution. Under ADC_Regular_ConversionMode > Rank you’ll find options for Sampling Time. You need to choose a sampling time for each ADC that satisfies sampling time guidelines for the sensors and takes the ADC clock frequency into account. The ADC clock frequency can be changed under the Clock Configuration tab. Note: you can empirically test if your ADC has too short a sampling time. If you increase the sampling time, and the sampled value increases significantly, then you need to give the sensor’s output more time to converge. Reading ADC Output The HAL Driver User Manual describes how to make use of the ADC API in Section 7.2. Note that code generation takes care of configuring the ADC; all we need to do is follow the instructions for how to perform ADC conversion using polling. Polling means waiting for the ADC to finish its conversion; alternatively, interrupts can be used to notify user code when a conversion has been completed. This is the subject of the next lab. If you are using the STM32L4+ variant and discontinuous conversion, note that the first sequence of conversion function calls will use Rank 1 and the second will use Rank 2. Additional conversions will alternate between the two. (With thanks to Jacoby Roy.) Write code to poll each ADC approximately every 200 ms. Verify your choice of ADC sampling time; observe the effect of changing ADC resolution. Scaling ADC Output Now extend your code to calculate temperature from the temperature sensor’s ADC output. Notes: ● You need to read the calibrated ADC outputs from memory; the calibration temperatures are defined as constants, but can also be found in documentation. ● You need to scale the calibrated ADC outputs based on how the internal reference voltage differs from calibration conditions. Temperature sensor output is linear with respect to reference voltage. ● You need to scale the ADC output for the temperature sensor whenever you use an output resolution other than 12-bit to account for the difference in output range. ● And be careful with operator order, data type casting, etc! ● You can check your implementation using the ADC calibration output values. The scaled temperature value is expected to be close to room temperature. A blow dryer will be available in the lab to (carefully) heat the board and observe that internal temperature readings change. Part 4: Putting It All Together Now combine the operation of all of the above in a single application. ● By default, the application should play a fixed sound of your choice (triangle, saw, sine). ● On button press, ○ The LED should turn on, and ○ The sound should change to be a function of the temperature sensor. 
● On subsequent button press, ○ The LED should turn off, and ○ The sound should change back to a fixed sound. The fixed sound should rotate among triangle, saw, and sine. At all times, the output to the DAC should be displayed in the data trace timeline graph; you will also be expected to verify the output with an oscilloscope during your demo. Deliverables Your demo is limited to 10 minutes. Be sure to highlight top-level software structure and program flow. When applicable, it is useful to highlight that your software computes correct partial and final values. Your demo will be graded by assessing, for each part above, the correctness of the observed behavior, and the correctness of your description of that behavior. Grading The breakdown of grading is as follows: ● LED lighting on button press ○ 10% ● C implementation of signals ○ 10% triangle ○ 10% saw ○ 10% sine ● C implementation scaling ADC outputs to calculate CPU temperature in degrees C ○ 20% ● C implementation of temperature-dependent output to speaker ○ 20% ● C implementation of final application ○ 20% Each part of the demo will be graded for (a) clarity, (b) technical content, and (c) correctness: ● 1pt clarity: the demo is clear and easy to follow ● 1pt technical content: correct terms are used to describe your software ● 3pt correctness: given an input, the correct output is clearly demonstrated Submission Please submit, on MyCourses, your: ● Source code used to demo (only files you modified, including IOC file).


[SOLVED] Cs6263/ece 8813 mini project #4

Cyber-Physical Systems Security  Mini Project #4: Secure System Analysis Using Machine LearningObjectiveIn this project, through the lens of machine learning, we aim to predict and validate the genuine behavior of a cyber-physical system, enabling the identification of discrepancies that might arise from cyber-physical attacks. We have prepared two one-hour sessions to help you ramp up on the ML topics and their applications in the CPS security domain. Before moving forward, we strongly recommend watching these sessions. Cyber-Physical Electric Power SystemsFigure 1: Power system components: generation, transmission, and distribution lines . Electric power systems are one of the most critical infrastructures that are increasingly becoming targets for cyber-attacks. In 2015, a cyberattack on Ukraine’s power grid left over 200,000 residents without electricity, marking one of the first power outages attributed to a cyber-physical attack . Another incident in 2019 saw a Western U.S. utility experiencing a denial-of-service attack that momentarily disrupted grid operations . System Under Study: Overview, Data, and Sensor Assumptions In this project, we examine a standard 39-node system depicted in Figure 2, which includes 10 generators and 46 transmission lines (branches). We base our analysis on data collected by various sensors installed throughout the system. These sensors are crucial for gathering information, and we categorize the data they collect into two main types, each with its own security implications: 1. Demand and Generation Data at Buses: This category consists of data gathered by reliable sensors placed at different buses within the system. These sensors provide real-time and accurate information about power demand and generation at these nodes. The reliability of these sensors is ensured, making the data they provide a trustworthy foundation for system analysis. These sensors guarantee a high level of confidentiality, integrity, and availability. 2. Transmission Line Overload Status: We assume unlike the demand and generation data, the information regarding the status of transmission line overloads is more susceptible to security threats, including cyber-attacks. There is a risk that this data could be manipulated, leading to incorrect reporting of overload statuses. It is crucial to scrutinize and verify this data to protect the system from the repercussions of any such attacks. Here, the assumption is that the availability and integrity of these sensors are at risk.Figure 2: The diagram of the 39-node systemFigure 3: Machine-learning-based prediction of transmission line overload Utilizing Machine Learning to Enhance System Security against Data Manipulation Threats Following the recognition of potential vulnerabilities in the data of transmission line overload statuses, in this project we aim to leverage machine learning as a tool to mitigate these risks. The objective is to develop a machine learning model that can analyze the trusted demand and generation data from the buses and use this information to reliably predict actual transmission line overload scenarios (see Figure 3). By doing so, the model will serve as a safeguard, identifying discrepancies or anomalies in branch overload status reports that could indicate cyber intrusions or attacks. 1 Part 1: Data Exploration [20 Points] In the first step, let’s explore the given datasets to get familiar with it. First, download the 39-bus system measurement data from this link. 
The 39-bus-measurements directory you downloaded contains the following data: 1. Demand and generation data in train features.csv and test public features.csv. This data contains active power demands in megawatt MW (from Pd bus1 to Pd bus39), reactive power demands in megaVar MVAR (from Qd bus1 to Qd bus39), active power generations in megawatt MW (from Pggen1 to Pg gen10), and finally reactive power generations in megaVar MVAR (from Qg gen1 to Qg gen10). 2. Transmission line overload status data in train labels.csv and test public labels.csv. This data contains labels indicating whether each branch (from is branch1 Overloaded to is branch46 Overloaded) is overloaded (1) or not (0), with each row representing a different data point. We will utilize this data in later sections for training and testing our machine learning model. Before we move on, it is important to explore the data. In this section of the project, you will be answering questions on Canvas: Please see the Canvas “Mini Project 4 – QA” Assignment under “Mini Projects 3 and 4” for the questions to answer. You can submit the quiz multiple times so feel free to look at the questions at any time, but please note that only the latest submission will be graded and considered. NOTE: Before you jump into the coding part, please check out Section 5 on how you can setup the development/testing environment for this project. This is necessary to ensure we can reproduce your results and maintain compatibility across all submissions. Moreover, if you are new to Python, machine learning, and jupyter notebook, it can help you setup the environment very quickly. In Part 1, it is recommended to use jupyter notebook for quick development/testing while Part 2 requires you to complete the code snippets to develop an acceptable solution. 1. Using the pandas library in Python, load the file train features.csv into a DataFrame named df train features. Then, answer the following questions: (a) What is the shape of df train features? Specifically, how many data points (rows) and features (columns) does this dataset contain? You can find this out using df train features.shape. [3 Points] (b) How many columns (features) in the dataset are entirely zeros? To identify these, you may utilize the pandas describe() method to review the data’s summary statistics within the DataFrame. [4 Points] (c) How many columns (features) in the dataset have constant values (excluding any columns that are entirely zeros)? [4 Points] (d) Which column (feature) in the dataset exhibits the largest variation? To determine this, use the standard deviation (std) as your measure of variation. [4 Points] 2. Using the pandas library in Python, load the file train labels.csv into a DataFrame named df train labels. Then, answer the following questions: (a) Identify the branch with the largest overload frequency. This means the branch that has the highest proportion of its data points marked as overloaded (1). Similarly, identify the branch with the smallest overload frequency. This indicates the branch that has the lowest proportion of its data points marked as overloaded (1). This exercise will help you understand data balance and the distribution of overloaded conditions across various branches. 
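Before moving on, a minimal pandas sketch along the following lines can answer most of the exploration questions above. The file paths and variable names here are assumptions based on the dataset description (the provided CSVs may use slightly different names), so treat it as a starting point rather than the required solution:

import pandas as pd

# Load the training features and labels (paths assume the 39-bus-measurements layout described above)
df_train_features = pd.read_csv("39-bus-measurements/train_features.csv")
df_train_labels = pd.read_csv("39-bus-measurements/train_labels.csv")

# 1(a) number of data points (rows) and features (columns)
print(df_train_features.shape)

# 1(b) columns that are entirely zeros
all_zero_cols = [c for c in df_train_features.columns if (df_train_features[c] == 0).all()]
print(len(all_zero_cols))

# 1(c) constant columns, excluding the all-zero ones
constant_cols = [c for c in df_train_features.columns
                 if df_train_features[c].nunique() == 1 and c not in all_zero_cols]
print(len(constant_cols))

# 1(d) column with the largest variation (standard deviation)
print(df_train_features.std().idxmax())

# 2(a) branches with the largest and smallest overload frequency
overload_freq = df_train_labels.mean()
print(overload_freq.idxmax(), overload_freq.idxmin())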
[5 Points] 2 Part 2: Completing the Provided Python Code Template In this section of the project, we aim to construct a machine learning model using the PyTorch library to predict the overload status of power system branches, a crucial task for validating the genuine behavior of our cyber-physical system, enabling the identification of discrepancies that might arise from cyber-physical attacks. To streamline the development process, a Python code template is provided here. First, clone the repository to your local machine or download it as a ZIP file and unzip it to access the code. The extracted folder, named mp4-machine-learning-template, includes the following directories: • src: This directory hosts the Python source code templates. You will need to review and complete these files as outlined in subsequent sections. Ensure you retain these files within this directory throughout the development process to guarantee the entire code functions correctly. Avoid moving the files out of this directory. Instead, modify and finalize them within this folder. • data: This directory contains the data. Specifically, the four CSV data files reside in the relative path data/39-bus-measurements/. • model: This folder will be empty upon downloading the code template. After training your model, the model and parameter files should be saved here.1. Training: During the training stage, we teach the machine learning model to identify patterns in the data. This is done by running the train.py script. See Figure 4. 2. Testing: After the training stage is complete, we evaluate the model’s ability to learn effectively. This evaluation takes place in the testing stage, which is carried out by executing the test.py script. See Figure 5. The code template in the src directory includes several key components: • Helper Script Files (DO NOT CHANGE): – train.py: Manages the model training process. See Figure 4. – test.py: Evaluates the model’s performance on test data. See Figure 5. • DataReader: Handles reading and preprocessing of input data, including tasks like loading data from CSV files, normalizing the data, and formatting it for neural network processing.Figure 4: Training workflowFigure 5: Testing workflow • PowerSystemNN: Defines the neural network architecture tailored for analyzing power system data, including specifying layers and structural elements of the model. See Figure 3. • Trainer: Oversees the neural network’s training process, including setting up the loss function and optimizer and executing the training routine with the preprocessed data. • Evaluator: Utilized for evaluating the trained model’s performance, calculating metrics such as accuracy, precision, and recall to assess the model’s predictive accuracy regarding power system branch overloads. (DO NOT CHANGE) These components collectively provide a robust framework for your project, guiding you through a structured approach to building and evaluating your machine learning model. Your challenge is to complete and potentially enhance these components, implementing the necessary logic to achieve the project’s objectives. The following detailed steps will help you navigate through the process efficiently. To begin, please download the template code from the link above and follow the outlined steps: • Note: In some of the subsequent steps, such as identifying important features in Step 3 or designing the neural network in Step 4, you should begin with an initial working solution. 
Once you have completed the entire training and testing workflow, you can return to these sections for fine-tuning. 2.1 Step 1: Review train.py [0 Points] The train.py script is complete and ready-to-use. This file serves as the core of the machine learning training process for predicting the overload status of branches in our electric power system using neural networks. It outlines the main steps required for training a model, including loading and preprocessing data, initializing and training the neural network, and saving the trained model along with its scaler and selected feature columns for future use. See Figure 4. You are expected to review and understand this script to grasp the overall structure and flow of the model training pipeline without needing to modify it. 2.2 Step 2: Review test.py [0 Points] The test.py script is also complete and ready-to-use. This file is designed to evaluate the performance of a neural network model on a test dataset. It serves as the counterpart to the training process outlined in test.py, focusing on model evaluation rather than model training. See Figure 5. You are expected to review and understand this script to grasp the overall structure and flow of the model testing pipeline without needing to modify it. 2.3 Step 3: Complete data reader.py [5 Points] The data reader.py script is a critical component of the machine learning pipeline, designed to handle data loading, preprocessing, normalization, and conversion to the appropriate formats for training neural network models. You are tasked with completing the find important features method within this class.def _find_important_features(self, df_feature: pd.DataFrame, df_labels: pd.DataFrame) -> list: selected_columns = [] # COMPLETE HERE return selected_columnsNote: Please DO NOT alter any other class methods or features in data reader.py. Only complete or modify the find important features method. 2.4 Step 4: Complete power system nn.py [20 Points] For the power system nn.py script, you are tasked with designing and implementing the neural network architecture for predicting the overload status of branches in an electric power system using PyTorch (see Figure 3). This involves specifying the structure of the neural network, including the number of layers, the size of each layer (i.e., the number of neurons), and the activation functions to be used. The neural network architecture provided here serves as an example; students are encouraged to experiment with different configurations to optimize model performance. Here are more hints: 1. Design the Neural Network Architecture: You will define the neural network’s architecture in the init method. This includes deciding on the number of layers, the number of neurons in each layer, and the type of each layer (e.g., fully connected layers, convolutional layers for other types of data, etc.). You should also choose appropriate activation functions for each layer to introduce non-linearity into the model. 2. Implement the Forward Pass: In the forward method, you will specify how the data flows through the network. This involves applying the layers and activation functions defined in the init method to the input tensor x and returning the output tensor. 3. Activation Functions: Consider the role of different activation functions (e.g., ReLU, Sigmoid, Tanh) and where they might be most effectively applied within your network to model complex relationships in the data. 4. 
Experimentation: Experiment with different network configurations and parameters (e.g., different numbers of layers, different numbers of neurons in each layer, different activation functions) to find a setup that works well for the task. 5. The example layer and activation function definitions in the comments are provided as hints. Replace number of neurons and other placeholders with specific values based on your design decisions. 6. You should begin with an initial working solution (such as 1-2 hidden layers and a reasonable number of neurons in each layer, etc.). Once you have completed the entire training and testing workflow, you can return to this section for fine-tuning. 7. In this link, there is an example of a neural network that includes the initialization and forward pass methods. 8. Please see the provided power system nn.py file for more hints/comments. 2.5 Step 5: Complete trainer.py [20 Points] In the trainer.py script, you are tasked with implementing the functionality required to train a neural network model, including initializing the training components and executing the training loop. This involves setting up the optimizer and loss function in the init method and managing the data loading, model training iterations, and optimization steps in the train model method. Here are more hints: 1. Initialize Training Components: In the init method, you must initialize the model, loss function, and optimizer. This involves specifying the type of loss function suitable for the task (e.g., Binary Cross-Entropy for binary classification tasks) and choosing an optimizer (e.g., Adam) with an appropriate learning rate. 2. Implement the Training Loop: The train model method is responsible for organizing the training process. This includes setting up a DataLoader for batching the training data, iterating over the dataset for a defined number of epochs, performing forward and backward passes through the model, computing the loss, and updating the model’s weights. 3. In this link, there is an example of setting up the optimizer and loss function. 4. Please see the provided trainer.py file for more hints/comments. 2.6 Step 6: Review evaluator.py [0 Points] The evaluator.py script, which is complete and ready-to-use, evaluates the performance of your trained neural network model on test datasets. Specifically, the evaluate method within the Evaluator class will assess the model’s prediction accuracy, precision, and recall, leveraging the sklearn.metrics library for these calculations. You are expected to review and understand this script without needing to modify it. Here are more details: 1. The Evaluate Method: In the evaluate method, we predict outcomes using the trained model on a dataset provided by a DataReader instance. The method then calculates and reports the accuracy, precision, recall, and F1 score for the predictions. 2. Using of sklearn.metrics: The accuracy score, precision score, recall score, and f1 score functions from sklearn.metrics are used to calculate the respective metrics. These metrics provide insights into the model’s performance, with accuracy indicating the overall correctness of predictions, precision showing the correctness of positive predictions, and recall reflecting the model’s ability to identify all actual positives: • Accuracy: Accuracy measures the proportion of total correct predictions (both true positives and true negatives) out of all predictions. 
Generally, a higher accuracy is better, but it can be misleading in the case of imbalanced datasets where one class dominates.

3. Calculating Metrics for Each Branch and on Average: We calculate these metrics for each power system branch individually and also compute average metrics across all branches to get a holistic view of the model's performance.

3 Deliverable and Submission Instructions

Create a zip file named <FirstName>-<LastName>-mp4.zip (e.g., Tohid-Shekari-mp4.zip) that includes all your files and submit it on Canvas. If you make multiple submissions to Canvas, a number will be appended to your filename. This is inserted by Canvas and will not result in any penalty.

• Note 1: Please ensure you use the virtual environment with the specified package versions in the requirements.txt for training and generating the submission files (see Section 5.2). This is necessary to ensure we can reproduce your results and maintain compatibility across all submissions.
• Note 2: Failure to follow the submission and naming instructions will cause a 20% points loss.

<FirstName>-<LastName>-mp4.zip
|– src
   |– data_reader.py
   |– evaluator.py
   |– power_system_nn.py
   |– test.py
   |– train.py
   |– trainer.py
|– model
   |– model.pth
   |– scaler.joblib
   |– selected_feature_columns.txt
|– public_points.txt

4 Model Evaluation [35 Points]

Your model's performance will be assessed using two test datasets: a public dataset that has been shared with you, and a private dataset that has not been shared and will be used to evaluate the model's generalizability. The overall performance of your model will be determined by the combined performance on these two datasets, calculated as follows:

Total Points = P_public × (X_public + Y_public)/2 × Z_public + P_private × (X_private + Y_private)/2 × Z_private    (1)

where P_public = 15 and P_private = 20. Additionally, X, Y, and Z will be calculated according to Tables 1, 2, and 3. This formula exists in the test.py script and it will automatically calculate your score on the public test dataset.

Table 1: Lookup table for calculation of X
Average Accuracy Across All Branches    X
[95%, 100%]                             1
[90%, 95%)                              0.9
[85%, 90%)                              0.7
Below 85%                               0

Table 2: Lookup table for calculation of Y
Average F1 Score Across All Branches    Y
[90%, 100%]                             1
[85%, 90%)                              0.9
[80%, 85%)                              0.7
Below 80%                               0

Table 3: Lookup table for calculation of Z
Number of Feature Columns Used    Z
{1, ..., 53}                      1
{54, ..., 64}                     0.9
{65, ..., 80}                     0.7
Above 80                          0

5 Appendix: Installation and Getting Started with Python, PyTorch, scikit-learn, and Jupyter Notebook

5.1 Installing Python

For this course project, you will need Python 3, with the latest versions (Python 3.10 or later) recommended for optimal performance and compatibility. There are plenty of resources you can find publicly on how to install Python on your Windows, MacOS, or Linux machine.

5.2 Setup the Project Environment

Using a virtual environment helps isolate dependencies and avoid conflicts with other projects, ensuring that your code runs smoothly and consistently on different systems. Please ensure you use the virtual environment described in this section with the specified package versions in the requirements.txt for training and generating the submission files. This is necessary to ensure we can reproduce your results and maintain compatibility across all submissions. Once you have successfully installed Python on your machine, clone this github repo which includes all the required datasets and code skeletons in parts 1 and 2 of the project.
Navigate to the source directory with terminal and run this command to create a virtual environment for the project:

python3 -m venv .venv

Before you can start using the virtual environment, you need to activate it. Activation is necessary because it temporarily modifies the PATH environment variable to include the scripts in the virtual environment's bin (or Scripts on Windows) directory. To activate the virtual environment, run (the first one on Windows, the second one on Mac/Linux):

.\.venv\Scripts\activate
source .venv/bin/activate

With the virtual environment activated, install the dependencies listed in your requirements.txt file by running:

pip3 install -r requirements.txt

5.2.1 Optional: Setting Up Jupyter Notebook for Facilitating the Data Exploration in Part 1

If you need jupyter notebook, first run this command to create a kernel based on the virtual environment and then use the next command to launch an instance of it.

python -m ipykernel install --user --name=mp4env --display-name="mp4env"
jupyter notebook

6 Appendix: Distribution of Points

The distribution of points for this project is illustrated in Figure 6.

Figure 6: Distribution of points

Resources
• PyTorch Documentation: https://pytorch.org/docs/stable/index.html
• scikit-learn Documentation: https://scikit-learn.org/stable/
• Pandas Documentation: https://pandas.pydata.org/docs/
• Jupyter Notebook Documentation: https://docs.jupyter.org/en/latest/
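For reference, a minimal PyTorch sketch of the kind of network and training loop that Steps 4 and 5 ask for is shown below. The layer sizes, number of epochs, learning rate, and the assumption of 46 branch outputs are illustrative choices, not the required configuration, and the code is deliberately simplified compared to the provided power_system_nn.py and trainer.py templates:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SimplePowerSystemNN(nn.Module):
    # Fully connected network: bus features in, one overload probability per branch out
    def __init__(self, input_dim: int, output_dim: int = 46):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, output_dim),
            nn.Sigmoid(),   # outputs in (0, 1) so they can be thresholded at 0.5
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_simple_model(model, X, y, epochs=50, lr=1e-3, batch_size=64):
    # Binary cross-entropy loss over all branches, optimized with Adam
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    criterion = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = criterion(model(xb), yb)
            loss.backward()
            optimizer.step()
    return model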


[SOLVED] Cs-6210 project-3

In this project you will implement a simple distributed service using gRPC. This service will simulate an online store which connects customer queries for items with vendors that offer those items.To get started, review the README located in the attached project3-master.zip. Follow the instructions provided to create a working implementation of the online store. When you’re submitting the project on Gradescope, please make sure you adhere to the directory structure mentioned in the README. Refer to this link on how to create a group submission on Gradescope.### Big Picture– In this project, you are going to implement major chunks of a simple distributed service using [grpc](http://www.grpc.io). – Learnings from this project will also help you in the next project as you will become familiar with grpc and multithreading with threadpool.### Overview – You are going to build a store (You can think of Amazon Store!), which receives requests from different users, querying the prices offered by the different registered vendors. – Your store will be provided with a file of of vendor servers. On each product query, your store is supposed to request all of these vendor servers for their bid on the queried product. – Once your store has responses from all the vendors, it is supposed to collate the (bid, vendor_id) from the vendors and send it back to the requesting client.### Learning outcomes – Synchronous and Asynchronous RPC packages – Building a multi-threaded store in a distributed service## Environment SetupTo set up your environment, you can choose one of the following methods: – Option 1: Follow this [link](https://grpc.io/docs/languages/cpp/quickstart/) for a cmake based setup on your host machine – Option 2: Using Docker### Option 2: Setting Up the Docker Environment (Skip the whole option 2 section if you choose option 1)If you prefer to use a Docker environment for Project 3, you can either use the pre-build docker image or build and run your own image.#### Option 2.1: Pre-build Docker image “` docker pull dcchico/aos_project3 docker run -it dcchico/aos_project3 “`#### Option 2.2: Build and Run your own image the following Dockerfile (Skip this step if you use Option 2.1: the pre-build docker image)Copy the code below into a file and name it “Dockerfile”.“`dockerfile FROM ubuntu:22.04 ENV MY_INSTALL_DIR /.local ENV PATH $MY_INSTALL_DIR/bin:$PATH RUN apt update && apt install -y cmake build-essential autoconf libtool pkg-config git zip unzip && git clone –recurse-submodules -b v1.58.0 –depth 1 –shallow-submodules https://github.com/grpc/grpc /grpc && mkdir -p $MY_INSTALL_DIR WORKDIR /grpc/cmake/build RUN cmake -DgRPC_INSTALL=ON -DgRPC_BUILD_TESTS=OFF -DCMAKE_INSTALL_PREFIX=$MY_INSTALL_DIR ../.. && make -j 4 && make install WORKDIR /project3 COPY ./project3-template /project3 CMD /bin/bash “` Building and Running the Docker ImageBuild the Docker image: “`docker build -t project3-docker .“` Run the Docker container: “`docker run -it project3-docker“`#### Troubleshooting Undefined Reference to grpc::Status::OK If you see errors like undefined reference to grpc::Status::OK while running make, add gRPC::grpc++_reflection to the linked libraries list for run_tests in /tests/CMakeLists.txt. Lines 10 to 17 should look like this: “` add_executable(run_tests client.cc run_tests.cc product_queries_util.h) target_link_libraries(run_tests Threads::Threads gRPC::grpc++ gRPC::grpc++_reflection p3protolib) add_dependencies(run_tests p3protolib) “`## How You Are Going to Implement It ( Step-by-step )1. 
Make sure you understand how GRPC- synchronous and asynchronous calls work. Understand the given helloworld [example](https://github.com/grpc/grpc/tree/master/examples/cpp/helloworld). You will be building your store with asynchronous mechanisms ONLY. 2. Establish asynchronous GRPC communication between – – Your store and user client. – Your store and the vendors. 3. Create your thread pool and use it. Where will you use it and for what? Upon receiving a client request, you store will assign a thread from the thread pool to the incoming request for processing. – The thread will make async RPC calls to the vendors – The thread will await for all results to come back – The thread will collate the results – The thread will reply to the store client with the results of the call – Having completed the work, the thread will return to the thread pool 4. Do you have your user client request reaching to the vendors now? And can you see the bids from the different vendors at your user client end? Congratulations you almost got it! Now use the test harness to test if your server can serve multiple clients concurrently and make sure that your thread handling is correct.## Keep In Mind 1. Your Server has to handle – Multiple concurrent requests from clients – Be stateless so far as client requests are concerned (once the client request is serviced it can forget the client) – Manage the connections to the client requests and the requests it makes to the 3rd party vendors. 2. Server will get the vendor addresses from a file with line separated strings 3. Your server should be able to accept `command line input` of the vendor addresses file, `address` on which it is going to expose its service and `maximum number of threads` its threadpool should have. 4. The format of the invocation is:./store 5. Remember to add references to all the resources you have used while working on the project.## Given to You 1. run_tests.cc – This will simulate real world users sending concurrent product queries. This will be released soon to you. 2. client.cc – This will be providing ability to connect to the store as a user. 3. vendor.cc – This wil act as the server providing bids for different products. Multiple instances of it will be run listening on different ip address and port. 4. `Two .proto files` – store.proto – Comm. protocol between user(client) and store(server) – vendor.proto -Comm. protocol between store(client) and vendor(server)## How to run the test setup – Go to project3 directory and build the program. – Three binaries would be created in the bin folder – `store`,`run_tests` and `run_vendors`. (Note that the location of bin folder depends on how you build the program.) – First run the command `./run_vendors vendor_addresses.txt &` to start a process which will run multiple servers on different threads listening to (ip_address:ports) from the file given as command line argument. – Then start up your store which will read the same address file to know vendors’ listening addresses. Also, your store should start listening on a port(for clients to connect to) given as command line argument. – Then finally run the command `./run_tests $IP_and_port_on_which_store_is_listening $max_num_concurrent_client_requests` to start a process which will simulate real world clients sending requests at the same time. 
- This process reads the queries from the file `product_query_list.txt`
- It will send some queries and print back the results, which you can use to verify your whole system's flow.

## Grading

This project is not performance oriented; we will only test functionality and correctness. Below is the rubric:

**Total Possible Score:** 12

| Score | Reason |
| ----- | ------ |
| +2 | Code compiles |
| +4 | Query output is correct |
| +3 | Threadpool management |
| +1 | `store-server` operates in `async` fashion |
| +1 | `store-client` operates in `async` fashion |
| +1 | Readme |

## Deliverables

Please follow the instructions carefully. The folder you hand in must contain the following:

- `Readme.txt` – text file containing a brief description of both your threadpool implementation and the communication pipelines you have built in the store (also include anything about the project that you want to tell the TAs).
- `CMakeLists.txt` – You might need to change it if you add more source files.
- `Store source files` – store.cc (must), containing the source code for store management.
- `Threadpool source files` – threadpool.h (must), containing the source code for threadpool management.
- You can add supporting files too (in addition to the two above), if you need to keep your code more structured, clean, etc.

**Submission Directory Structure:**

Readme.txt
src/CMakeLists.txt
src/store.cc
src/threadpool.h
src/any_additional_supporting_files.*

You must use the collect_submission.py to create the submission zip file. Submit the zip file on Gradescope. You can verify your submission using the autograder in Gradescope.

# FAQ

FAQ can be found [here](faq.md).


[SOLVED] Cs6035 projects / cryptography 2025

CS 6035 Projects / Cryptography

There is no required VM for this project. All that is required is a Python development environment. Make certain that you are using Python 3. To check your version of Python, open a command prompt and run the command:

python --version

(You may need to use the python3 command instead.)

For the established algorithms that you may find it necessary to use, you are allowed to reference and implement pseudocode with citation (a comment in your code will suffice). What is Pseudocode? https://en.wikipedia.org/wiki/Pseudocode

UNDER NO CIRCUMSTANCES should you copy/paste code into the project. Doing so is an honor code violation (not to mention a real world security concern) and will result in a zero (refer to the syllabus for more information).

You will complete the provided Python file project_cryptography.py and submit it to the autograder in Gradescope.

For each task we have provided prompts for further discussions. There will be threads created in Ed where students can discuss these topics. Participation is optional and will not be graded. Good luck!

Introduction

Project Files: You can download project files in Canvas/Assignments/Cryptography.

Important Notes:
Provided Code: All necessary starter code and unit tests for each task are located in the corresponding folder in the provided zip file.
Python Packages: pip install pycryptodome
Unit Tests: For each task you are also given a unit test file (it starts with test_) to help you develop and test your code. We encourage you to read up on Python unit tests, but in general, the syntax should resemble either:

python -m unittest test_task_rsa_encrypt_message

or:

python test_task_rsa_encrypt_message.py

However, keep in mind that passing the unit test(s) does NOT guarantee that your code will pass the autograder!

Task 2: RSA Warmup

Now that we've reviewed a symmetric key cryptographic algorithm, we can move on to the world of asymmetric key cryptography. RSA is perhaps the best known example of asymmetric cryptography. In RSA, the public key is a pair of integers (N, e), and the private key is an integer d. To encrypt integer m with public key (N, e), we use the formula c = m^e mod N. To decrypt cipher integer c with private key d, we use the formula m = c^d mod N. In this task you will write the code to perform the encryption and decryption steps for the RSA cryptographic algorithm. Finally, you will write the code necessary to calculate the private key d when given the factors of the public key N (i.e. p and q).
def rsa_decrypt_cipher(n: int, d: int, c: int) -> int:
    # TODO: Write the necessary code to get the message (m) from the cipher (c)
    m = 0
    return m

def rsa_encrypt_message(m: int, e: int, n: int) -> int:
    # TODO: Write the necessary code to get the cipher (c) from the message (m)
    c = 0
    return c

def rsa_calculate_private_key(e: int, p: int, q: int) -> int:
    # TODO: Write the necessary code to get the private key d from
    # the public exponent e and the factors p and q
    d = 0
    return d

You will write your code in the specified function stub(s) found in the provided project_cryptography.py file. When ready, submit this file to the Project Cryptography autograder in Gradescope.

Did you try to decrypt a cipher by using a line of Python code something like this: m = c ** d % n? Did it work? (Hint: It did not.) Why not? After all, the math is correct.

Task 3: Factor 64-bit Key

Modern day RSA keys are sufficiently large that it is impossible for attackers to traverse the entire key space with limited resources. But in this task, you're given a unique set of RSA public keys with a relatively small key size (64 bits). Your goal is to get the factors (p and q) of each key. You can use whatever methodology you want. Your only deliverable is a formatted json file containing p and q. To get your unique set of keys, you must update the task.py file located in the task folder with your 9-digit GT ID, and then run it. Find the section below in the provided task_factor_64_bit_key.py file:

##############################################
# Change this to your 9-digit Georgia Tech ID!
STUDENT_ID = '123456789'
##############################################

Running the command "python task_factor_64_bit_key.py" should output your assigned keys. Once you've calculated your p and q values, enter them into the function stub for this task. NOTE: It doesn't matter which value you specify as p and which value you specify as q.

def rsa_factor_64_bit_key() -> typing.Dict[str, typing.Dict[str, int]]:
    return {
        'test_1': {'p': 0, 'q': 1},
        'test_2': {'p': 0, 'q': 1},
        'test_3': {'p': 0, 'q': 1},
        'test_4': {'p': 0, 'q': 1},
        'test_5': {'p': 0, 'q': 1}
    }

You will write your code in the specified function stub(s) found in the provided project_cryptography.py file. When ready, submit this file to the Project Cryptography autograder in Gradescope.

If 64-bit keys aren't safe, then what size is appropriate? Is there a trade-off between size and performance?

Task 4: Weak Key Attack

Read the paper "Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices", which can be found at: https://factorable.net/weakkeys12.extended.pdf. The paper is essential to understanding this task. Do not skip it, do not skim it, read the whole of it. You are given a unique RSA public key, but the RNG (random number generator) used in the key generation suffers from a vulnerability described in the paper above.
In addition, you are given a list of public keys that were generated by the same RNG on the same system. Your goal is to write the code to get the unique private key (d) from your given public key (N, e) using only this provided information.

def rsa_weak_key_attack(given_public_key_N: int, given_public_key_e: int, public_key_list: typing.List[int]) -> int:
    # TODO: Write the necessary code to retrieve the private key d from the given public
    # key (N, e) using only the list of public keys generated using the same flawed RNG
    d = 0
    return d

You will write your code in the specified function stub(s) found in the provided project_cryptography.py file. When ready, submit this file to the Project Cryptography autograder in Gradescope.

Have you ever heard the saying, "Never roll your own crypto?" What are some ways (besides this particular attack – we don't want you to give too much away) that doing so can cause unintended problems? Can you point to any specific examples or known exploits?

Task 5: Broadcast Attack

A message was encrypted with three different 1,024-bit RSA public keys N_1, N_2, and N_3, resulting in three different ciphers c_1, c_2, and c_3. All of them have the same public exponent e = 3. You are given the three pairs of public keys and associated ciphers. Your job is to write the code to recover the original message.

def rsa_broadcast_attack(N_1: int, c_1: int, N_2: int, c_2: int, N_3: int, c_3: int) -> int:
    # TODO: Write the necessary code to retrieve the decrypted message
    # (m) using three different ciphers (c_1, c_2, and c_3) created
    # using three different public key N's (N_1, N_2, and N_3)
    m = 0
    return m

You will write your code in the specified function stub(s) found in the provided project_cryptography.py file. When ready, submit this file to the Project Cryptography autograder in Gradescope.

In addition to the low public exponent being used, this attack is possible because a textbook implementation of RSA is being used. In the real world, there are common mitigating tactics used. What are some examples? Why else are they important?

Task 6: Parity Attack

By now you have seen that RSA treats messages and ciphers as ordinary integers. This means that you can perform arbitrary math with them. And in certain situations a resourceful hacker can use this to his or her advantage. This task demonstrates one of those situations. Along with an encrypted message (c), you are given a special function that you can call – a parity oracle. This function will accept any integer value that you send to it and decrypt it with the private key corresponding to the public key that was used to encrypt the given cipher (c). The return value of the function will indicate whether this decrypted value is even (true) or odd (false). Armed with this function and a little modular arithmetic, it is possible to crack the encrypted message.
Your goal is to write the code necessary to decrypt the original message (m) from the given cipher (c).

def rsa_parity_oracle_attack(c: int, N: int, e: int, oracle: Callable[[int], bool]) -> str:
    # TODO: Write the necessary code to get the plaintext message
    # from the cipher (c) using the public key (N, e) and an oracle
    # function – oracle(chosen_c) that will give you the parity
    # of the decrypted value of a chosen cipher
    # (chosen_c) value using the hidden private key (d)
    m_int = 42

    # Transform the integer value of the message into a human readable form
    message = bytes.fromhex(hex(int(m_int)).rstrip('L')[2:]).decode('utf-8')
    return message

You will write your code in the specified function stub(s) found in the provided project_cryptography.py file. When ready, submit this file to the Project Cryptography autograder in Gradescope.

This task is a simplified example, but can you see how some potentially useful information may be inadvertently leaked by something (i.e. a protocol)? Can you find any examples?

Task 7: Padding Attack

The Advanced Encryption Standard (AES) is a set of standards for encryption set by the U.S. National Institute of Standards and Technology. One of these standards is the Cipher Block Chaining (CBC) mode of operation. CBC uses a fixed length set of bits known as a block, a unique binary sequence known as an Initialization Vector (IV), and a key. The encryption is accomplished in the following sequence: the first plaintext block is XORed with the IV, and the result is encrypted with the key. The chaining part comes into play when encrypting multiple blocks. When working on the next block you follow similar steps with one main difference: the plaintext block is XORed with the previous cipher block instead of the IV. The formula is as follows: C_i = E_K(P_i XOR C_(i-1)), with C_0 = IV. Decryption works in reverse. The formula is as follows: P_i = D_K(C_i) XOR C_(i-1).

For this task we will be working with an attack known as the padding oracle attack. The padding oracle works under the idea that the server is leaking information about the padding. With this information it is possible to both decrypt and encrypt messages.

For this one section of the assignment you will be asked to use a library outside of the standard. In this task you will use the pycryptodome library. This can be manually downloaded from this link https://github.com/Legrandin/pycryptodome. Alternatively it can be downloaded through pip with the following command: pip install pycryptodome. This task will be the only outside library used.

The first 2 steps of this extra credit will be using a simplified version of padding. In a real world application blocks will be in bits and will typically use \x00 or something similar depending on what standard is being used.

Step 1 of this task is to write a function that can encrypt a short message. You may use pycryptodome's built in encrypt function, however you must build the padding yourself.

def cbc_encrypt_128(key: bytes, IV: bytes, m: str) -> str:
    # TODO: Write the necessary code to encrypt the message
    # (m) using the provided key and IV
    # the necessary block length is 128 bits
    # pad with the byte '\x00'
    # Do Not modify code above this line
    # Code Below Here
    c = 0
    return b64encode(c).decode("utf-8")

Step 2 of this task is to write a function that can decrypt a short message.
You may use pycryptodome's built in decrypt function.

def cbc_decrypt_128(key: bytes, IV: bytes, c: bytes) -> str:
    # TODO: Write the necessary code to decrypt the cipher
    # (c) using the provided key and IV
    # Do Not modify code above this line
    # Code Below Here
    m = 0
    return m

Step 3 of this task is to write one of the core functions of an oracle which will test if the padding follows PKCS guidelines. This check is often the information that the oracle can leak. For this task you must assume that there will always be at least 1 byte of padding, but there does not always have to be a message attached.

def check_padding(padding) -> bool:
    # TODO: Write the necessary code to check
    # if the padding matches PKCS standards
    is_pkcs_padded = "This variable should be a bool value"
    return is_pkcs_padded

These steps can all be tested using the test_task_cbc_ python files. You can do so with the following commands:

python test_task_cbc_decrypt.py
python test_task_cbc_encrypt.py
python test_task_cbc_pkcs.py

https://en.wikipedia.org/wiki/Padding_oracle_attack
https://www.pycryptodome.org/

You will write your code in the specified function stub(s) found in the provided project_cryptography.py file. When ready, submit this file to the Project Cryptography autograder in Gradescope.
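As a side note on the Task 2 discussion prompt above (why m = c ** d % n is impractical even though the math is correct): Python's three-argument pow() performs modular exponentiation, reducing every intermediate result mod n, whereas c ** d first materializes an astronomically large integer. A tiny self-contained sketch with made-up textbook-sized numbers (not the assignment's keys) illustrates the idea; the modular-inverse form of pow() requires Python 3.8+:

# Toy RSA parameters for illustration only
p, q, e = 61, 53, 17
n = p * q                     # 3233
phi = (p - 1) * (q - 1)       # 3120
d = pow(e, -1, phi)           # modular inverse of e mod phi (Python 3.8+)

m = 65
c = pow(m, e, n)              # c = m^e mod n, computed with modular exponentiation
assert pow(c, d, n) == m      # recovers the original message

# (c ** d) % n gives the same answer for these toy numbers, but with real 1,024-bit
# keys the intermediate value c ** d is astronomically large and the computation
# never finishes.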


[SOLVED] Csc 354 systems programming assign #5

For this assignment, you are to write a linker/loader for the object programs produced by your assembler using the format described in the text book. Write the linker/loader as a separate program; do not include it as part of the assembler. This is to be a stand-alone program.

Input to the linker/loader will be a variable list of object program names. If my executable program were named linkghh, I would type the following on the command line:

linkghh  prog1.obj  prog2.obj  prog3.obj

This would link prog1, prog2 and prog3 together. You must read the names of the object programs from the command line to receive full credit.

Output will be the absolute machine code that would be in memory following the loading and linking process. Assume a load address of 03300 (a 5 digit hexadecimal value). Output is to be written to a file named MEMORY.DAT and displayed on the monitor in a format similar to the following:

         0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
03300    XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX
03310    XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX
03320    XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX XX
..

Execution begins at address XXXXXX

Each XX pair is replaced by 1 byte of object code produced by the linker/loader. Signify unknown memory locations by question marks; this is needed when storage is reserved but not initialized (i.e. RESW 1 would produce 3 question mark pairs). You may assume that the output file will have at most 50 lines. Display the output file to the monitor in a neat and readable format.
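Purely as an illustration of the output layout described above (the assignment does not dictate an implementation language, and the byte values below are placeholders), a small Python sketch that formats memory in the MEMORY.DAT style might look like this:

# memory is a list of two-character strings: "XX" hex pairs, or "??" for uninitialized bytes
def dump_memory(memory, load_address=0x03300, out_file="MEMORY.DAT"):
    lines = ["         0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F"]
    for offset in range(0, len(memory), 16):
        row = memory[offset:offset + 16]
        lines.append(f"{load_address + offset:05X}    " + " ".join(row))
    text = "\n".join(lines)
    with open(out_file, "w") as f:
        f.write(text + "\n")
    print(text)                # display the same dump on the monitor

# Example: three bytes of object code followed by RESW 1 (three uninitialized bytes)
dump_memory(["17", "20", "2D", "??", "??", "??"])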


[SOLVED] Csc354 systems programming assignment 3 & 4 (pass 1 & pass 2)

The following is a list of assembler features that you are expected to implement for your assembler.

Instruction Set         All the SIC/XE instructions listed in the appendix.
Instruction Formats     1, 2, 3, 4
Addressing              Simple (with or without indexing), Indirect, Immediate
Additional Features     Literal operands (types X & C), Modules, Comments
Assembler Directives    START END BYTE WORD RESB RESW BASE EQU EXTDEF EXTREF

Error Detection (15 bonus points for pass 1 and 2)
You can assume that each source program is syntactically correct. If you wish to earn bonus points perform error checking on the following:

A file 'opcodes' will contain all the instructions that are available in the assembly language. The file is in the format MNEMONIC OPCODE FORMAT (example: LDA 00 3 {3 means 3 or 4}).

Input to pass 1 will consist of a free format SIC/XE source program. Read the name of the file containing the source program from the command line. Input to pass 2 will be the intermediate file produced by pass 1. Output from each pass of the assembler will consist of the following.

Pass 1
Pass 2

Pass 1 is due Wednesday, October 30
Pass 2 is due Tuesday, December 4
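Since the opcodes file format is fixed (MNEMONIC OPCODE FORMAT, one instruction per line), a small sketch of loading it into a lookup table for pass 1 is shown below. Python is used only for illustration; the assignment does not mandate a language, and the file name 'opcodes' is taken from the description above:

# Build a mnemonic -> (opcode, format) table from the 'opcodes' file.
# Example line: "LDA 00 3"   (a stored format of 3 means the instruction may be format 3 or 4)
def load_opcode_table(path="opcodes"):
    table = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue                      # skip blank or malformed lines
            mnemonic, opcode, fmt = parts
            table[mnemonic] = (int(opcode, 16), int(fmt))
    return table

# Usage during pass 1, e.g. while assigning location-counter values:
# opcode_table = load_opcode_table()
# opcode, fmt = opcode_table["LDA"]           # -> (0x00, 3)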


[SOLVED] Csc 354 – assignment #2 write a complete module that will be used as part of your sic/xe assembler to evaluate the operand field

Write a complete module that will be used as part of your SIC/XE assembler to evaluate the operand field of an assembly language statement.

Basic Algorithm

The expression file will contain one expression per line, similar to the following (0 or more leading spaces):

@GREEN            Indirect Addressing – use RFLAG value
#GREEN            Immediate Addressing – use RFLAG value
GREEN,X           Indexed Addressing – use RFLAG value
#9                Immediate Addressing – Absolute value
GREEN+YELLOW      VALUE + VALUE and RFLAG + RFLAG
GREEN–15          VALUE – 15 and RFLAG – Absolute value
=0cABC            Character Literal – 1 character per byte
=0x5A             Hexadecimal Literal – 2 hexadecimal digits per byte

Rules for evaluating the relocatability of an expression:

Literal Table

Make sure that each module only contains items/operations directly related to that module. Fully document all parts of your program. All output should be in an easy-to-understand format. All error messages must provide as much detail as possible.

Expression Processing Example

SYMS.DAT:
PURPLE:   6    FALSE
BLACK:   -7    TRUE
PINK:     9    TRUE
WHITE:    5    FALSE

Expression File:
PURPLE+#17
@BLACK
#WHITE
=0CDEFG
WHITE,X
22
=0X5A
PINK+#3
=0X5A
PINK–#3
@#25+RED
=0C5A
#7

When a symbol is encountered, its attribute values are determined by looking up the symbol in the symbol table.
EXPRESSIONS

EXPRESSION     VALUE    RELOCATABLE    N-Bit    I-Bit    X-Bit
RED            13       RELATIVE       1        1        0
PURPLE+#17     23       ABSOLUTE       1        1        0
@BLACK         -7       RELATIVE       1        0        0
#WHITE         5        ABSOLUTE       0        1        0
WHITE,X        5        ABSOLUTE       1        1        1
#22            22       ABSOLUTE       0        1        0
PINK+#3        12       RELATIVE       1        1        0
PINK–#3        6        RELATIVE       1        1        0
@#25+RED       38       RELATIVE       1        0        0
#7             7        ABSOLUTE       0        1        0

LITERAL TABLE

NAME        VALUE        LENGTH    ADDRESS
=0CDEFG     44454647     4         1
=0X5A       5A           1         2
=0C5A       3541         2         3

Notes:
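Purely as an illustration of the bookkeeping involved (this is not the required module), one way to compute the VALUE, relocatability and N/I/X bits for a single expression is sketched below. The function name, symbol-table layout, and the restriction to two terms are assumptions; error cases such as RELATIVE + RELATIVE are not handled.

import re

# Hypothetical sketch: symtab maps SYMBOL -> {"value": int, "rflag": bool},
# where rflag True means the symbol is RELATIVE (relocatable).
def evaluate_expression(expr, symtab):
    n_bit, i_bit, x_bit = 1, 1, 0            # simple addressing by default

    if expr.endswith(",X"):                  # indexed addressing
        x_bit, expr = 1, expr[:-2]
    if expr.startswith("@"):                 # indirect addressing
        n_bit, i_bit, expr = 1, 0, expr[1:]
    elif expr.startswith("#"):               # immediate addressing
        n_bit, i_bit = 0, 1

    def term(tok):
        tok = tok.lstrip("#")
        if tok.isdigit():                    # numeric constant: absolute
            return int(tok), False
        sym = symtab[tok]                    # symbol: use its VALUE and RFLAG
        return sym["value"], sym["rflag"]

    # Split into at most two terms around '+' or '-' (enough for this example set).
    parts = re.split(r"([+-])", expr, maxsplit=1)
    value, relative = term(parts[0])
    if len(parts) == 3:
        value2, relative2 = term(parts[2])
        if parts[1] == "+":
            value += value2
            relative = relative or relative2       # REL + ABS -> RELATIVE
        else:
            value -= value2
            relative = relative and not relative2  # REL - ABS -> RELATIVE, REL - REL -> ABSOLUTE
    return value, relative, n_bit, i_bit, x_bit

# e.g. with PINK defined as value 9, RFLAG TRUE:
# evaluate_expression("PINK+#3", {"PINK": {"value": 9, "rflag": True}})
#   -> (12, True, 1, 1, 0)   # matches the PINK+#3 row in the table above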


[SOLVED] Csc 354 – assignment #1 write a complete module used to maintain the symbol table for the sic/xe assembler

Write a complete module used to maintain the symbol table for the SIC/XE assembler. Write a complete main/driver program that uses the symbol table module to process two text files.

Basic Algorithm

Each symbol table entry stores the following attributes:
SYMBOL (also referred to as a label in assembly language programming)
VALUE
RFLAG (Boolean)
IFLAG (Boolean)
MFLAG (Boolean)

Sample Program Run

Step #1 – SYMS.DAT                    //  File names are case sensitive in Linux as well as some languages
ABCD:        50     True              //  Valid – insert ABCD and all attributes into symbol table  (*)
B12_34:      -3     false             //  Valid – insert B12_ and all attributes into symbol table  (*)
a1B2_c3_D4:  +45    true              //  Valid – insert A1B2 and all attributes into symbol table  (*)
ABCD!:       33     true              //  ERROR – symbols contain letters, digits and underscore:  ABCD!
1234567890:  0      false             //  ERROR – symbols start with a letter:  1234567890
ABCD_EF:     +100   TRUE              //  ERROR – symbol previously defined:  ABCD  (+)
a1234:       3.5    FALSE             //  ERROR – symbol a1234 invalid value:  3.5
XYZ:         100    5                 //  ERROR – symbol XYZ invalid rflag:  5

(*) no message displayed for valid symbols with valid attributes – set IFLAG to true – set MFLAG to false
(+) set MFLAG attribute to true for symbol ABCD

Step #2 – search file
ABCD                                  //  Found – display symbol ABCD and all attributes
A1b2C3_xYz                            //  Found – display symbol A1B2 and all attributes
CDEF                                  //  ERROR – CDEF not found in symbol table
abc~def                               //  ERROR – symbols contain letters, digits and underscore:  abc~def
a1b2c3d4e5f6                          //  ERROR – symbols contain 10 characters maximum:  a1b2c3d4e5f6

Step #3 – view the symbol table – required output order and format

Symbol    Value   RFlag   IFlag   MFlag     //  Do not allow the data to scroll off of the screen
                                            //  Hold the output every 20 lines – Tera Term screen size
A1B2      45      1       1       0         //  Continue when user indicates to do so
ABCD      50      1       1       1
B12_      -3      0       1       0         //  Perform an inorder traversal of symbol table

Notes and Suggestions

Other Requirements
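For illustration only, the validation rules implied by the sample run above (letters/digits/underscore only, must start with a letter, at most 10 characters, significant to the first 4 characters, stored in upper case) might be sketched as follows. The function name and return convention are assumptions, not part of the assignment.

import re

def validate_symbol(symbol):
    # Returns (stored_key, None) for a valid symbol, or (None, error_message).
    if len(symbol) > 10:
        return None, f"symbols contain 10 characters maximum:  {symbol}"
    if not symbol or not symbol[0].isalpha():
        return None, f"symbols start with a letter:  {symbol}"
    if not re.fullmatch(r"[A-Za-z][A-Za-z0-9_]*", symbol):
        return None, f"symbols contain letters, digits and underscore:  {symbol}"
    return symbol[:4].upper(), None   # only the first 4 characters are significant

# e.g. validate_symbol("a1B2_c3_D4")  ->  ("A1B2", None)
#      validate_symbol("abc~def")     ->  (None, "symbols contain letters, digits and underscore:  abc~def")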


[SOLVED] Csci 4131 – internet programming homework assignment 5 – introduction to node.js

The objective of this assignment is to introduce web-server development with Node.js. We will provide most of the client-side code and some of the server-side code for this assignment, and you are required to add/complete certain functions to finish the assignment. Node.js is essentially JavaScript running a web server. It uses an event-driven, non-blocking I/O model. So far in this course we have used JavaScript for client-side scripting; for this assignment, we will use JavaScript for server-side scripting. Essentially, instead of writing the server code in Python as in HW4, we will develop a basic web server using JavaScript.

In this assignment, use FETCH and manipulate the Document Object Model of the webpage making the FETCH request. FETCH is used on the client side to create asynchronous web applications. As discussed in class and the assigned reading, it is an efficient means of requesting data from the server, receiving data from the server, and updating the web page without reloading the entire page. There are 10 pages in this assignment specification.

2 Preparation and Provided Files

I. The first step will be to get Node.js running on CSE lab machines or your personal machine. This can be accomplished on CSE lab machines as follows:
1. Log into a CSE lab machine. This can be done with VOLE or SSH.
2. Open the terminal and type the following command to add the Node.js module: module add soft/nodejs
3. The next step is to check the availability of Node.js. Type the following command to check the version of Node.js on the machine: node -v
4. This will display the currently installed version if node has loaded correctly.

II. The second step is to create a Node.js project for this assignment as follows. Open a terminal on a CSE lab machine, then:
1. Create a directory by typing the following command: mkdir yourx500id_hw05
2. Go inside the directory by typing the following command: cd yourx500id_hw05
3. Having a file named package.json in a Node.js project makes it easy to manage module dependencies and makes the build process easier. To create the package.json file, type the following command: npm init
4. This will prompt you to enter the information. Use the following guideline to enter the information (the things that you need to enter are in bold; some fields can be left blank):
   package name: (yourx500id_hw05) yourx500id_hw05
   version: (1.0.0)
   description: Assignment 5
   entry point: (createServer.js) (We will provide a createServer.js file for your use)
   test command:
   git repository:
   keywords:
   author: yourx500id
   license: (ISC)
5. After filling in the above information, you will be prompted to answer the question: "Is this ok? (yes)". Type yes and hit enter.
6. Now copy all the files that are provided for this assignment to this directory: yourx500id_hw05
7. Listing (tree) all the available files in your HW5 directory should display something similar to the following:
8. The project setup is now complete, and you are ready to start the server.

III. To start the server, type the following command: node createServer.js

This starts the server and binds it to port 9001. Now, in your browser's URL bar (i.e., address bar), type: http://localhost:9001

The following page should be displayed (below, and shown again in the screenshots below):

The following files are provided for this assignment:
1. createServer.js: This file contains the partially complete code for the Node.js server.
2. client/index.html: Home page for this application.
3. client/schedule.html: Page which displays the list of events for a day.
   ○ You need to fill in the TODO which sends a GET request to the Node.js server via FETCH, requesting that the server read and return the data in the file schedule.json, and then add the JavaScript or jQuery necessary to dynamically add the data to display a table on the schedule.html page.
4. client/addEvent.html: Form to add details about new events.
   ○ When the form is submitted it will send a POST request with the data entered on the form to your Node.js server.
5. schedule.json: This file contains lists of events in JSON format, separated by day of occurrence.

3 Functionality

Note: We advise you to complete the code changes for the server before changing the code for the client. All the server endpoints (APIs) can be tested using POSTMAN or CURL. The functionality for the client and server is specified on the following pages.

Client

All the resources related to the client have been provided in the client folder. The client folder has three HTML files (index.html, schedule.html, and addEvent.html). schedule.html has a table (id=scheduleTable) whose body is empty. You need to add code to the TODO section that dynamically populates the contents of the table after getting the list of events (a string containing the items in the table in JSON format) from the server. You need to implement the following functionality in the schedule.html file:
1. Request a list of a day's event entries from the getSchedule endpoint of your Node.js server using FETCH with the GET method.
2. Upon successful completion of the asynchronous FETCH GET request, your Node.js server will return the list of event entries.
3. Use the response returned to dynamically add rows to the table with the id scheduleTable present in the schedule.html page (create a JSON object out of the list returned and then build/render an HTML table to display the entries in the schedule). Note the format of each column in the provided images, notably that info contains a link to the url.
4. You can use jQuery, JavaScript, or a mix of both to achieve this.

Server

When the server starts, it listens for incoming connections on port 9001. This server is designed to handle only GET and POST requests.

GET requests:
1. The server has been designed to serve three different HTML pages to clients: index.html, schedule.html, addEvent.html.
2. The server can also read and write the list of event entries (in JSON format) by accessing (reading from and writing to) the file schedule.json.
3. GET request for index.html: The code for this has already been provided to you in the createServer.js file, where the server is listening on the endpoints / and /index.html. You do not need to add any code for this.
4. GET request for the schedule.html page:
   a. When the Schedule tab is clicked in the browser, a request is sent to the server to fetch the schedule.html file.
   b. You need to write code in createServer.js to listen for requests to the server's endpoint /schedule.html and return the file client/schedule.html to the client.
5. GET request to getSchedule:
   a. You need to write code to listen on an endpoint for the GET request from schedule.html (the request will be seeking the contents of the schedule.json file for a given day).
   b. You need to write code in createServer.js to read the JSON data for that day from the schedule.json file and return the JSON data to schedule.html (which will then be parsed and displayed by schedule.html in table format) when a day is selected.
   c. Your server should only return a single day's events for any request. The filtering should be done in createServer.js (it should not be done on the front end/client); a sketch of this filter-and-sort logic appears after the screenshots below.
   d. The events must be displayed in ascending order of the events' start time.
6. GET request for the addEvent.html page:
   a. When the Add Event tab button is clicked in the browser, a request is sent to the server to fetch the addEvent.html file.
   b. You need to write code in createServer.js to listen for requests to the endpoint /addEvent.html and return the file client/addEvent.html to the requesting client.
7. GET request for any other resource: If the client requests any resource other than those listed above, the server should return a 404 error. The implementation is already provided in the createServer.js file that we've provided to you.
8. POST requests:
   ● The server should process the form data posted by the client. The form we've provided, in the file addEvent.html, enables a user to enter details about a new event and update the list of events. The user enters the Event Name, Day, Start Time, End Time, Phone Number, Location, Extra Information, and URL in the form.
   ● Details for a few events are pre-populated in the schedule.json file. Your job is to add code that appends the details of a new event, sent via a POST of the data entered on the form, to this file, and then redirects the user to the schedule.html page after successful addition of the new event. This information must be maintained in sorted order by start time.
   ● To accomplish this, your server needs to listen on the /postEventEntry endpoint for a POST request from the addEvent.html file.
   ● You need to write code to:
     i. read the data "posted" (i.e., the data the user has entered in each field of the form),
     ii. add the new information to the schedule.json file in sorted order,
     iii. redirect to the file schedule.html.
     iv. The code for redirection is 302.

Please ensure that the newly added data does not change the format of the schedule.json file (i.e., there are no new fields added to Name through Extra Information, and no existing fields removed).

Tasks for bonus (Hint: it is a bonus!!! – 1 more hint in the Evaluation section)

Add a new endpoint to your server, eventInterferes, which responds to a GET request about a potential new event. This must take a day, start, and end time. It will return a potentially empty list of events which occur within the new event's time.
1. The frontend must have a new button in addEvent.html which sends a request to the eventInterferes endpoint with day, start, and end time information.
   a. If no events interfere – a new HTML element appears on the page signifying no interference.
   b. If any events interfere – a list of events which interfere must be displayed in a new HTML element.

4 Screenshots

index.html (should be displayed when you type http://localhost:9001 in your browser's URL bar after starting your Node.js server) – change the port on the server if running on CSE Labs machines

Initial display for schedule.html (displayed when the user selects the Schedule menu item)

Selection of a day in schedule.html

Add details for a new event (form displayed when Add Event is selected – and we have filled out the form)

schedule.html page after adding a new event (after the completed form above is submitted with the information shown in the middle row of the events displayed below)

addEvent.html when events intersect with an existing one.

addEvent.html when no events intersect with an existing one.
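As referenced above, here is a language-agnostic sketch of the day-filtering and start-time sorting that the getSchedule endpoint must perform, written in Python purely for illustration. The actual implementation belongs in createServer.js, and the structure assumed here (schedule.json keyed by day name, events carrying a "startTime" field) is an assumption about the provided file, not a guarantee.

import json

# Hypothetical sketch: return one day's events, sorted by start time.
def get_schedule_for_day(day, path="schedule.json"):
    with open(path) as f:
        schedule = json.load(f)           # assumed: {"Monday": [event, ...], ...}
    events = schedule.get(day, [])
    return sorted(events, key=lambda e: e["startTime"])

# e.g. get_schedule_for_day("Monday") -> Monday's events in ascending start-time order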
5 Submission Instructions

Zip or tar your entire project directory – the name of the zipped folder should be Your_x500id_hw05.

6 Evaluation

Your submission will be graded out of 100 points on the following items:
1. schedule.html is successfully returned by the server (15 points).
2. addEvent.html is successfully returned by the server (15 points).
3. Client successfully gets the list of events from the server. The events are dynamically added to the table present in the schedule.html page. (30 points)
4. POST endpoint successfully adds the details of the new event entry to the schedule.json file (30 points).
5. User is redirected to the schedule.html page after successful addition of a new event (10 points).
6. Bonus: addEvent.html has the required functionality to gather intersection information from the server (10 bonus points).
Hint: We use the Node.js moment module to check for interferences!
