Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] CMPT 475 Assignment 3

In this assignment, you will apply Feature-Driven Development (FDD) requirements engineering to the same application as in the previous two assignments. Taking all user stories you collected from your interviewees in Assignment 1, and assuming you perform all the FDD roles yourself (chief architect, chief programmer, class owner, project manager, development manager, etc.), do the following using FDD terminology and practices:

a) Develop an overall model. Specifically, you need to produce the model shape and the informal features list.
b) Build a detailed features list, consisting of prioritized and estimated major feature sets, feature sets, and features.
c) For a 20% bonus: plan the project, showing which features will be done in which iteration. You do not need to assign classes to owners; simply use each feature's estimated duration and priority to plan the schedule.

Notes:
1. You will notice that some user stories need to be broken down into multiple features. Less likely, but also possible, is that you might need to combine multiple user stories into one feature.
2. Recall that user stories in XP are sized between 1 and 3 weeks, whereas features in FDD must be 2 weeks or less.
3. There are no design-by-feature or build-by-feature phases in this assignment. The assignment stops at the end of the build-a-features-list phase, or the plan-by-feature phase if you choose to do the bonus part.

$25.00

[SOLVED] CMPT 475 Assignment 2

In Assignment 1, you came up with an estimated and ordered top-20 list of user stories. For this assignment:

a) Take the top 10 of those user stories and break each one into tasks. In Scrum, a task should take a day or less. In Extreme Programming (XP), a task should take 1, 2, or 3 days (half-day increments can be used if needed). Choose either Scrum or XP, and create your tasks accordingly.
b) Estimate each task, again according to the Agile method used.
c) Present the result (user stories, tasks, and estimates) as an indented-list Work Breakdown Structure (WBS), similar to the format shown below (the sample project is unrelated to yours; just use its format). (Sample WBS figure not reproduced here.)
d) Notice in the WBS format that the estimate for a user story equals the sum of the estimates for its tasks. Use the same convention in your WBS. Then compare the user story estimates here to the ones you submitted for Assignment 1. What is the average relative error (percentage) for the top 10 user stories if the task-level estimates are taken as the baseline?
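An indented-list WBS of this kind might look like the following sketch. The project, story names, and estimates here are invented for illustration; they are not the course's actual sample:

```text
1. As a user, I want to log in with my email.              2.0 days
   1.1 Design the login form                               0.5 day
   1.2 Implement the authentication endpoint               1.0 day
   1.3 Write integration tests                             0.5 day
2. As a user, I want to reset my password.                 3.0 days
   2.1 Generate and email a reset token                    1.0 day
   2.2 Build the reset form with validation                1.5 days
   2.3 Write tests                                         0.5 day
```

Note how each user story's estimate is the sum of its tasks' estimates, as required in part d).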

$25.00

[SOLVED] CMPT 475/982 Assignment 1

In this assignment you will practice requirements elicitation. Assume you are part of an Agile development team using Extreme Programming to build metaverse office-meeting software. Many people are familiar with the concept of an in-person office meeting, having attended such meetings themselves. The software is supposed to allow such meetings to be attended in a 3D metaverse. Imagine having to attend an important meeting that you cannot attend in person due to travel restrictions. Of course we have Zoom and the like, but how cool would it be to attend a meeting in 3D in a metaverse if it provided the same experience as an in-person meeting, and then some? Software can do almost anything, so let's make this happen!

Interview 4 people (you cannot use your CMPT 475/982 classmates) covering different age and gender demographics, and find out what they would expect such software to provide. What are the minimum features, without which the interviewee would not attend virtually? What are some cool breakthrough features that even in-person meetings cannot provide, but that we can provide virtually? Those would surely give the software an edge.

Work with each of the 4 people and capture user stories. Remember that the customer must write the user story, not you, although you are allowed to help them clarify it. Once the list is ready, the customer must rank the stories in order of importance (ordering user stories). Include this ranked list for each interviewee in an appendix to your solution. Keep the people anonymous, but include each person's age and gender. Do not include any other personal information.

You shall then analyze all user stories from all 4 people and come up with a list of the top 20 user stories, based on the most common stories among the interviewees. Some user stories from different interviewees might be very similar, even though each interviewee might write or rank them differently.

Finally, order the top-20 list based on the average ordering of the interviewees, and estimate each user story for development time. If a user story is estimated to take less than 1 week or more than 3 weeks, follow what we learned in the course to bring it within that range. User stories must be in multiples of 0.5-week units. Include this ranked and estimated top-20 list as your final answer. The list should be traceable to the lists in the appendix, showing which interviewee user stories led to each user story in the final top-20 list. Here is a sample template for the top-20 list:

ID | User Story | Estimate (weeks) | Traceability
1  | As a virtual meeting attendee, I want to see the job title of every attendee in their organization, so I get a better contextual understanding when they speak. | 1 | Interviewee 1 Story 5; Interviewee 3 Story 10; Interviewee 4 Story 2

$25.00

[SOLVED] CMPT 412 Project 3: Object Detection, Semantic Segmentation, and Instance Segmentation

The goal of this assignment is to get hands-on experience designing and training deep convolutional neural networks using PyTorch and Detectron2, targeting three fundamental computer vision tasks: object detection, semantic segmentation, and instance segmentation. Starting from a baseline config file and model, you will design an improved framework to detect planes in aerial images and obtain the segmentation mask of each plane. You will evaluate the performance of your code by uploading your predictions to this Kaggle competition (will launch later). Note that this assignment is SIGNIFICANTLY more challenging than the previous assignments. Please start early.

Most instructions are the same as before. Here we only describe the differences.
1. Please upload a pdf ({Your-SFUID}.pdf) and a zip package to Canvas as before. The zip package must contain the following layout. The data folder is large for this project; please do not include it.
○ {SFUID}/
■ lab3.ipynb (remove code output from the notebook, otherwise the file will be too large to upload)
○ In addition, the CSV file of your predicted test labels needs to be uploaded to Kaggle.

You are expected to search online, read additional documents referred to in this hand-out, and/or reverse-engineer the template code. In this assignment, you will use PyTorch, which is currently one of the most popular deep learning frameworks and is very easy to pick up. It has a lot of tutorials and an active community answering questions on its discussion forums. Implementing a powerful object detector from scratch is hard and time-consuming work. However, several open-source frameworks make it much easier to train and test the current well-known detectors. In this assignment, you will use Detectron2, which is powered by PyTorch. You can find the full documentation in this link. Part 1 has been adapted from the Detectron2 Beginner's Tutorial.

Google Colab Setup
You will be using Google Colab as before, a free environment to run your experiments. If you have your own GPU and deep learning setup, you can also use your own computer. If you choose Google Colab, here are instructions on how to get started:
1. Open Colab, click on 'File' in the top left corner and select 'Upload notebook'. Upload the notebook (lab3.ipynb) file.
2. In your Google Drive, create a new folder, for example "CMPT_CV_lab3". This is the folder that will be mounted to Colab. All outputs generated by the Colab notebook will be saved here.
3. Within the folder, create a subfolder called 'data' and put the corresponding data files there. First, copy train.json into the 'data' folder. Next, you need to copy many files under the two directories named "train" and "test". Since there are too many files to download and re-upload, it is highly recommended to use shortcuts instead: open the two links (train/ and test/), click their names, choose "Add shortcut to Drive", then specify your "CMPT_CV_lab3/data".
4. If you have a fast Internet connection, you can instead download and upload the data files here.
5. Reading the files directly from Google Drive can significantly slow down your training; please read link1 and link2 for ways to speed it up.
6. Follow the instructions in the notebook to finish the setup. This lab does not have a zip project_package file; all the links are in this hand-out. If you want to run things locally, this link is the package file with the data.

Keep in mind that you need to keep your browser window open while running Colab. Colab does not allow long-running jobs, but it should be sufficient for the requirements of this assignment (expected training time is about 20 minutes for the baseline). Training deep learning frameworks can take from several hours to several months in different settings, so writing efficient code is very important. In addition, be careful when running your code in Colab, as any disconnection will close your session; in that case, you need to rerun the initializations and package imports before continuing to other parts. Make sure you take advantage of all the options: for example, you can run all cells together by selecting the "Run all" option from the menu, or you can select multiple cells and run just those, which is very helpful if you have already trained your model and want to skip the training parts and use checkpoints instead.

The given notebook is a template to help you, but it is not necessarily the most efficient implementation. Feel free to change the code to improve efficiency, but remember to explain the idea behind any modifications to the given parts in your report.

Dataset
For this part of the assignment, you will be working with the iSAID dataset (the train/test folders above contain the data; you need not download it from the iSAID website). The full dataset consists of 655,451 object instances from 15 categories across 2,806 high-resolution images. We have modified the standard dataset to create our own, which consists of 198 training images and 72 test images with just one category (Plane). The training dataset has labels for your training, and your trained model will predict answers for the test set for evaluation.
For better final performance, you should split your training data into a training set and a validation set, then tune your hyper-parameters as in the lecture. The results on the test set (in CSV format) need to be uploaded to the Kaggle competition to learn your final performance. Note that the number of submissions to Kaggle is limited to a few per day, so try to tune your model before uploading CSV files. Also, you must make at least one submission for your final system. The best performance will be considered.

You first need to write a data loader "get_detection_data" similar to "get_balloon_dicts" in the tutorial mentioned above. This function processes the given dataset and returns a Python list. The difference in the function's inputs is that instead of 'img_dir', your function should take 'set_name' ("train", "test", and optionally "val") as input and process the corresponding data set. For the test set, you can add a condition that skips the JSON file, since 'test.json' does not exist; in that case, the annotations will be an empty list for the test images, and the other values like height and width can be obtained from the image file. You can also divide your training data into training and validation splits for your experiments.

For more details, please read this document regarding each keyword (like BoxMode). After getting the dictionaries from the function, remember to register your data and metadata in the DatasetCatalog. Finally, visualize 3 random samples of the training data using "detectron2.utils.visualizer" to make sure that the data is correct. This visualization is not required in your submission.

Note that some planes are missing from the annotations. To check whether this is caused by the dataset or by your code, you can manually open the JSON file and check the number of bounding boxes for that image.
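One possible shape for this data loader is sketched below. The field names follow Detectron2's dataset-dict format and the single "plane" class follows the assignment, but the exact JSON layout assumed here (COCO-style "images" and "annotations" keys) and the helper itself are assumptions, not the course's reference solution:

```python
import json
import os

def get_detection_data(set_name, data_dir="data"):
    """Build a Detectron2-style list of dataset dicts for 'train', 'val', or 'test'.

    Sketch assuming a COCO-like {set_name}.json; 'test.json' does not exist,
    so test images get empty annotation lists.
    """
    img_dir = os.path.join(data_dir, "test" if set_name == "test" else "train")
    if set_name == "test":
        # No test.json: list images with empty annotations. In practice you
        # would also read height/width from each image file (e.g. with PIL).
        return [{"file_name": os.path.join(img_dir, fname),
                 "image_id": i, "annotations": []}
                for i, fname in enumerate(sorted(os.listdir(img_dir)))]
    with open(os.path.join(data_dir, f"{set_name}.json")) as f:
        coco = json.load(f)
    anns_by_img = {}
    for ann in coco["annotations"]:
        anns_by_img.setdefault(ann["image_id"], []).append(ann)
    dataset_dicts = []
    for img in coco["images"]:
        objs = [{"bbox": a["bbox"],
                 "bbox_mode": 1,          # BoxMode.XYWH_ABS in Detectron2
                 "segmentation": a.get("segmentation", []),
                 "category_id": 0}        # single class: Plane
                for a in anns_by_img.get(img["id"], [])]
        dataset_dicts.append({"file_name": os.path.join(img_dir, img["file_name"]),
                              "image_id": img["id"],
                              "height": img["height"], "width": img["width"],
                              "annotations": objs})
    return dataset_dicts

# Registration (requires detectron2):
# from detectron2.data import DatasetCatalog, MetadataCatalog
# for d in ["train", "test"]:
#     DatasetCatalog.register("plane_" + d, lambda d=d: get_detection_data(d))
#     MetadataCatalog.get("plane_" + d).set(thing_classes=["plane"])
```

The dataset name "plane_train"/"plane_test" used in the registration comment is a placeholder; pick any name and use it consistently in your configs.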
Of course, missing annotations in your visualization are fine and will not be considered a bug in your code. As for the missing planes in the dataset, your network should still be able to overcome this noise and reach the baseline result. However, the missing annotations could affect the final result. If you want to improve it, you can manually add the planes to the dataset or use other learning-based methods to handle the noise in the data.

Configs
There is a large collection of baselines trained with Detectron2, which you can find in the Detectron2 Model Zoo along with their config files and pretrained models. For this part of the assignment, we expect you to use "faster_rcnn_R_101_FPN_3x.yaml" as the baseline config, which you can run to get a baseline result with the following changes to the configs: MAX_ITER = 500, BATCH_SIZE_PER_IMAGE = 512, IMS_PER_BATCH = 2, BASE_LR = 0.00025. Modify these values to improve your results.

Training and Evaluation
You need to create an output directory and train the detector using a "DefaultTrainer" and the new configs. Training the baseline should take about 20 minutes. Before training, create an output folder in the same directory to save the trained model and the corresponding files. After training, use "COCOEvaluator", "build_detection_test_loader", and "SCORE_THRESH_TEST = 0.6" to evaluate your model on the training/validation data. Consider the Average Precision (AP) at IoU=.50 (similar to PASCAL VOC) as the target metric. This value will probably be around 0.250 for the baseline without any improvements. Finally, visualize 3 random samples of the test data and save the output file "coco_instances_results.json".

Improve your model
As stated above, your goal is to create an improved object detector by making framework and config choices. A good combination of frameworks and configs can greatly improve your accuracy. For improving the detector, you should consider all of the following.
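The baseline config changes listed above might be wired up as in this sketch (requires Detectron2; the dataset name "plane_train" is an assumed placeholder for whatever name you registered):

```python
# A config sketch, not the course's exact notebook code.
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml")   # start from pretrained weights
cfg.DATASETS.TRAIN = ("plane_train",)                  # your registered dataset name
cfg.DATASETS.TEST = ()
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 500
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1                    # single class: Plane

# Training (DefaultTrainer writes checkpoints into cfg.OUTPUT_DIR):
# import os
# from detectron2.engine import DefaultTrainer
# os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
# trainer = DefaultTrainer(cfg)
# trainer.resume_or_load(resume=False)
# trainer.train()
```

For evaluation, set cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.6 before building the predictor, as the instructions require.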
1. Data Processing. The given images are very high resolution. It is not a good idea to use them directly, because planes would appear very small in the input passed to your ConvNet. One approach is to divide each image into smaller blocks, then pass each block to the ConvNet for training. Given a block, you need to look at the ground-truth bounding box information and keep only the boxes that are inside the block. This means that at test time, given a high-resolution image, you need to divide it into blocks, pass each block to the ConvNet, and then merge the resulting bounding boxes into the same file via a coordinate transformation. There are many degrees of freedom here. You may wonder what a good block size is, since planes appear at different sizes depending on the image; a natural idea is to try different block sizes. You may also wonder what to do when ground-truth bounding boxes are only partially inside a block. What you should do is determined by the performance of the system; there is no single correct answer. You should explore any ideas you might have.

2. Data augmentation. Try using different transforms by writing custom data loaders.

3. Object Detection Method. Look at the models on the MODEL_ZOO page. There are several options for the method (Faster R-CNN, RetinaNet, RPN), in addition to several architecture options for each method. Considering the pros and cons of each method, as well as the training times, pick the one best suited to the task. You can find more information on different object detection methods in the following links: [Link1, Link2].

4. Pretrained Models. You can also try the pretrained models provided in MODEL_ZOO and experiment with freezing different layers.

Finally, there are many ways to improve a model beyond what we listed above. Feel free to try your own ideas, or interesting ML/CV approaches you have read about.
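The block-splitting idea in point 1 can be sketched as follows. This is a minimal sketch of the geometry only (block layout, box filtering, coordinate merging); the helper names are invented, real code would also handle image I/O and Detectron2 dataset dicts, and dropping partially-overlapping boxes, rather than clipping them, is just one of the design choices the text leaves open:

```python
def split_into_blocks(width, height, block=512, overlap=64):
    """Return (x0, y0) top-left corners of overlapping blocks covering the image."""
    step = block - overlap
    xs = list(range(0, max(width - block, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - block, 0) + 1, step)) or [0]
    # Make sure the right and bottom edges are covered.
    if xs[-1] + block < width:
        xs.append(width - block)
    if ys[-1] + block < height:
        ys.append(height - block)
    return [(x, y) for y in ys for x in xs]

def boxes_in_block(boxes, x0, y0, block=512):
    """Keep boxes (XYWH, absolute) fully inside the block, shifted to block coords."""
    kept = []
    for (x, y, w, h) in boxes:
        if x >= x0 and y >= y0 and x + w <= x0 + block and y + h <= y0 + block:
            kept.append((x - x0, y - y0, w, h))
    return kept

def merge_block_detections(dets_per_block, origins):
    """Map per-block detections back to full-image coordinates (test time)."""
    merged = []
    for dets, (x0, y0) in zip(dets_per_block, origins):
        merged.extend((x + x0, y + y0, w, h) for (x, y, w, h) in dets)
    return merged
```

At test time you would also want to de-duplicate boxes detected in the overlap regions of adjacent blocks, e.g. with non-maximum suppression.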
Since Colab makes only limited computational resources available, we encourage you to keep training time and model size within reason.

5. Hyperparameter tuning. You can use any values for parameters such as the learning rate, number of epochs, etc. Just remember to mention the changes in your report.

The goal of this part is to implement and train a deep neural network for an image segmentation task.

Dataset
You will work with the same dataset for this part, now also using the "segmentation" key for each plane in train.json. Each segmentation is provided as a list of pixel coordinates. You need to convert them in order to obtain the corresponding input image and ground-truth mask for each plane. Convert all of the masks and cropped images to a fixed size (e.g. 128×128) before passing them to the network.

Network
We have provided an implementation of a sample network for training. You need an encoder to encode the features of the image and a decoder to generate the new image (here, the segmentation mask) from the encoded features. The provided code consists of 3 modules: conv, down, and up, which are respectively a conv layer, a conv layer with max-pooling, and a conv layer with upsampling. The MyModel class is predefined to use these layers. You need to modify the network to improve its performance. The current network consists of only 4 layers, so one option is to increase the number of layers in both the encoder and the decoder. Another option is to use skip connections between the layers; you can add these connections by modifying the modules above.

Loss Function
There are several loss functions for image segmentation (you can see a sample list here). As the baseline, Binary Cross-Entropy Loss with logits should work, since we only have one class of objects in the dataset. You can also add other loss functions to improve your results.

Training
The baseline optimizer for the notebook is SGD with a learning rate of 1e-3.
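The skip-connection idea described above can be illustrated with a tiny encoder-decoder. This is not the course's MyModel/conv/down/up code; it is a minimal sketch showing how an encoder feature map is concatenated into the decoder, with a single-logit output head suitable for BCEWithLogitsLoss:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """conv -> batchnorm -> ReLU, the basic unit of this sketch."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

class TinyUNet(nn.Module):
    """A minimal encoder-decoder with one skip connection."""
    def __init__(self):
        super().__init__()
        self.enc1 = ConvBlock(3, 16)
        self.down = nn.MaxPool2d(2)
        self.enc2 = ConvBlock(16, 32)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = ConvBlock(32 + 16, 16)   # skip: concat enc1 features
        self.head = nn.Conv2d(16, 1, 1)      # one logit channel (single class)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)                 # raw logits for BCEWithLogitsLoss
```

Training would then use something like `loss = nn.BCEWithLogitsLoss()(model(images), masks)`, matching the baseline loss named above.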
However, depending on your network, you may need to change the optimizer and hyperparameters such as the learning rate to find the best configuration for your model. Note that picking a good optimizer and learning rate can greatly affect the total training time of your network.

Evaluation
For this part, you need to compute the Intersection-over-Union (IoU) of the ground-truth mask and the predicted output as the metric. Evaluate the IoU of all the instances and report their average as the final score of your model. Visualize 3 samples from the test images, by manually cropping the planes or by using the results of the detector from Part 1. You do not have the ground-truth segmentation masks, so the masks are not required.

Having both the detection and the segmentation modules gives us the opportunity to also produce instance segmentation results for the dataset. You need only substitute the output of your trained object detector for the ground-truth bounding boxes used in Part 2, and combine the instances of each image to visualize the predicted instance segmentation. We have provided the required functions to convert predicted instance segmentation masks to a CSV file. Visualize 3 samples of your results from the test set (the ground-truth masks are not required) and submit the CSV file to Kaggle. Instances may have different intensities, as shown in the figures; however, using different colors for visualization is recommended.

A valid submission to Kaggle with higher accuracy than our baseline submission on the private leaderboard (with the Dice coefficient as the metric), together with the visualization of the samples in the report, will earn 6 marks. The public and private scores of the baseline are very close. The rest of the marks are based on the private leaderboard, which will be published after the deadline.
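The per-instance IoU and its average, as described above, can be computed as in this minimal sketch. It works on binary masks given as plain 2D lists of 0/1 for clarity; real code would operate on NumPy arrays or tensors:

```python
def mask_iou(pred, gt):
    """IoU of two equally-sized binary masks (2D lists of 0/1)."""
    inter = union = 0
    for prow, grow in zip(pred, gt):
        for p, g in zip(prow, grow):
            inter += p & g
            union += p | g
    return inter / union if union else 1.0  # two empty masks: define IoU = 1

def mean_iou(pairs):
    """Average IoU over (pred, gt) instance pairs: the reported final score."""
    return sum(mask_iou(p, g) for p, g in pairs) / len(pairs)
```

In practice you would threshold the network's sigmoid output (e.g. at 0.5) to obtain the binary predicted mask before computing IoU.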
The following table shows the marking based on relative ranking on the private leaderboard:
● Top 10%: 7 pts
● Top 30%: 6 pts
● Top 50%: 5 pts
● Top 70%: 4 pts
● Top 90%: 2 pts
● Rest: 0 pts

Suppose 73 students submitted; then "top 10%" means the ranking must be better than or equal to 7th to be given 7 pts. Note that the marking is based on the private leaderboard, which is evaluated on the test results and published after the deadline; before that, you can see the public leaderboard based on the training data. We will take a snapshot of the rankings 3 days after the due date and calculate the points. Please report your position on the public leaderboard at the time of submission, along with the ID you use on Kaggle.

In this part, you are expected to use the implemented version of Mask R-CNN from Detectron2, similar to the detection part, and compare it with your results from Part 3. Use "mask_rcnn_R_50_FPN_3x.yaml" for your configs, along with the same tricks you used to improve your detector.
● Provide the same visualization and evaluation for this part and compare the results.
● Explain the pros and cons of each method.
● Compare the detection results with the results of Part 1. How different are the results? Explain which one you think is better, and why.

Kaggle Submission
Running Part 3 in the Colab notebook creates a pred.csv file in your Google Drive. The CSV file needs to be uploaded to Kaggle.

A few useful resources
● Detectron2 Beginner's Tutorial
● Creating shortcuts to your Google Drive
● Full documentation of Detectron2 in this link
● This post is a helpful resource for understanding semantic segmentation with U-Net
● Colab implementation of U-Net using PyTorch
● This post explains the differences between segmentation-mask representations
● An explanation of the Run-Length Encoding (RLE) scheme for COCO annotations
● https://arxiv.org/abs/1505.04597 – U-Net: Convolutional Networks for Biomedical Image Segmentation
● Metrics for semantic segmentation

Tips
● The images above are just examples to represent the target task; try your best to visualize the results.
● All edits and configs that lead to a significant accuracy improvement must be listed in the report. You must include at least one ablation study for such an edit (or a combination of edits), that is, submitting results with and without the edit to Kaggle and reporting the performance improvement.

Submission Checklist
● Part 1:
○ List of the configs and modifications that you used.
○ Factors that helped improve the performance. Explain each factor in 2-3 lines.
○ Final plot of total training loss and accuracy. This should have been auto-generated by the notebook.
○ The visualization of 3 samples from the test set and the predicted results.
○ At least one ablation study to validate the above choices, i.e., a comparison of performance for two variants of a model, one with and one without a certain feature or implementation choice. In addition, provide a visualization of a sample from the test set for qualitative comparison.
● Part 2:
○ Report any hyperparameter settings you used (batch_size, learning_rate, num_epochs, optimizer).
○ Report the final architecture of your network, including any modifications you made to the layers. Briefly explain the reason for each modification.
○ Report the loss functions that you used, and plot the total training loss over the training procedure.
○ Report the final mean IoU of your model.
○ Visualize 3 images from the test set and the corresponding predicted masks.
● Part 3:
○ The name under which you submitted on Kaggle.
○ Report the best score (should match your score on Kaggle).
○ The visualization of results for 3 random samples from the test set.
○ The CSV file of your predicted test labels needs to be uploaded to Kaggle.
● Part 4:
○ The visualization and evaluation results, similar to Part 1.
○ Explain the differences between the results of Part 3 and Part 4 in a few lines.

$25.00

[SOLVED] CMPT 412 Project 2: Deep Learning with PyTorch

The goal of this assignment is to get hands-on experience designing and training deep convolutional neural networks using PyTorch. Starting from a baseline architecture we provide, you will design an improved deep net architecture to classify (small) images into 100 categories. You will evaluate the performance of your architecture by uploading your predictions to this Kaggle competition (https://www.kaggle.com/c/sfu-cmpt-image-classification-2021-fall). Kaggle allows one to create an in-class competition website with automatic ranking and evaluation (https://www.kaggle.com/c/about/inclass/overview).

Most instructions are the same as before. Here we only describe the differences.
1. Please upload a pdf ({Your-SFUID}.pdf) and a zip package to Canvas as before. The zip package must contain the following layout. The data folder is large for this project; please do not include it.
○ {SFUID}/
■ lab2.ipynb
○ In addition, the CSV file of your predicted test labels needs to be uploaded to Kaggle.

Note that the amount of coding in this assignment is again much smaller. We will not provide detailed instructions; you are expected to search online, read additional documents referred to in this hand-out, and/or reverse-engineer the template code. In this assignment you will use PyTorch, which is currently one of the most popular deep learning frameworks and is very easy to pick up. It has a lot of tutorials and an active community answering questions on its discussion forums. Part 1 has been adapted from a PyTorch tutorial on the CIFAR-10 dataset. Part 2 has been adapted from the PyTorch Transfer Learning tutorial.

Google Colab Setup
You will be using Google Colab, a free environment to run your experiments. If you have your own GPU and deep learning setup, you can also use your own computer. If you choose Google Colab, here are instructions on how to get started:
1. Open Colab, click on 'File' in the top left corner and select 'Upload notebook'. Upload the notebook (.ipynb) file from the code package.
2. In your Google Drive, create a new folder, for example "SFU_CMPT_CV_lab2". This is the folder that will be mounted to Colab. All outputs generated by the Colab notebook will be saved here.
3. Within the folder, create a subfolder called 'data'. Upload the data files (cifar100.tar.gz, train.tar.gz, test.tar.gz), which you can download here.
4. Follow the instructions in the notebook to finish the setup.

Keep in mind that you need to keep your browser window open while running Colab. Colab does not allow long-running jobs, but it should be sufficient for the requirements of this assignment (expected training time is about 5 minutes for Part 1 and 20 minutes for Part 2 with 50 epochs).

Dataset
For this part of the assignment, you will be working with the CIFAR100 dataset (already loaded above). This dataset consists of 60K 32×32 color images from 100 classes, with 600 images per class. There are 50K training images and 10K test images. The images in CIFAR100 are of size 3×32×32, i.e. 3-channel color images of 32×32 pixels. We have modified the standard dataset to create our own CIFAR100 dataset, which consists of 45K training images (450 per class), 5K validation images (50 per class), and 10K test images (100 per class). The training and validation datasets have labels, while all the labels in the test set are set to 0.
You can tune your model on the validation set and obtain your performance on the test set by uploading a CSV file to this Kaggle competition. Note that the number of submissions is limited to a few per day, so try to tune your model before uploading CSV files. Also, you must make at least one submission for your final system. The best performance will be considered.

BaseNet
We created a BaseNet that you can run to get a baseline accuracy (~23% on the test set). The starter code for this is in the BaseNet class. It uses the following neural network layers:
● Convolutional, i.e. nn.Conv2d
● Pooling, e.g. nn.MaxPool2d
● Fully-connected (linear), i.e. nn.Linear
● Non-linear activations, e.g. nn.ReLU
● Normalization, e.g. nn.BatchNorm2d

BaseNet consists of two convolutional modules (conv-relu-maxpool) and two linear layers. The precise architecture is defined below (note a typo in the input/output dimension for layer 6: it should be 10 | 5 instead of 5 | 5). Your goal is to edit the BaseNet class, or create new classes, to devise a more accurate deep net architecture. In your report, you will need to include a table similar to the one above to illustrate your final network.

Before you design your own architecture, you should get familiar with the provided BaseNet architecture, the meaning of its hyper-parameters, and the function of each layer. This PyTorch tutorial is helpful for gearing up on deep nets. Also, this lecture on CNNs by Andrej Karpathy is a good resource for anyone starting with deep nets: it covers architectural choices, the output dimensions of conv layers based on layer parameters, and regularization methods. For more information on learning rates and preventing overfitting, this lecture is a good additional read.

Improve your model
As stated above, your goal is to create an improved deep net by making judicious architecture and implementation choices.
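A network of the shape described (two conv-relu-maxpool modules followed by two linear layers, for 3×32×32 CIFAR100 inputs) might look like the sketch below. This is a hypothetical reconstruction for illustration, not the course's exact BaseNet; the real layer table is in the notebook, and the channel sizes here are invented:

```python
import torch
import torch.nn as nn

class BaseNetSketch(nn.Module):
    """Two conv-relu-maxpool modules plus two linear layers (illustrative sizes)."""
    def __init__(self, num_classes=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5),      # 3x32x32 -> 16x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),          # -> 16x14x14
            nn.Conv2d(16, 32, 5),     # -> 32x10x10
            nn.ReLU(),
            nn.MaxPool2d(2))          # -> 32x5x5  (the "10 | 5" layer)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))
```

Tracing the spatial dimensions in the comments against the layer table is a good way to check your own architecture before training.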
A reasonable combination of choices can get your accuracy above 50%. For improving the network, you should consider all of the following.

1. Data normalization. Normalizing input data makes training easier and more robust. As with normalized epipolar geometry estimation, the data here can be made zero-mean with a fixed standard deviation (sigma = 1 is the go-to choice). Use transforms.Normalize() with the right parameters to make the data well conditioned (zero mean, std dev = 1) for improved training. After your edits, make sure that test_transform has the same normalization parameters as train_transform.

2. Data augmentation. Try using transforms.RandomCrop() and/or transforms.RandomHorizontalFlip() to augment the training data. You shouldn't have any data augmentation in test_transform (val or test data is never augmented). If you need a better understanding, read through the PyTorch tutorial on transforms.

3. Deeper network. Following the guidelines laid out by this lecture on CNNs, experiment with adding more convolutional and fully-connected layers. Add more conv layers with increasing output channels, and also add more linear (fc) layers. Do not put a maxpool layer after every conv layer in your deeper network, as that leads to too much loss of information.

4. Normalization layers. Normalization layers help reduce overfitting and improve training of the model. PyTorch's normalization layers are an easy way to incorporate them in your model. Add normalization layers after conv layers (nn.BatchNorm2d). Add normalization layers after linear layers, and experiment with inserting them before or after ReLU layers (nn.BatchNorm1d).

5. Early stopping. After how many epochs should you stop training? This answer on StackExchange is a good summary of using train-val-test splits to reduce overfitting, and this blog is a good reference for early stopping. Remember: you should never use the test set for anything but the final evaluation.
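Points 1 and 2 together might look like the following transform pipelines (a sketch using torchvision; the mean/std values shown are commonly quoted CIFAR100 statistics, included here as placeholders — compute the actual per-channel statistics of your training set):

```python
import torchvision.transforms as transforms

norm = transforms.Normalize(mean=(0.5071, 0.4865, 0.4409),   # assumed CIFAR100 stats
                            std=(0.2673, 0.2564, 0.2762))

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # augmentation: random shifts
    transforms.RandomHorizontalFlip(),      # augmentation: mirror images
    transforms.ToTensor(),
    norm,                                   # zero mean, unit std per channel
])

test_transform = transforms.Compose([
    transforms.ToTensor(),                  # no augmentation at test time
    norm,                                   # same normalization as training
])
```

Note that the two pipelines share the same Normalize instance, which is exactly the "same parameters in test_transform as in train_transform" requirement from point 1.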
Looking at the train loss and validation accuracy plot, decide how many epochs to train your model for: not too many (that leads to overfitting), and not too few (or your model won't have learnt enough).

Finally, there are many ways to improve a model beyond what we listed above. Feel free to try out your own ideas, or interesting ML/CV approaches you read about. Since Colab makes only limited computational resources available, we encourage you to rationally limit training time and model size.

Kaggle Submission
Running Part 1 in the Colab notebook creates a plot.png and submission_netid.csv file in your Google Drive. The plot needs to go into your report, and the csv file needs to be uploaded to Kaggle.

Tips
● Do not lift existing code or torchvision models.
● All edits to BaseNet which lead to a significant accuracy improvement must be listed in the report. You must include at least one ablation study for such an edit (or a combination of edits), that is, submitting results with and without the edit to Kaggle, and reporting the performance improvement.

Grading Scheme
● Include a table illustrating your final network and describe it. [2 pts]
● Include plot.png from the Colab notebook, illustrating the training loss and the validation accuracy. [1 pt]
● Include at least one ablation study, reporting the performance improvement. [1 pt]
● Base performance [3 pts]. You receive 3, 2, 1, or 0 pts if the best accuracy from your network is at least 50, 45, 40, or 35%, respectively.
● Relative performance [3 pts]. Given N submissions, you receive 3, 2, 1, or 0 pts if your ranking is at or above the 75th, 50th, 25th, or 0th percentile, respectively. For example, given 100 submissions, the top 25 submissions will receive 3 points, and the next 25 submissions will receive 2 points. There may be many late submissions.
We will take a snapshot of the rankings 5 days after the due date and then compute the scores.

In this part, you will fine-tune a ResNet model pre-trained on ImageNet to classify the Caltech-UCSD Birds dataset. This dataset consists of 200 categories of birds, with 3000 images in train and 3033 images in test. Follow the instructions in the notebook and complete the sections marked #TODO. Without changing the given hyperparameters, you should achieve a train accuracy of 15.5%. With slight tweaks to the hyperparameters, you should be able to get a train accuracy above 80%. Try whatever tricks you can to avoid overfitting and achieve a test accuracy above 55%. One of your architectures must achieve at least 80% training accuracy and at least 55% testing accuracy to get full marks. You must include screenshots of the outputs of the network, showing the accuracies.

Experiment with the following at minimum:
1. Vary the hyperparameters based on how your model performs on train in the current setting. You can increase the number of epochs to 50. This should take ~20 mins on Colab.
2. Augment the data similarly to Part 1.
3. ResNet as a fixed feature extractor. The current setting in the provided notebook uses the ResNet pre-trained on ImageNet as a fixed feature extractor: we freeze the weights of the entire network except the final fully connected layer. This last fully connected layer is replaced with a new one with random weights, and only this layer is trained.
4. Fine-tuning the ResNet. To fine-tune the entire ResNet, instead of only training the final layer, set the parameter RESNET_LAST_ONLY to False.
5. Try different learning rates in the range 0.0001 to 0.01.
6. Try any tricks or techniques you saw throughout the DNN lectures, such as drop-out, data augmentation, regularization, etc.
7.
If you’re feeling adventurous, try loading other pre-trained networks available in PyTorch.

A few useful resources:
● This Kaggle tutorial is a helpful resource on using pre-trained models in PyTorch.
● This post explains how to fine-tune a model in PyTorch.
● https://arxiv.org/abs/1403.6382 – trains SVMs on features from an ImageNet-trained ConvNet and reports several state-of-the-art results.
● https://arxiv.org/abs/1310.1531 – reports similar findings.
● https://arxiv.org/abs/1411.1792 – studies transfer learning in detail.

Submission Checklist
● Part 1:
1. CSV file of your predicted test labels, uploaded to Kaggle.
2. The report, which should include the following:
1. The name under which you submitted on Kaggle.
2. Your best accuracy (should match your accuracy on Kaggle).
3. Whatever was requested in the “Grading Scheme” section above.
3. lab2.ipynb
● Part 2:
1. Report the train and test accuracy achieved by using the ResNet as a fixed feature extractor vs. fine-tuning the whole network. Include the screenshots of the accuracies reported by the code.
2. Report any hyperparameter settings you used (batch_size, learning_rate, resnet_last_only, num_epochs).
3. lab2.ipynb


[SOLVED] Cmpt412 project 1 digit recognition with convolutional neural networks

In this assignment you will implement a convolutional neural network (CNN). You will be building a numeric character recognition system trained on the MNIST dataset. We begin with a brief description of the architecture and the functions. For more details, you can refer to online resources such as http://cs231n.stanford.edu. Note that the amount of coding in this assignment is a lot less than in the other assignments. We will not provide detailed instructions; you are expected to search online and/or reverse-engineer the template code.

A typical convolutional neural network has four different types of layers.

Fully connected layer
The fully connected (or inner product) layer is the simplest layer that makes up neural networks. Each neuron of the layer is connected to all the neurons of the previous layer (see Fig 1). Mathematically it is modelled by a matrix multiplication and the addition of a bias term. For a given input x, the output of the fully connected layer is
f(x) = Wx + b
where W and b are the weights and biases of the layer. W is a two-dimensional matrix of size m × n, where n is the dimensionality of the previous layer and m is the number of neurons in this layer. b is a vector of size m × 1.

Convolutional layer
This is the fundamental building block of CNNs. Before we delve into what a convolution layer is, let's do a quick recap of convolution. As we saw in the lectures, convolution is performed using a k × k filter/kernel and a W × H image. The output of the convolution operation is a feature map. This feature map can bear different meanings according to the filter being used – for example, a Gaussian filter leads to a blurred version of the image, while the Sobel filters in the x and y directions give us the corresponding edge maps as outputs.

Terminology: each number in a filter will be referred to as a filter weight. For example, the 3×3 Gaussian filter has the following 9 filter weights.
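The figure with the nine weights did not survive extraction. As an illustration only (an assumption, since the handout's exact kernel is not shown here), a commonly used 3×3 Gaussian approximation is:

```python
import numpy as np

# A widely used 3x3 Gaussian approximation; the weights sum to 1,
# so convolving with it blurs without changing overall brightness.
gaussian_3x3 = np.array([[1, 2, 1],
                         [2, 4, 2],
                         [1, 2, 1]]) / 16.0
```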
When we perform convolution, we decide the exact type of filter we want to use and accordingly choose the filter weights. CNNs try to learn these filter weights and biases from the data: we attempt to learn a set of filters for each convolutional layer. In general, there are two main motivations for using convolution layers instead of fully-connected (FC) layers (as used in plain neural networks).
1. A reduction in parameters. In FC layers, every neuron in a layer is connected to every neuron in the previous layer. This leads to a large number of parameters to be estimated, which in turn leads to over-fitting. CNNs change that by sharing weights (the same filter is translated over the entire image).
2. Exploiting spatial structure. Images have an inherent 2D spatial structure, which is lost when we unroll the image into a vector and feed it to a plain neural network. Convolution by its very nature is a 2D operation which operates on pixels that are spatially close.

Implementation details: the general convolution operation can be represented by the following equation:
f(X, W, b) = X ∗ W + b
where W is a filter of size ki × ki × Ci, X is an input volume of size Ni × Ni × Ci, and b is a 1×1 bias term. The meanings of the individual terms are listed below; the subscript i refers to the input of the layer and the subscript o to the output of the layer.
● Ni – width of the input image
● Ni – height of the input image (the image has a square shape)
● Ci – number of channels in the input image
● ki – width of the filter
● si – stride of the convolution
● pi – number of padding pixels for the input image
● num – number of convolution filters to be learnt
A grayscale image has 1 channel, which is the depth of the image volume. For an image with Ci channels, we will learn num filters of size ki × ki × Ci.
The output of convolving with each filter is a feature map with height and width No, where
No = (Ni + 2pi − ki)/si + 1.
If we stack the num feature maps, we can treat the output of the convolution as another 3D volume/image with Co = num channels. In summary, the input to the convolutional layer is a volume with dimensions Ni × Ni × Ci and the output is a volume of size No × No × num. Figure 2 shows a graphical picture.

Pooling layer
A pooling layer is generally used after a convolutional layer to reduce the size of the feature maps. The pooling layer operates on each feature map separately and replaces a local region of the feature map with some aggregating statistic like max or average. In addition to reducing the size of the feature maps, it also makes the network invariant to small translations, meaning that the output of the layer doesn't change when the object moves a little.

In this assignment we will use only a MAX pooling layer, shown in Figure 3. This operation is performed in the same fashion as a convolution, but instead of applying a filter, we take the max value in each kernel window. Let k represent the kernel size, s the stride and p the padding; the output of the pooling function f applied to a padded feature map X then has height and width (N + 2p − k)/s + 1, with each output element equal to the maximum over its k × k window.

Activation layer – ReLU (Rectified Linear Unit)
Activation layers introduce the non-linearity in the network and give it the power to learn complex functions. The most commonly used non-linear function is the ReLU function, defined as
f(x) = max(x, 0)
The ReLU function operates element-wise on the output of the previous layer.

Loss layer
The loss layer has a fully connected layer with the same number of neurons as the number of classes. Then, to convert the output to a probability score, a softmax function is used.
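The assignment itself is written in MATLAB, but the output-size arithmetic and the max-pooling operation above are language-independent; here is a NumPy sketch of both (the function names are placeholders):

```python
import numpy as np

def conv_output_size(n_in, k, s=1, p=0):
    """Spatial output size of a conv or pooling layer: (n + 2p - k)/s + 1."""
    return (n_in + 2 * p - k) // s + 1

def maxpool2d(x, k=2, s=2):
    """Naive max pooling over a single 2D feature map (no padding):
    each output element is the max over a k x k window."""
    n = conv_output_size(x.shape[0], k, s)
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = x[i*s:i*s+k, j*s:j*s+k].max()
    return out
```

For instance, for the LeNet-style architecture below, a 28×28 input through a k=5 conv gives 24×24, and a k=2, s=2 max pool then gives 12×12.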
This operation is given by
p = softmax(Wx + b)
where W is of size C × n, n is the dimensionality of the previous layer, and C is the number of classes in the problem. This layer also computes a loss function which is to be minimized in the training process. The most common loss functions used in practice are cross entropy and negative log-likelihood. In this assignment, we will just minimize the negative log probability of the given label.

Architecture
In this assignment we will use a simple architecture based on a very popular network called LeNet (http://ieeexplore.ieee.org/abstract/document/726791/):
● Input – 1×28×28
● Convolution – k = 5, s = 1, p = 0, 20 filters
● ReLU
● MAX Pooling – k = 2, s = 2, p = 0
● Convolution – k = 5, s = 1, p = 0, 50 filters
● ReLU
● MAX Pooling – k = 2, s = 2, p = 0
● Fully connected layer – 500 neurons
● ReLU
● Loss layer
Note that all types of deep networks use non-linear activation functions for their hidden layers. If we used a linear activation function, the hidden layers would have no effect on the final result, which would become a linear (affine) function of the input values – representable by a simple 2-layer neural network without hidden layers. There are many standard convolutional neural network architectures used in the literature, for instance AlexNet, VGG-16, or GoogLeNet. They differ in the number of parameters and their configurations.

Most of the basic framework to implement a CNN has been provided; you will need to fill in a few functions. Before going ahead with the implementation, you will need to understand the data structures used in the code.

Data structures
We define four main data structures to help us implement the convolutional neural network, which are explained in the following section. Each layer is defined by a data structure, where the field type determines the type of the layer.
This field can take the values DATA, CONV, POOLING, IP, RELU, and LOSS, which correspond to data, convolution, max-pooling, inner-product/fully connected, ReLU, and loss layers respectively. The fields in each layer depend on the type of the layer.

The input is passed to each layer in a structure with the following fields:
● height – height of the feature maps
● width – width of the feature maps
● channel – number of channels / feature maps
● batch_size – batch size of the network. In this implementation, you will implement mini-batch stochastic gradient descent to train the network. The idea behind this is very simple: instead of computing gradients and updating the parameters after each image, we do so after looking at a batch of images. This parameter determines how many images the network looks at before updating the parameters.
● data – stores the actual data being passed between the layers. This is always supposed to be of size [height × width × channel, batch_size]. You can resize this structure during computations, but make sure to revert it to a two-dimensional matrix. The data is stored in column-major order; the row comes next, and the channel comes last.
● diff – stores the gradients with respect to the data; it has the same size as data.

Each layer's parameters are stored in a structure param; you do not touch this in the forward pass.
● w – weight matrix of the layer
● b – bias

param_grad is used to store the gradients computed at each layer, with the following fields:
● w – stores the gradient of the loss with respect to w.
● b – stores the gradient of the loss with respect to the bias term.

Part 1: Forward Pass
Now we will start implementing the forward pass of the network. Each layer has a very similar prototype. Each layer's forward function takes input, layer, param as arguments. The input stores the input data and information about its shape and size.
The layer stores the specifications of the layer (e.g., for a conv layer, it has k, s, p). param is an optional argument passed to layers which have weights; it contains the weights and biases used to compute the output. In every forward-pass function, you are expected to use the arguments to compute the output. You should fill in the height, width, channel, batch_size, and data fields of the output before returning from the function. Also make sure that the data field has been reshaped to a 2D matrix.

In the past, we asked for some visualization of results at every single step. However, there is no meaningful visualization until the forward functions of all the layers are implemented. Once you implement all the layers, run test_components.m, then copy/paste the visualization results into the report. Those images should look like the following. (test_components.m was provided courtesy of Matthew Marinets, from the class of 2019 Fall at SFU.) It is OK if your visualization is not exactly the same.

Q 1.1 Inner Product Layer – 1 Pts
The inner product (fully connected) layer should be implemented with the following definition:
[output] = inner_product_forward(input, layer, param)

Q 1.2 Pooling Layer – 1 Pts
Write a function which implements the pooling layer with the following definition:
[output] = pooling_layer_forward(input, layer)
input and output are the structures which have data, and the layer structure has the parameters specific to the layer. This layer has the following fields:
● pad – padding to be applied to the input
● stride – stride of the layer
● k – size of the kernel (assume a square kernel)

Q 1.3 Convolution Layer – 1 Pts
Implement a convolution layer with the following definition:
[output] = conv_layer_forward(input, layer, param)
The layer for a convolutional layer has the same fields as that of a pooling layer, and param has the weights corresponding to the layer.
Do not worry about the field “group”, which is set to 1 in this assignment.

Q 1.4 ReLU – 1 Pts
Implement the ReLU function with the following definition:
[output] = relu_forward(input)

Part 2: Back propagation
After implementing the forward propagation, we will implement the back propagation using the chain rule. Let us assume layer i computes a function fi with parameters wi; the final loss is then the composition of these functions, ℓ = fn(wn, fn−1(wn−1, … f1(w1, data))). To update the parameters we need to compute the gradient of the loss w.r.t. each of the parameters: ∂ℓ/∂wi = (∂ℓ/∂hi)(∂hi/∂wi), where hi = fi(wi, hi−1) is the output of layer i.

Each layer's back-propagation function takes input, output, layer, param as arguments and returns param_grad and input_od. output.diff stores ∂ℓ/∂hi. You are to use this to compute ∂ℓ/∂wi, to be stored in param_grad.w, and ∂ℓ/∂bi, to be stored in param_grad.b. You are also expected to return ∂ℓ/∂hi−1 in input_od, which is the gradient of the loss w.r.t. the input of the layer.

Q 2.1 ReLU – 1 Pts
Implement the backward pass for the ReLU layer in the relu_backward.m file. This layer doesn't have any parameters, so you don't have to return the param_grad structure.

Q 2.2 Inner Product layer – 1 Pts
Implement the backward pass for the inner product layer in inner_product_backward.m.

Putting the network together
This part has been done for you and is available in the function convnet_forward. This function takes the parameters, layers and input data, and generates the outputs at each layer of the network. It also returns the probabilities of the image belonging to each class. You are encouraged to look at the code of this function to understand how the data is passed through the forward pass.

Part 3: Training
The function conv_net puts both the forward and backward passes together and trains the network. This function has also been implemented.

Q 3.1 Training – 1 pts
The script train_lenet.m defines the optimization parameters and performs the actual updates on the network. This script loads a pretrained network and trains the network for 3000 iterations.
Report the test accuracy obtained in your write-up after training for 3000 more iterations. Save the refined network weights as lenet.mat in the same format as lenet_pretrained.mat. The accuracy should be above 95%.

Q 3.2 Test the network – 1 Pts
The script test_network.m has been provided; it runs the test data through the network and obtains the prediction probabilities. Modify this script to generate the confusion matrix and comment on the top two confused pairs of classes (why they are confused, etc.).

Q 3.3 Real-world testing – 1 Pts
Obtain real-world digit examples. Show the results of your system on at least 5 examples which you obtained yourself (e.g., downloaded from the Internet, scribbled yourself, or photographed yourself; do not use samples from Part 5 here). For this step, please manually crop a bounding box containing each digit, as opposed to Part 5, which requires you to find digits automatically.

Part 4: Visualization
Q 4.1 – 1 Pts
Write a script vis_data.m which loads a sample image from the data and visualizes the output of the second and third layers (i.e., the CONV layer and the ReLU layer). Show 20 images from each layer in a single figure file (use subplot and organize them in a 4 × 5 grid, as in Fig 4). To clarify: you take one image, run it through your network, and visualize 20 features of that image at the CONV layer and at the ReLU layer.

Q 4.2 – 1 Pts
Compare the feature maps to the original image and explain the differences.

Part 5: Image Classification – 2 Pts
We will now try to use the fully trained network to perform the task of Optical Character Recognition. You are provided a set of real-world images in the images folder. Write a script ec.m which will read these images and recognize the handwritten numbers. The network you trained requires a grey-scale image with a single digit in each image. There are many ways to obtain this from a real image. Here is an outline of a possible approach:
1.
Classify each pixel as a foreground or background pixel by performing simple operations like thresholding.
2. Find connected components and place a bounding box around each character. You can use a MATLAB built-in function to do this.
3. Take each bounding box, pad it if necessary, resize it to 28×28, and pass it through the network.
There might be errors in the recognition; report the output of your network in the report. For this part, you are allowed to use the graythresh, adaptthresh, bwconncomp, bwlabel, and regionprops built-in functions.

Appendix: List of all files in the project
● col2im_conv.m – Helper function; you can use this if needed
● col2im_conv_matlab.m – Helper function; you can use this if needed
● conv_layer_backward.m – Do not modify
● conv_layer_forward.m – To implement
● conv_net.m – Do not modify
● convnet_forward.m – Do not modify
● get_lenet.m – Do not modify. Has the architecture.
● get_lr.m – Gets the learning rate at each iteration
● im2col_conv.m – Helper function; you can use this if needed
● im2col_conv_matlab.m – Helper function; you can use this if needed
● init_convnet.m – Initialises the network weights
● inner_product_backward.m – To implement
● inner_product_forward.m – To implement
● load_mnist.m – Loads the training data
● mlrloss.m – Implements the loss layer
● pooling_layer_backward.m – Implemented; do not modify
● pooling_layer_forward.m – To implement
● relu_backward.m – To implement
● relu_forward.m – To implement
● sgd_momentum.m – Do not modify. Has the update equations.
● test_network.m – Test script
● train_lenet.m – Train script
● vis_data.m – Add code to visualise the filters
● lenet_pretrained.mat – Trained weights
● mnist_all.mat – Dataset

Notes
Here are some points you should keep in mind while implementing:
● All the equations above describe the functioning of the layers on a single data point. Your implementation will have to work on a small set of inputs, called a “batch”, at once.
● Always ensure that the output.data of each layer has been reshaped to a 2-D matrix.
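The Part 5 pipeline above is meant to be written with MATLAB built-ins (graythresh, bwconncomp, regionprops). Purely as an illustration of steps 1 and 2, here is an equivalent sketch in Python with SciPy; the function name is a placeholder and this is not part of the required MATLAB submission:

```python
import numpy as np
from scipy import ndimage  # label/find_objects play the role of bwconncomp/regionprops

def extract_digit_boxes(gray, thresh=0.5):
    """Threshold a [0, 1] grayscale image, label connected foreground
    components, and return one (row-slice, col-slice) bounding box per
    component. Each crop would then be padded and resized to 28x28
    (step 3) before being passed through the network."""
    fg = gray > thresh                   # step 1: foreground mask by thresholding
    labels, _ = ndimage.label(fg)        # step 2: connected components
    return ndimage.find_objects(labels)  # bounding box around each component
```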


[SOLVED] Cmpt412 project 4 augmented reality with planar homographies

In this project, you will implement an AR application step by step using planar homographies. Before we step into the implementation, we will walk you through the theory of planar homographies. In the programming section, you will first learn to find point correspondences between two images and use these to estimate the homography between them. Using this homography you will then warp images and finally implement your own AR application.

These instructions are the same as before.
1. Students are encouraged to discuss projects. However, each student needs to write code and a report all by him/herself. Code should NOT be shared or copied. Do NOT use external code unless permitted.
2. Post questions to Canvas so that everybody can share, unless the questions are private. Please look at Canvas first to see if similar questions have been posted.
3. Please upload a pdf ({Your-SFUID}.pdf) and a zip package as before. The package must contain the following layout (it will differ for the other projects but will be similar):
○ {SFUID}
■ matlab
■ Result
4. File paths: make sure that any file paths you use are relative and not absolute, so that we can easily run the code on our end. For instance, you cannot write “imread(‘/some/absolute/path/data/abc.jpg’)”. Write “imread(‘../data/abc.jpg’)” instead.
5. If a movie is too large and your file size is bigger than the upload limit, you can use an external link such as Google Drive or Dropbox.
6. As indicated below, this project has 17 pts.

A planar homography is a warp operation (a mapping of pixel coordinates from one camera frame to another) that makes the fundamental assumption that the points lie on a plane in the real world.
Under this assumption, pixel coordinates in one view of the points on the plane can be directly mapped to pixel coordinates in another camera view of the same points. There exists a homography H that satisfies equation 1 below, given two 3×4 camera projection matrices P1 and P2 corresponding to the two cameras and a plane Π:
x1 ≡ H x2 (1)
The ≡ symbol stands for “identical to”, or equal up to scale. The points x1 and x2 are in homogeneous coordinates, which means they have an additional dimension: if x1 is a 3D vector [xi yi zi]^T, it represents the 2D point [xi/zi yi/zi]^T (given in inhomogeneous coordinates). This additional dimension is a mathematical convenience for representing transformations (like translation, rotation, scaling, etc.) in a concise matrix form. Note: a degenerate case happens when the plane Π contains both cameras' centers, in which case there are infinitely many choices of H satisfying equation 1.

A very common problem in projective geometry is of the form x ≡ Ay, where x and y are known vectors and A is a matrix containing unknowns to be solved for. Given matching points in two images, our homography relationship is clearly an instance of such a problem. Note that the equality holds only up to scale (the set of equations is of the form x = λHx′), which is why we cannot use an ordinary least-squares solution such as you may have used in the past to solve simultaneous equations. A standard approach to solving these kinds of problems is the Direct Linear Transform, where we rewrite the relation as proper homogeneous equations which are then solved in the standard least-squares sense. Since this process involves disentangling the structure of the H matrix, it is a transform of the problem into a set of linear equations, thus giving it its name.

Let x1 be a set of points in an image and x2 the set of corresponding points in an image taken by another camera.
Suppose there exists a homography H such that
xi1 ≡ H xi2 (i ∈ {1…N})
where xi1 = [xi1[1] xi1[2] 1]^T are in homogeneous coordinates, xi1 ∈ x1, and H is a 3 × 3 matrix. For each point pair, this relation can be rewritten as
Ai h = 0
where h is a column vector reshaped from H, and Ai is a matrix with elements derived from the points xi1 and xi2. You can solve for h by finding the right null space via Singular Value Decomposition or Eigen Decomposition, as described below.

3.1. Eigenvalue Decomposition
One way to solve Ax = 0 when A is a square matrix is to calculate the eigenvector corresponding to the smallest eigenvalue. Using the Matlab function eig on an example matrix, we get the eigenvalues and eigenvectors: the columns of V are the eigenvectors, and each corresponding diagonal element of D is an eigenvalue. If an eigenvalue is 0, the corresponding eigenvector is the solution to our problem. However, h has dimension 9 and one point correspondence provides 2 constraints, so if you utilize all the information you will never encounter a square matrix when solving for homographies (you get 8×9 or 10×9 matrices, for example).

3.2. Singular Value Decomposition
The Singular Value Decomposition (SVD) of a rectangular matrix A is expressed as
A = U Σ V^T
Here, the columns of U are called the “left singular vectors” and the columns of V the “right singular vectors”. The matrix Σ is a rectangular matrix whose off-diagonal elements are 0 (only the diagonal elements can be non-zero). Each diagonal element σi is called a “singular value”, and these are sorted in order of magnitude. In our case, you will see 9 singular values.
● If σ9 = 0, the system is exactly determined: a homography exists and all points fit it exactly. The corresponding right singular vector in V is the solution we want.
● If σ9 > 0, the system is over-determined: a homography exists, but not all points fit it exactly (they fit in the least-squares-error sense).
The value of σ9 represents the goodness of fit. The corresponding right singular vector in V is again the solution we want.
● Usually, you will have at least four correspondences. If not, the system is under-determined; we will not deal with that case here.
The columns of U are the eigenvectors of AA^T, and the columns of V are the eigenvectors of A^T A. Consequently, if A is not a square matrix, you can also solve Ah = 0 by finding the eigenvector corresponding to the smallest eigenvalue of A^T A (instead of using the SVD).

4.1. Feature Detection, Description, and Matching (3 pts)
Before finding the homography between an image pair, we need to find corresponding point pairs between the two images. But how do we get these points? One way is to select them manually (using cpselect), which is tedious and inefficient. The CV way is to find interest points in the image pair and match them automatically. In the interest of being able to do cool stuff, we will not implement a feature detector or descriptor here, but will use built-in MATLAB methods. The purpose of an interest point detector (e.g. Harris, SIFT, SURF) is to find particular salient points in the images, around which we extract feature descriptors (e.g. MOPS). These descriptors try to summarize the content of the image around the feature points in as succinct yet descriptive a manner as possible (there is often a trade-off between representational and computational complexity for many computer vision tasks: a very high-dimensional feature descriptor would ensure good matches, but computing it could be prohibitively expensive). Matching, then, is the task of finding the descriptor, in the list of descriptors computed on a new image, that best matches the current descriptor. This could be something as simple as the Euclidean distance between the two descriptors, or something more complicated, depending on how the descriptor is composed.
For the purpose of this exercise, we shall use the widely used FAST detector in concert with the BRIEF descriptor. Now implement the following function:
[locs1, locs2] = matchPics(I1, I2)
where I1 and I2 are the images you want to match. locs1 and locs2 are N × 2 matrices containing the x and y coordinates of the matched point pairs. Use the Matlab built-in function detectFASTFeatures to compute the features, then build descriptors using the provided computeBrief function, and finally compare them using the built-in method matchFeatures. Use the function showMatchedFeatures(im1, im2, locs1, locs2, ‘montage’) to visualize your matched points and include the result image in your write-up. An example is shown in Fig. 2.

There is a threshold parameter on matchFeatures() that must be tweaked to see matches: matchFeatures(…, ‘MatchThreshold’, threshold). The default threshold is 10.0 for binary descriptors and 1.0 otherwise. BRIEF is a binary descriptor, but MATLAB fails to recognize it as such and uses 1.0, so explicitly specify the threshold as 10.0 for the BRIEF descriptor. You may also need to increase the MaxRatio parameter.

We provide you with the function:
[desc, locs] = computeBrief(img, locs_in)
which computes the BRIEF descriptor for img. locs_in is an N × 2 matrix in which each row represents the location (x, y) of a feature point. Please note that the number of valid output feature points can be less than the number of input feature points. desc is the corresponding matrix of BRIEF descriptors for the interest points.

4.2. BRIEF and Rotations (3 pts)
Let's investigate how BRIEF works with rotations. Write a script briefRotTest.m that:
● Takes cv_cover.jpg and matches it to itself rotated [hint: use imrotate] in increments of 10 degrees.
● Stores a histogram of the count of matches for each orientation.
● Plots the histogram using plot.
Visualize the feature matching result at three different orientations and include them in your write-up.
Explain why you think the BRIEF descriptor behaves this way. Next, use the feature detector detectSURFFeatures and extractFeatures(…, ‘Method’, ‘SURF’) instead, and show the results. Does the plot change significantly?

4.3. Homography Computation (3 pts)

Write a function computeH that estimates the planar homography from a set of matched point pairs:

function [H2to1] = computeH(x1, x2)

x1 and x2 are N × 2 matrices containing the coordinates (x, y) of point pairs between the two images. H2to1 should be a 3 × 3 matrix for the best homography from image 2 to image 1 in the least-squares sense. You can use eig or svd to get the eigenvectors, as described above in this handout. For at least one pair of images, pick a certain number of points (say, 10 random points) from the first image, and show the corresponding locations in the second image after the homography transformation.

4.4. Homography Normalization (2 pts)

Normalization improves the numerical stability of the solution, and you should always normalize your coordinate data. Normalization has two steps:
1. Translate the mean of the points to the origin.
2. Scale the points so that the average distance to the origin (you could also try "the largest distance to the origin" for comparison) is sqrt(2).

This is a linear transformation and can be written as follows:

x'1 = T1 x1,  x'2 = T2 x2

where x'1 and x'2 are the normalized homogeneous coordinates of x1 and x2, and T1 and T2 are 3 × 3 matrices. The homography H from x'2 to x'1 computed by computeH satisfies:

x'1 = H x'2

Substituting T1 x1 for x'1 and T2 x2 for x'2 gives T1 x1 = H T2 x2, so the denormalized homography is H2to1 = T1⁻¹ H T2. By following the above procedure, implement the function computeH_norm:

function [H2to1] = computeH_norm(x1, x2)

This function should normalize the coordinates in x1 and x2 and call computeH(x1, x2).
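The least-squares estimate and the normalization trick just described can be sketched together. This is an illustrative NumPy version using my own row convention for the DLT system; the assignment implementation is in MATLAB:

```python
import numpy as np

def normalize(pts):
    """Similarity transform T mapping pts so the centroid is at the origin
    and the average distance to the origin is sqrt(2). Returns (T, pts_norm)."""
    mean = pts.mean(axis=0)
    scale = np.sqrt(2) / np.mean(np.linalg.norm(pts - mean, axis=1))
    T = np.array([[scale, 0, -scale * mean[0]],
                  [0, scale, -scale * mean[1]],
                  [0, 0, 1]])
    ones = np.ones((len(pts), 1))
    return T, (np.hstack([pts, ones]) @ T.T)[:, :2]

def compute_h(x1, x2):
    """DLT: homography H with x1 ~ H x2, from N >= 4 point pairs (N x 2).
    Two equations per correspondence; h is the least-squares null vector."""
    A = []
    for (u1, v1), (u2, v2) in zip(x1, x2):
        A.append([-u2, -v2, -1, 0, 0, 0, u1 * u2, u1 * v2, u1])
        A.append([0, 0, 0, -u2, -v2, -1, v1 * u2, v1 * v2, v1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)

def compute_h_norm(x1, x2):
    """Normalize both point sets, fit H in normalized space, denormalize:
    H2to1 = inv(T1) @ H @ T2."""
    T1, n1 = normalize(np.asarray(x1, float))
    T2, n2 = normalize(np.asarray(x2, float))
    return np.linalg.inv(T1) @ compute_h(n1, n2) @ T2
```

Remember that H is only defined up to scale, so compare or apply it after dividing by H[2, 2] (or after normalizing homogeneous coordinates).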
Again, for at least one pair of images, pick a certain number of points (say, 10 random points) from the first image, and show the corresponding locations in the second image after the homography transformation.

4.5. RANSAC (2 pts)

The RANSAC algorithm can fit essentially any model to noisy data. You will implement it for (planar) homographies between images. Remember that a minimum of 4 point-pairs is required to compute a homography.

Write a function:

function [bestH2to1, inliers] = computeH_ransac(locs1, locs2)

where bestH2to1 is the homography H with the most inliers found during RANSAC. H will be a homography such that if x2 is a point in locs2 and x1 is the corresponding point in locs1, then x1 ≡ H x2. locs1 and locs2 are N × 2 matrices containing the matched points. inliers is a vector of length N with a 1 at those matches that are part of the consensus set, and 0 elsewhere. Use computeH_norm to compute the homography. For at least one pair of images, visualize the 4 point-pairs that produced the largest number of inliers, along with the inlier matches selected by the RANSAC algorithm.

4.6. HarryPotterizing a Book (2 pts)

Write a script HarryPotterize.m that:
1. Reads cv_cover.jpg, cv_desk.png, and hp_cover.jpg.
2. Computes a homography automatically using matchPics and computeH_ransac.
3. Warps hp_cover.jpg to the dimensions of the cv_desk.png image using the provided warpH function.
4. At this point you should notice that although the image is being warped to the correct location, it is not filling up the same space as the book. Implement the following function, which modifies hp_cover.jpg to fix this issue: function [composite_img] = compositeH(H2to1, template, img)

5. Creating Your Augmented Reality Application (2 pts)

Now, with the code you have, you are able to create your own augmented reality application. What you are going to do is HarryPotterize the video ar_source.mov onto the video book.mov.
More specifically, you are going to track the computer vision textbook in each frame of book.mov, and overlay each frame of ar_source.mov onto the book in book.mov. Please write a script ar.m to implement this AR application and save your result video as ar.avi in the result/ directory. You may use the function loadVid.m that we provide to load the videos.

Your result should be similar to the LifePrint project. You will be given full credit if you put the video together correctly; it is OK to have strange frames here and there. The warped images may also fluctuate, as it is difficult to keep the results exactly temporally consistent, which is also OK. See Figure 5 for an example frame of what the final video should look like.

Note that the book and the videos we have provided have very different aspect ratios (the ratio of the image width to the image height). You must either use imresize or crop each frame to fit onto the book cover. The number of frames may be slightly different, and you do not have to worry about the glitch at the end of the video.

Cropping an image in MATLAB is easy: you just need to extract the rows and columns you are interested in. For example, if you want to extract the subimage from point (40, 50) to point (100, 200), your code would look like img_cropped = img(50:200, 40:100). In this project, you must crop each frame so that only the central region of the image is used in the final output. See Figure 6 for an example.
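Zooming back out to Section 4.5: the estimator behind computeH_ransac (and hence behind the per-frame tracking above) is a generic RANSAC loop. Here is a toy NumPy sketch of that loop, demonstrated on robust line fitting rather than homographies (illustrative Python with my own helper names; for homographies, sample_size would be 4, fit would be computeH_norm, and residuals would be the reprojection error):

```python
import numpy as np

def ransac(points, sample_size, fit, residuals, tol, num_iters=200, rng=None):
    """Generic RANSAC: repeatedly fit a model to a minimal random sample
    and keep the model with the largest consensus (inlier) set."""
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(num_iters):
        idx = rng.choice(len(points), size=sample_size, replace=False)
        model = fit(points[idx])
        inliers = residuals(model, points) < tol
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy use: robust fit of y = 2x + 1 with roughly a third of the points
# corrupted by gross outliers.
gen = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1
y[::3] += gen.uniform(20, 30, size=len(y[::3]))          # gross outliers
pts = np.column_stack([x, y])
fit = lambda p: np.polyfit(p[:, 0], p[:, 1], 1)           # (slope, intercept)
res = lambda m, p: np.abs(np.polyval(m, p[:, 0]) - p[:, 1])
model, inliers = ransac(pts, sample_size=2, fit=fit, residuals=res,
                        tol=0.5, rng=1)
```

The homography case differs only in the model and residual: a 4-pair sample fed to the normalized DLT, with inliers judged by reprojection distance in pixels.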

$25.00 View

[SOLVED] Cs6250 project 5 bgp hijacking attacks latest

In this project, using an interactive Mininet demo [1], we will explore some of the vulnerabilities of the Border Gateway Protocol (BGP). In particular, we will see how BGP is vulnerable to abuse and manipulation through a class of attacks called BGP hijacking attacks. A malicious Autonomous System (AS) can mount these attacks by making false BGP announcements from a rogue AS, causing victim ASes to route their traffic bound for another AS through the malicious AS. This attack succeeds because the false advertisement exploits BGP routing behavior by advertising a shorter path to reach a particular prefix, which causes victim ASes to attempt to use the newly advertised (and seemingly better!) route.

A. Browse this paper as a reference for subsequent tasks and for some important background on Prefix Hijack Attacks.
B. Refer to this resource on configuring a BGP router with Quagga.
C. Check out the following example configurations: Example 1 and Example 2.
D. Project Intro Presentation Video Link and Slides from CS6250 in Spring 2019 (there, Project 7).

The demo creates the network topology shown below, consisting of four ASes and their peering relationships. AS4 is the malicious AS that will mount the attack. Once again, we will be simulating this network in Mininet; however, there are some important distinctions from our previous projects.

In this setup, each container is not a host but an entire autonomous system. Each AS runs a routing daemon (quagga), communicates with other ASes using BGP (bgpd), and configures its own isolated set of routing entries in the kernel (zebra). Each AS has an IP address, which is the IP address of its border router.

NOTE: In this topology, solid lines indicate peering relationships and the dotted boxes indicate the prefix advertised by that AS.

1. First, download and unzip the Project-5 files (modify permissions if necessary).
2. Next, in the Project-5 directory, start the demo using the following command:
   sudo python bgp.py
3. After loading the topology, the Mininet CLI should be visible. Keep this terminal open throughout the experiment.
4. Start another terminal and navigate to the Project-5 directory. We will use this terminal to start a remote session with AS1's routing daemon:
   ./connect.sh
5. This script will start quagga, which will require access verification. The password is: en
6. Next, use the following commands to start the admin shell and view the routing table entries for AS1:
   en    (you will be prompted for the password again; retype en)
   sh ip bgp
7. You should see output very much like the screen grab below. In particular, notice that AS1 has chosen the path via AS2 and AS3 to reach the prefix 13.0.0.0/8.
9. Next, let's verify that network traffic is traversing this path. Open a third terminal and navigate to the Project-5 directory. In this terminal you will start a script that continuously makes web requests from a host within AS1 to a web server in AS3:
   ./website.sh
10. Leave this terminal running as well, and open a fourth terminal, also in the Project-5 directory. Now we will start a rogue AS (AS4) that connects directly to AS1 and advertises the same 13.0.0.0/8 prefix. This allows AS4 to hijack the prefix due to the shorter AS path length:
   ./start_rogue.sh
11. Return to the third terminal window and observe the continuous web requests. After the BGP routing tables converge on this simple network, you should eventually see the attacker start responding to requests from AS1, rather than AS3.
12. Additionally, return to the second terminal and rerun the command to print the routing table. You may need to repeat the steps to establish the remote session if it closes due to inactivity.
You should now see the fraudulent advertisement for the 13.0.0.0/8 prefix in the routing table, in addition to the longer, unused path to the legitimate owner.
13. Finally, let's stop the attack by switching to the fourth terminal and using the following command:
   ./stop_rogue.sh
14. You should notice a fairly quick re-convergence to the original legitimate route in the third terminal window, which should now be delivering the original traffic again. Additionally, you can check the BGP routing table again to see that the original path is being traversed.

As demonstrated in Part 2, network virtualization can be very useful for demonstrating and analyzing network attacks that would otherwise require a large amount of physical hardware. In Part 3, you are tasked with replicating a different topology and attack scenario to demonstrate the effects of a different instance of a Prefix Hijack Attack.

1. To start, we recommend making a working copy of the code provided to you in the Project-5 directory. You will likely find this project more approachable if you spend time exploring the demo code and fully understanding how each part works, rather than immediately trying to edit the code.
2. Next, refer to the paper referenced in Part 1A and locate Figure 1.
3. Edit the working copy of the demo code you just made to reconstruct the topology in Figure 1. When complete, you should be able to use the commands from Part 2 to explore the routing tables generated by each border router. For our purposes, you can assume:
   a. All links are bidirectional peering links.
   b. Each AS advertises a single prefix: AS1: 1.0.0.0/8, AS2: 2.0.0.0/8, AS3: 3.0.0.0/8, AS4: 4.0.0.0/8, AS5: 5.0.0.0/8, AS6: 1.0.0.0/8. (Note: We highly recommend using these prefix values in your configuration to simplify grading and for consistency in communication and discussion on Piazza. However, you may use any valid prefix values in your configuration.)
   c. The number of hosts in each AS is the same as in the provided code.
4. Do not change the passwords in the zebra and conf files. If you change the passwords, the auto-grader will fail, resulting in a 0 for the assignment.
5. Next, locate Figure 2 in the referenced paper. Draw a topology map using any drawing tool of your choice. You may hand-draw your topology with pencil and paper and scan or photograph your drawing. All configuration values drawn on the map must be legible. Save your topology diagram in PDF format with the name fig2_topo.pdf. You must use this filename as part of your submission to receive credit for your diagram.
6. Continue to adapt the code in your working copy to simulate this hijack scenario. When complete, you should be able to use the commands from Part 2 to start a rogue AS and demonstrate a similar change in routing table information as was shown in Part 2.
7. Finally, create a compressed file (zip format) named Part3.zip containing your entire attack demonstration. You must include all of the files necessary to run your demo in an empty directory; do NOT assume that we will provide any of the files necessary to run your demonstration for grading purposes. Include your fig2_topo.pdf file in your Part3.zip.

• When viewing the BGP tables, note the "Status codes". Give your topology enough time to converge before recreating the hijack simulation portion. It may take a minute or so for your topology to fully converge; you may continue to check the BGP tables to determine whether the topology has converged.
• The order in which you set up your peering links using addLink() matters. In previous projects, we manually selected which port on the switch to use via an optional parameter to the addLink() call. In this project, you will not use those options; therefore, the order of the links matters.
• Some of the commands in the boilerplate code may not be necessary to complete Part 3. Some of it is there just so that you know it exists.
• Check for more descriptive errors in the /logs directory. See the zebra files for the location of additional log files.
• Run "links" on the Mininet CLI terminal to see whether all links are connected and report OK OK.
• Run "net" on the Mininet CLI terminal to see whether your ethernet links are connected as you expect.
• Run "ifconfig -a" on all routers and hosts to ensure that all IP addresses are assigned correctly.
• Run "sh ip bgp" and "sh ip bgp summary" on all routers.
• The command pingall may not work, and that is fine.
• website.sh may sometimes hang intermittently. If this happens, restart the simulation. We are aware of this issue and keep it in mind as we grade your submission. You will not lose points if website.sh hangs, so long as we are eventually able to run the simulation.
• Watch the intro presentation and read through the additional debugging tips on the intro slides.

This part of the project is optional, but it is worth extra credit if you complete it. Your task here is to design and implement a countermeasure to the attack demonstrated in Part 3. We recommend you start by creating a complete copy of the code you produced in Part 3 and pasting it into a fresh working directory.

Next, design and implement a countermeasure to the attack from Part 3. When complete, you should be able to use the commands from Part 2 to launch the simulation and start a rogue AS that mounts a Prefix Hijack attack as in Part 3. In this case, the attack should fail, and you should be able to observe the victim AS's routing table maintain (or revert back to) its original state from before the attack commences.

The paper referenced in Part 1A describes some example countermeasures, and you can implement or modify them as required for this project. You are also free to explore other methods; this Part is open-ended.
The first stipulation is that the solution you implement must be applicable in the general case, meaning it is not a hard-coded defense. Your defense should work regardless of which AS is attacked, which AS mounts the attack, and which prefix is targeted. The second is that the countermeasure must be demonstrable on the course VM. It is permissible to use additional libraries in the development of your countermeasure; however, they must be documented so the grader can install them prior to grading your code.

As was done in Part 3, create a compressed file (zip format) named Part4.zip containing your entire countermeasure demonstration. You must include all of the files necessary to run your demo in an empty directory; do NOT assume that we will provide any of the files necessary to run your demonstration for grading purposes. Additionally, you should provide a supplementary document (PDF format) named Countermeasure.pdf. This document should provide the following:
1. A brief summary of how your solution counters the attack.
2. A list of files you modified from Part 3, or created, in order to implement the countermeasure.
3. A brief description of what is changed in each file (or the purpose of newly created files), including how it functions as part of the larger system.
4. Instructions for demonstrating the countermeasure, including instructions for installing required software/libraries.
5. A brief closing containing any additional information the grader may need to reproduce your countermeasure, and contact information (if different from your GT student email address) in case the grader has questions.

For this project, you need to turn in the Part3.zip file you created in Part 3. Include your topology diagram fig2_topo.pdf in Part3.zip. If you chose to pursue the extra credit, also turn in the Part4.zip and Countermeasure.pdf files you created in Part 4.
Please upload Part3.zip, Part4.zip, and Countermeasure.pdf directly into Canvas; there is no need to zip these three files into another zip. So please make sure you submit these files on Canvas directly.

While discussion of the project in general is always permitted on Piazza, you are not permitted to share the code you generate for Part 3 or Part 4. You may quote snippets of the unmodified skeleton code provided to you when discussing the project. You may not share the topology diagram you created in Part 3, Step 5.

Rubric (out of 150 points)
• Submission (5 pts): for turning in all the correct demo files with the correct names, with significant effort made towards completing the project.
• Fig 2 Topo Diagram (5 pts): for turning in the correctly named topology diagram file fig2_topo.pdf with legible configuration values.
• Attack Demo (140 pts): for accurately recreating the topology, links, router configuration, and attack per the instructions. Partial credit is available for this rubric item.
• Extra Credit (50 pts): for correctly designing and implementing a countermeasure to the attack from Part 3. Submissions MUST include both the code and documentation; extra credit will not be considered for code without accompanying documentation. Some partial credit may be provided for a thorough Countermeasure.pdf identifying a viable solution without accompanying code, or with non-working code, if the documentation acknowledges the lack of code or the failing code.

[1] This project was inspired by a Mininet demo originally presented at SIGCOMM 2014.
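As background for Parts 2 and 3: the prefix announcement at the heart of the attack is ordinary Quagga configuration, which is what start_rogue.sh sets up under the hood. A minimal, hypothetical bgpd.conf fragment for a rogue AS originating someone else's prefix might look like this (illustrative AS numbers and addresses only; the demo's actual config files differ):

```
! bgpd.conf (illustrative fragment, not the demo's real config)
router bgp 4
  bgp router-id 9.0.0.4
  network 13.0.0.0/8            ! originate the victim's prefix
  neighbor 9.0.0.1 remote-as 1  ! peer directly with AS1
```

Because the rogue AS originates 13.0.0.0/8 directly, the victim sees an AS path of length 1 versus the legitimate multi-hop path, and BGP's preference for shorter AS paths does the rest.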

$25.00 View

[SOLVED] program C/CPython

Homework 3: Tiny Calculator (parsing with YACC/Bison)

Overview

This assignment builds on Homework 2 and focuses on extending your knowledge of compiler design. Specifically, you will complete the syntax analysis phase by combining the deliverables from Homework 2 (lexical analysis using flex) with this assignment's deliverable (syntax analysis using yacc or its GNU version, bison).

Feel free to seek ChatGPT help under the following guidelines:
• Use it for suggestions, but ensure the work you submit is your own.
• Share your experience in detail in your README or submission comments if ChatGPT played a significant role in solving the problem.

Objectives

Similar to Homework 2 (Lexical Analysis), the objectives of this assignment are:
1. Understanding the syntax analysis process as the application of the grammar of any programming language.
2. Recognizing that many business applications outside of the programming language domain need lexical and syntax analysis to handle user inputs in a more rigorous and user-friendly way.
3. Learning aged but extremely useful and popular tools, (f)lex and yacc (or bison). In Homework 3, you are asked to use both (f)lex and bison (yacc).

Before you start: Preparation

Read "Section 4.1 Introduction" in the textbook to get the basic ideas and background knowledge. The front end of the compiler is syntax analysis, which consists of two parts: lexical analysis and syntactic analysis. In the previous homework, you used the Unix (f)lex tool to understand how lexical analysis works. In this homework, you are asked to combine the deliverable of the previous homework with the deliverable of this homework to complete the full syntax analysis.
You will be using yet another Unix tool, yacc (Yet Another Compiler Compiler), or bison (yacc's GNU version), for syntax analysis (parsing).

Review the class notes on the yacc/bison tool. Refer to the diagram illustrating the interactions between lex and yacc: every time the parser (yacc/bison) needs a token, Yacc/bison::yyparse() calls Lex::yylex().

Additional references: yacc/bison tool references on precedence as well as associativity. Read both the Operator Precedence and Context-Dependent Precedence sections for handling general operator precedence rules and the unary minus operator.

To Do

1. Write BNF grammar rules

Write BNF grammar rules to implement the tiny calculator with the following features. The grammar rules, with detailed descriptions, must be listed in a comment section at the beginning of your code file or in a separate Markdown file.
• statement_list: a list of binary expression statements.
• statement (assignment): var = expression.
• expression:
  - (expression): an expression in parentheses, to allow users to set precedence.
  - 5 binary arithmetic operations: +, -, *, /, ^. The behavior of each binary operation is the same as in Homework 2.
• Variables: support C-like identifiers to store numbers.
• Numbers: support C-like signed integer or signed float numbers (stored internally as double).
NOTE: The calculator should support signs (+, -), e.g., 2 - -3 + 2 - 7 - -2 (output: 2).

2. Implement the Tiny Calculator

Use the Unix yacc/bison tool to implement a rudimentary tiny calculator that:
• Computes the following basic arithmetic operations: I. addition (+), II. subtraction (-), III. multiplication (*), IV. division (/), V. exponentiation (^).
• Accepts user-entered binary arithmetic expressions, one per line.
• Processes multiple expressions interactively until the user exits. To commit an operation, the user enters the RETURN key at the end of the statement. Note that the user should be able to enter any number of expressions; see the Expected Output cases [8][9][10].
• Accepts, for each calculation, either an expression or an assignment, as shown in the expected output file.
• Accepts integer or double-precision floating point numbers for each operand.
• Follows operator precedence and associativity rules.

Error Handling

Your program must recognize and handle the following errors:
• Incorrect number format.
• Invalid grammar or missing operators (an invalid infix binary arithmetic operation), rejecting any stray letters in the expression (invalid operand and/or operator types).
• Unmatched (nested) parentheses.
• Divide-by-zero errors.
• References to undefined variables.

You should test all the test cases in the Expected Outputs section below. Your results must be consistent with the expected outputs.

3. Compare flex vs. yacc/bison

In your README, explain what tasks could not be performed using only the flex tool in Homework 2 but can now be achieved using the yacc/bison tool. Provide clear reasoning for your observations.

Hints and guidelines:

1. Full implementation of the MIT bison example: readme.md - Postfix Notation Calculator - Replit (https://replit.com/@sungheenam/Postfix-Notation-Calculator#readme.md). This REPL fully implements the example in Bison - Examples (mit.edu) (http://web.mit.edu/gnu/doc/html/bison_5.html). You may fork the repl and read the readme.md file and sample output first before playing around with it. The repl also has a makefile example; it is not the best one, but it will help you simplify the build process.
2. More examples in addition to the MIT example above: Example program for the lex and yacc programs - IBM Documentation (https://www.ibm.com/docs/en/aix/7.1?topic=information-example-program-lex-yacc-programs). A good starting point; it is somewhat similar to this assignment.

3. flex programming
• Update the flex program from HW2 to work with the bison code in HW3. Note that most of the programming logic moves to the bison file; the flex tool is used purely as the lexical analyzer.
• Include the generated ".tab.h" header in the flex program. The *.tab.h file is generated automatically when the bison script is processed. Read the postfix readme.md file linked above.
• For this homework, you don't need to define any subroutines in flex except yywrap(), because those flex subroutines, including main(), migrate to the bison program.
• In the regular-expression rules section of your flex code, you need to return the token and its lexeme when a token defined in the bison file is recognized. For example, for the var token in "stmt: var = expr":
  In the definition section of the flex code:
    var [_[:alpha:]][_[:alnum:]]*
  In the rules section, write C++ code to pass the var token:
    {var} {yylval.var = new std::string(yytext); return VAR;}

4. yacc/bison programming
• Feel free to use C++ instead of C for bison programming. Refer to the makefile example below to compile with g++ instead of gcc. Note that in your C++ code you can't use the "using namespace std" macro; rather, you must use fully qualified identifiers such as std::string, std::map, std::cout, std::endl, and so on.
• Define the following in the declaration section.
  In the C code definition section:
  A. External functions:
     extern int yylex();
     extern int yyparse();
     extern void yyerror(const char* s);
  B. Storage for the variables' values. You can use any data structure for this purpose, but I recommend a C++ map (dictionary):
     std::map<std::string, double> vars; // a dictionary storing variable names and their values
  In the bison definition section:
  A. Associating yylval with tokens' values. yylval is used to pass semantic values from the lexer (flex) to the parser (bison); it acts as a communication channel between the scanner (lexer) and the parser. Steps to associate yylval with tokens' values:
     1. Define a union for token values (%union). This data structure stores the values of some tokens (see below for hints on its usage):
        %union {
          double dval;      /* to store a number token's value */
          std::string *var; /* to store a variable ID */
        }
     2. Associate each token with a data type (%token).
     3. Assign values in the lexer (yylex()); see "flex programming" above.
     4. Use yylval in the grammar rules.
  B. %token <type> for terminals. Tokens are what's returned by the flex tool, e.g.:
     %token <dval> NUMBER /* the NUMBER token returns a double number */
  C. %type <type> for non-terminals defined in the grammar, if they return values, e.g.:
     %type <dval> expr /* expr returns a double number */
  D. Association rules for operators: %right or %left.
  E. Precedence rules for operators (order matters!). The precedence of operators is determined by the order in which they appear, with the lowest precedence at the top and the highest at the bottom.
• Define the rules (grammar). An assignment rule (stmt: var = expr) may associate a variable in the dictionary (vars) with the value of expr, e.g., vars[*$1] = $3;. A good example of rule definitions similar to this assignment: Bison - Examples (mit.edu) (http://web.mit.edu/gnu/doc/html/bison_5.html), Infix Notation Calculator: calc (http://web.mit.edu/gnu/doc/html/bison_toc.html#SEC27).

Build instructions example

A makefile example: (https://ucdenver.instructure.com/courses/558317/files/25220969?wrap=1) (https://ucdenver.instructure.com/courses/558317/files/25220969/download?download_frd=1)
Disclaimer: this makefile includes a basic set of commands and is intended as a beginner's guide to understanding makefile formats; treat it as a starting point. For effective use, always execute 'make clean' before running 'make' from the "Shell" tab rather than clicking the green "Run" button. Additionally, note that this makefile contains additional commands designed to rename generated *.c files to *.cpp files, as the g++ compiler is employed instead of gcc.

Entering the command sequence for C++ manually (lex program: calc.l, yacc program: calc.y):
  $ flex calc.l
  $ bison -dtv calc.y   # use bison instead of yacc
  $ mv -f lex.yy.c lex.yy.cpp
  $ mv -f calc.tab.c calc.tab.cpp
  $ g++ -c -std=c++11 lex.yy.cpp -lm
  $ g++ -c -std=c++11 calc.tab.cpp -lm
  $ g++ -o calc *.o -lstdc++ -lm

Expected Outputs

Expected Output example file (https://ucdenver.instructure.com/courses/558317/files/25872913?wrap=1) (https://ucdenver.instructure.com/courses/558317/files/25872913/download?download_frd=1). Your output should include, at least, all the test cases in the file.

REPL Setup:
• First, create a new REPL with "Bash", not "C++" or "C".
• Installing flex: when you execute the 'flex' command for the first time from the REPL console, it will prompt you to install flex. From the two available options, select the "flex" option.
• Installing bison/yacc: upon running the 'bison' or 'yacc' command from the REPL console, you will be prompted to install a bison/yacc application.
Select the "yacc" option, not 'bison_3_5'. The current REPL version encounters installation issues with the 'bison_3_5' application for reasons unknown to us.

In case bison_3_5 has already been installed, perform the following steps to fix the issue:
1. Click on the three dots (the "more" icon) located in the File Navigation window's leftmost column.
2. Choose "Show hidden ..." (the last option in the list). This will show all hidden files.
3. In the File Navigation window's lower section, locate the "replit.nix" file.
4. This file holds your REPL configuration information. Within the 'deps = [ .... ]' section, if an entry for the "pkgs.bison_3_5" instance is present, manually remove the line.
5. Run the "bison -dtv <your-file>.y" command from the command-line window, ensuring you choose the "yacc" option this time.
Note: for our assignment's intent, "yacc" is equally good as "bison".

Deliverable

Read the rubric first before you submit. Submit the following items:
• (f)lex and yacc/bison source code. Please submit two sets of identical files: one with the original source code files, and the other with *.txt extensions for my review.
• An output file demonstrating the test results, which cover the operations in the Expected Output section above.
• A readme.md containing the answers to the comparison task from Step 3 in the "To Do" list, plus the BNF grammar rules and program documentation.

Or a REPL "join" link containing:
• the flex and bison source code;
• the output file with your test results, covering at least the operations in the Expected Output section above;
• a readme.md file answering the tasks and giving the BNF documentation of your program.

Extra Credit (your own project): up to 10 points

Can you think of any project you have worked on, or would work on in the future, where yacc/lex could help to simplify a front-end interface?
Submit:
• A one-page proposal with a synopsis of the project that describes how the lex/yacc tool would help your project.
• (f)lex and yacc/bison source code and output demonstrating the implementation of the project.

Rubric: HW3 Bison - Tiny Calculator
• why_bison (5 pts): using this assignment as an example, describe the tasks that could not be done, or were extremely difficult to implement, if flex alone were used.
• Bison Implementation (40 pts):
  - General arithmetic operations completeness: 10 (-2 if not displaying the calculation number)
  - Precedence and association of operators: 5 (-2 if unary sign precedence (+/-) is not properly handled)
  - Handling variables correctly: 10 (-3 if no variable update message is displayed; -3 if the variable output is not properly displayed)
  - Interworking with flex: 5
  - Error handling: 10 (-2 if not checking whether a referenced variable is defined; -3 if no divide-by-zero check; -2 if output is displayed when there is an error)
• Rules (5 pts): correctly listing all the rules (productions).
• Extra Credit: proposal: 2; completeness of implementation: 8.
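To close with a concrete picture of the grammar task in the To Do list above: a BNF answer for such a calculator might take roughly the following shape (an illustrative sketch with my own nonterminal names, not the required answer):

```text
stmt_list : stmt_list stmt
          | stmt
stmt      : VAR '=' expr '\n'     ; assignment, e.g.  x = 2 + 3
          | expr '\n'             ; bare expression, evaluated on RETURN
expr      : expr '+' expr | expr '-' expr
          | expr '*' expr | expr '/' expr
          | expr '^' expr         ; exponent: right-associative, binds tightest
          | '-' expr | '+' expr   ; unary sign
          | '(' expr ')'
          | NUMBER
          | VAR
```

In bison, the ambiguity in expr is typically resolved with %left/%right precedence declarations (lowest precedence listed first) rather than by stratifying the grammar into term/factor levels.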

$25.00 View

[SOLVED] CDS526 c/cPython

CDS526: Artificial Intelligence-based Optimization
Case Study: Multi-objective Optimisation

1 Task
This case study is composed of two main tasks, problem solving (detailed in Section 2) and paper presentation (detailed in Section 3), aiming at strengthening your understanding of multi-objective optimisation and applications of multi-objective optimisation algorithms. This case study will take 20% of your final mark for this course (thus 20 points).
This is a group project. Each group should be composed of no more than four individuals. Each individual's mark depends on the correctness of the answers to the questions (cf. Section 2) and his/her performance in the group presentation (cf. Section 3).

2 Problem Solving (10 marks)
Context
An investor needs to select an appropriate portfolio from a set of investment options, aiming to minimize investment risk degree (f1) and maximize investment return degree (f2). There are currently seven portfolio options, with their corresponding f1 (risk) and f2 (return) values as follows:
A(1, 1), B(10, 9), C(5, 1), D(2, 3), E(8, 4), F(5, 5), G(7, 6)
Here, a lower f1 value indicates lower risk, and a higher f2 value indicates higher return. f1 and f2 are integers in {1, ..., 50}.

Question 1: Portfolio comparison. (2 marks)
(1.1) Comparing Portfolio F(5, 5) and Portfolio C(5, 1), which one is better? Analyse from the perspectives of risk and return and provide reasoning. (1 mark)
(1.2) Comparing Portfolio G(7, 6) and Portfolio E(8, 4), which one is better? Analyse from the perspectives of risk and return and provide reasoning. (1 mark)

Question 2: Identify all non-dominated solutions in the given seven portfolios. (1 mark)

Question 3: Investor preference matching. (2 marks)
There are currently two investors:
• The first investor is conservative, aiming to minimize investment risk (f1) and having lower requirements for return (f2).
• The second investor is aggressive, willing to take higher risks (f1) and aiming solely to maximize investment return (f2).
From the non-dominated solution set, select the most suitable portfolio for each investor and explain the reasoning.

Question 4: Investment portfolio selection based on preferences. (5 marks)
Assume that the investment portfolio options satisfy the formula f2 = sqrt(25^2 - (f1 - 25)^2), where f1 (risk) is an integer in {1, ..., 25}. There are three investors, each with different importance weights for risk and return as follows:
• Investor 1: wf1 = 0.2, wf2 = 0.8 (more focused on return).
• Investor 2: wf1 = 0.5, wf2 = 0.5 (equal importance on risk and return).
• Investor 3: wf1 = 0.9, wf2 = 0.1 (more focused on risk).
Please design an appropriate method and implement the following tasks through programming:
(4.1) Generate the portfolio set: based on the formula above, generate all possible investment portfolio options, i.e., the set of (f1, f2). (1 mark)
(4.2) Design a scoring function: for each investor, design a scoring function of the form Score = S(f1, f2, wf1, wf2), where wf1 and wf2 are the weights for risk and return, respectively, and f1 and f2 are the risk and return values of the portfolio. A larger score indicates a better match. Explain the meaning of this scoring function. (1 mark)
(4.3) Calculate the score of each portfolio for the three investors based on the scoring function. (1 mark)
(4.4) Identify the highest-scoring portfolio for each investor and output the results. (1 mark)
(4.5) Result analysis and explanation: analyse the highest-scoring portfolios for each investor and explain why these portfolios align with their preferences. Discuss how changes in the weights wf1 and wf2 affect the final portfolio selection.
(1 mark)

3 Paper Reading and Presentation (10 marks)
A list of papers on applications of multi-objective optimisation is provided. Each group will select one of those to read, and present the paper orally with slides on 28 April 2025 (10am-1pm & 4:30pm-7:30pm). A paper can only be selected by no more than one group (first come, first served).
You are also encouraged to look for other papers on applications of multi-objective optimisation that are not in the provided list. In such a case, please send the papers to the instructor of the course for approval first.
All individuals in the group should participate and contribute to the paper reading, slides preparation and oral presentation.

3.1 Presentation slides
Please limit your slide count to approximately 8 to 12 slides. Below is an example structure/outline:
• Title page: information of the paper (title, publication, year), group members, contribution percentage [1].
• Background and motivation/impact of the work: What is the topic? Why is it important and why should it be investigated?
• Challenges & why multi-objective optimisation methods: What are the challenges of tackling such problems? Why use multi-objective optimisation methods (thus the necessity)? What are the multiple objectives?
• Contributions/claims/take-home messages of the work.
• Problem formulation/modelling: input, output, search space, objective(s), constraint(s), etc. Focus on core messages instead of explaining mathematical formulations in detail, but mathematical formulations (if any) should be provided on the slides.
• Theoretical analysis and/or experimental studies & discussion: What are the theoretical analyses and/or experimental studies that support the claims/contributions of the paper? How is the outcome? Any insightful observations?
• Further work & limitations of the work.
• Your thoughts about the work: insights, criticism, etc.
[1] Individual contribution to the presentation is represented by a percentage in {0%, 5%, 10%, 15%, ..., 80%, 85%, 90%, 95%, 100%}. Contributions of all individuals in a group sum to 100%. If an individual's contribution is claimed to be 0, all members should provide a written letter to support the claim. The contributions cannot be revised after the slides submission deadline.

3.2 Oral presentation
• All students should present orally.
• Note that these are normal lecture sessions, therefore all students should be present in both sessions (10am-1pm & 4:30pm-7:30pm, 28 April 2025).
• We are going to randomly call on a group to present. A no-show results in a mark of 0.
• Each group can present for no more than 10 minutes, followed by 6 minutes of Q&A [2]. Note that you will be stopped when time is up.
[2] The length of the presentation and Q&A may be subject to change based on the number of groups.

3.3 Evaluation
• All groups/individuals will be assessed according to the following criteria:
  - Presentation slides (5 marks): correctness, clarity, conciseness, format, completeness. This is a group assessment, score denoted as Ss.
  - Oral presentation + Q&A (5 marks): correctness, clarity, conciseness, completeness, understanding, etc. This is an individual assessment, score denoted as So.
• This is group work. If you work individually, your score is (Ss + So) × 0.9.
• Assume a group of n students (n in {2, 3, 4}), with group score Ss for slides, individual scores S1o, ..., Sno for oral presentation and Q&A, and individual contributions C1, ..., Cn, respectively. If Ci = 0, then student i's score is.
• If an individual's total score for problem solving and presentation is above 20, then the overflow will be counted as his/her bonus in the total mark of this course [3].
[3] The formulas for calculating scores may be subject to change due to the actual group size and numbers.

4 Submission
4.1 What to submit
Each student should submit a zip file named casestudy-{groupnumber}.zip. Inside the zip, there should be:
• A pdf file named solutions.pdf for the problem solving task detailed in Section 2.
• A pdf or pptx file named presentation.pdf or presentation.pptx, respectively, to be used in the oral presentation.

4.2 Where to submit
Upload your zip file via Moodle.

4.3 Submission deadline
23:59 (Beijing time) April 27 (Sunday), 2025. No further update or edit (even minor) is allowed after this deadline.
A group will get a score of 0 for problem solving if any of the following happens:
• Plagiarism.
• Missing the submission deadline.
A group will get a score of 0 for presentation slides if any of the following happens:
• No show.
• Missing the submission deadline.
An individual will get a score of 0 for oral presentation if any of the following happens:
• No show.
• No presentation, or a negligible/meaningless presentation (e.g., presenting only the paper title and members' names).
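The programming steps in Question 4 (4.1-4.4) can be sketched compactly. The weighted-sum score S = wf2*f2 - wf1*f1 below is only one reasonable design the assignment leaves open; the function names are illustrative, not required.

```python
import math

def generate_portfolios():
    """(4.1) All (f1, f2) pairs on f2 = sqrt(25^2 - (f1 - 25)^2), f1 an integer in 1..25."""
    portfolios = []
    for f1 in range(1, 26):
        f2 = math.sqrt(25**2 - (f1 - 25)**2)
        portfolios.append((f1, f2))
    return portfolios

def score(f1, f2, wf1, wf2):
    """(4.2) Weighted-sum score: reward return, penalise risk (one possible design)."""
    return wf2 * f2 - wf1 * f1

def best_portfolio(portfolios, wf1, wf2):
    """(4.3)/(4.4) Highest-scoring portfolio for one investor's weights."""
    return max(portfolios, key=lambda p: score(p[0], p[1], wf1, wf2))

investors = {"Investor 1": (0.2, 0.8), "Investor 2": (0.5, 0.5), "Investor 3": (0.9, 0.1)}
portfolios = generate_portfolios()
for name, (wf1, wf2) in investors.items():
    f1, f2 = best_portfolio(portfolios, wf1, wf2)
    print(f"{name}: f1={f1}, f2={f2:.2f}")
```

With this particular score, the risk-averse Investor 3 lands at the low-risk end of the curve and Investor 1 near the high-return end, which is the kind of weight-sensitivity (4.5) asks you to discuss.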

$25.00 View

[SOLVED] MATH3030 c/cJava

MATH3030: Coursework, Spring 2025
17/03/2025
• If you are a MATH4068 student, please stop reading and go and find the coursework for MATH4068. This assessment is for MATH3030 students only.
• This coursework is ASSESSED and is worth 20% of the total module mark. It is split into two questions of equal weight.
• Deadline: Coursework should be submitted via the coursework submission area on the Moodle page by Wednesday 30 April, 10am.
• Do not spend more time on this project than it merits - it is only worth 20% of the module mark.
• Format: Please submit a single pdf document. The easiest way to do this is to use R Markdown or Quarto in R Studio. Do not submit raw markdown or R code - raw code (i.e. with no output, plots, analysis etc.) will receive a mark of 0.
• As this work is assessed, your submission must be entirely your own work (see the University's policy on Academic Misconduct).
• Submissions up to five working days late will be subject to a penalty of 5% of the maximum mark per working day. Deadline extensions due to Support Plans and Extenuating Circumstances can be requested according to School and University policies, as applicable to this module. Because of these policies, solutions (where appropriate) and feedback cannot normally be released earlier than 10 working days after the main cohort submission deadline.
• Report length: Your solution should not be too long. You should aim to convey the important details in a way that is easy to follow, but not excessively long. Avoid repetition and long print-outs of uninteresting numerical output.
• Please post any questions about the coursework on the Moodle discussion boards. This will ensure that all students receive the same level of support. Please be careful not to ask anything on the discussion boards that reveals any part of your solution to other students.
• I will be available to discuss the coursework at our Tuesday or Thursday sessions during the semester. I will not be meeting students 1-1 to discuss the coursework outside of these times.

Plagiarism and Academic Misconduct
For all assessed coursework it is important that you submit your own work. Some information about plagiarism is given on the Moodle webpage.

Grading
The two questions carry equal weight, and both will be marked out of 10. You will be assessed on both the technical content (use of R, appropriate choice of method) and on the presentation and interpretation of your results.

Coursework
The file UN.csv is available on Moodle, and contains data from the United Nations about 141 different countries from 1952 to 2007. This includes the GDP per capita, the life expectancy, and the population. Load the data into R, and extract the three different types of measurement using the commands below:
UN

$25.00 View

[SOLVED] program

CA Assignment 2: Clustering Algorithms
Assignment Number: 2 of 2
Weighting: 15%
Assignment Circulated: 10.03.2025
Deadline: 27.03.2025
Submission Mode: Electronic via Canvas
Purpose of assessment: The purpose of this assignment is to demonstrate (1) the understanding of KMeans, (2) the understanding of KMeans++, and (3) the understanding of evaluation metrics for clustering.
Learning outcome assessed: A critical awareness of current problems and research issues in data mining. The ability to consistently apply knowledge concerning current data mining research issues in an original manner and produce work which is at the forefront of current developments in the sub-discipline of data mining.

1. (20) Implement the k-means clustering algorithm and cluster the dataset provided using it. Vary the value of k from 1 to 9 and compute the Silhouette coefficient for each set of clusters. Plot k on the horizontal axis and the Silhouette coefficient on the vertical axis in the same plot.
2. (10) Generate synthetic data of the same size (i.e. same number of data points) as the dataset provided and cluster this data with KMeans. Plot k on the horizontal axis and the Silhouette coefficient on the vertical axis in the same plot.
3. (20) Implement the k-means++ clustering algorithm and cluster the dataset provided using it. Vary the value of k from 1 to 9 and compute the Silhouette coefficient for each set of clusters. Plot k on the horizontal axis and the Silhouette coefficient on the vertical axis in the same plot.
4. (20) Implement the Bisecting k-Means algorithm to compute a hierarchy of clusterings that refines the initial single cluster to 9 clusters. For each s from 1 to 9, extract from the hierarchy of clusterings the clustering with s clusters and compute the Silhouette coefficient for this clustering. Plot s on the horizontal axis and the Silhouette coefficient on the vertical axis in the same plot.
5. (20) Compute the confusion matrix, macro-averaged Precision, Recall, and F-score for the clustering shown in Figure 1. (Figure 1: Outcome of a Clustering Algorithm)
6. (10) For the same clusters as in Figure 1, compute B-CUBED Precision, Recall, and F-score.

Important Notes
1. No credit will be given for implementing any other type of clustering algorithm or for using an existing library for clustering instead of implementing it yourself. However, you are allowed to use:
• the numpy library (any function);
• the random module;
• matplotlib for plotting; and
• pandas.read_csv, csv.reader, or similar modules only for reading data from the files.
However, it is not a requirement of the assignment to use any of those modules.
2. Your program:
• should run and produce all results for Questions 1, 2, 3 and 4 in one click without requiring any changes to the code;
• should output only the required data in a clearly structured way; it should NOT output any intermediate steps;
• should assume that the input file is named 'dataset' and is located in the same folder as the program; in particular, it should NOT use absolute paths.
3. Programs that do not run will result in a mark of zero!
4. Your code should be as clear as possible and should contain only the functionality needed to answer the questions. Provide as many comments as needed to make sure that the logic of the code is clear enough to a marker. Marks may be deducted if the code is obscure, implements unnecessary functionality, or is overly complicated.
5. If you use the random module to make some random actions, use a fixed seed value so that your program always produces the same output.
6. The answers to Questions 1 to 4 will be in the form of .py files, and the answers to Questions 5 and 6 should be in PDF format.
7. The python code of the implementation of the algorithms should be included in the .py file, and not in the report.
8. You may use or (re)use any portion of the function that calculates the Silhouette coefficient from the solution to the tasks in Lab 6.
9. For Question 1, the name of the coding file should be KMeans.py.
10. For Question 2, the name of the coding file should be KMeansSynthetic.py.
11. For Question 3, the name of the coding file should be KMeansplusplus.py.
12. For Question 4, the name of the coding file should be BisectingKMeans.py.
13. For Questions 1 to 4, markers will run python filename.py. This should generate the corresponding plot in the current directory.
14. There will be a load_dataset function for Questions 1, 3 and 4. This function will be used to process the dataset provided.
15. For Questions 1 to 4, the following functions should be defined in your code:
• a function called plot_silhouttee to plot the number of clusters vs. the silhouette coefficient values;
• a function called ComputeDistance to compute the distance between two points;
• a function called initialSelection which will choose the initial cluster representatives or clusters;
• a function called clustername(x, k) where x is the data and k is the number of clusters.
16. For Questions 1 to 3, the following functions should also be present:
• a function named assignClusterIds that will assign cluster ids to each data point;
• a function named computeClusterRepresentatives which will compute the cluster representatives.
17. For Question 4, a computeSumfSquare function to compute the sum of squared distances within a cluster.
18. You can reuse the KMeans function implemented for Question 1 in Questions 2 and 4.
19. Each function should have a comment. Each comment should describe the input, the output and what the function does.
20. Edge-case conditions should be handled (e.g. file not given, file corrupted, only 1 data point in the file).
21. Your submission should be your own work. Do not copy or share! Make sure that you clearly understand the severity of penalties for academic misconduct.
22. Plotting should generate the plot in the current folder.
23. You're free to include as many functions in your program as you need. Nevertheless, you should have at least the functions specified earlier.
24. A sample program structure for KMeans is given below for illustration purposes only. You can follow a different program structure with the same functions. (Figure 2: Sample Code Structure)

$25.00 View

[SOLVED] PALS0039 /C

PALS0039 Introduction to Deep Learning for Speech and Language Processing
Year: 2024-2025. Assessment: Coursework. Period: Central Assessment. Weighting: 80%. Level: UG6, UG7 and PG7. Word count: 2500 words maximum (2000 text + 500 code). Component: 001. Deadline: Monday, 28 April 2025.
Please ensure you read and follow the Coursework Submission and Penalties page.
AI usage: You are allowed to use AI to assist with generating code. You are not allowed to use AI for any other purpose. Whether you use AI to assist with generating code or not, you need to demonstrate that you understand the code; no marks will be given for code that is not explained in your own words.

Coursework Description
Make sure you read the whole description, including the marking criteria.
Autocompletion: Humans can complete words, sentences (and even sounds) when parts are lost or masked by noise. Likewise, text-editing programs can make suggestions for the text that follows. This is what you will be doing in this task:
Use deep learning to build a model that predicts the next three characters (e.g., "Merry Christ..." -> "mas"). Evaluate the training and performance of the model. Present the code in a manner that makes it easy for others to use. In your discussion, comment on why you chose your model and parameters. A good discussion presents further architectures and explains why you did not choose them. If the model does not perform well, explain what would be needed to improve it. Marking (see below) will be based on the design, implementation and evaluation of the deep learning approach, not necessarily on the accuracy achieved.
For your database, you can choose or combine any of the ebooks that are uploaded to Moodle in the assignment section. Your model must not have used any other data. It is your task to create appropriate training and test sets from the data provided.

Submission requirements
• You should implement a working deep learning application as a Jupyter or Google Colab Notebook.
• The notebook should contain text and code. The text should provide all the necessary background, references, method, results analysis and discussion to explain the task, as you might put in a lab report. The code should at a minimum demonstrate loading and processing of data, building a deep learning model and evaluating its performance.
• The solution should be original - that is, you should motivate your own design decisions, not simply follow advice found on the web.
• No marks will be given for code alone. You need to demonstrate your understanding of the code and your choices.
• It is not necessary to obtain state-of-the-art performance on the task. The goal is to show that you know how to design, implement and run a deep learning task in speech or language.
• For submission, you should run the notebook so that all text, code and outputs are visible, then save the whole as a PDF file for submission. The pdf file will be marked. The notebook itself should be submitted as an appendix or be linked and available during the marking period.
• The use of tables and figures is encouraged, and contributes to a good presentation of the results.
• The overall length of the text in the notebook (excluding code, comments in the code, outputs and bibliography) should be around 1500 words and must not exceed 2000 words. Penalties will apply from 2001 words.
• You should use comments in the code to adhere to good coding practice. The code and in-code comments count as a nominal 500 words, but you may exceed this without penalty (though see point 4 of the marking criteria, conciseness of presentation).

Marking Criteria
1. Coding of the implementation, including in-code comments, description of the code and demonstration of knowledge about deep learning models (50%)
2. Presentation of the results (20%)
3. Discussion of outcomes and conclusions of the study (20%)
4. Use of Jupyter/Colab notebook and conciseness of presentation (10%)
Note that there are differences in the standard marking scheme used for level 6 and level 7 submissions.
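One way to start the data preparation the brief describes is to slide a fixed-length context window over the text and pair it with the next three characters as the target. This is a minimal, framework-free sketch; the window length, stride, and function names are illustrative choices, not part of the brief.

```python
def make_examples(text, context_len=20, target_len=3, stride=1):
    """Build (context, next-target_len-characters) training pairs from raw text."""
    examples = []
    for start in range(0, len(text) - context_len - target_len + 1, stride):
        context = text[start:start + context_len]
        target = text[start + context_len:start + context_len + target_len]
        examples.append((context, target))
    return examples

def build_vocab(text):
    """Map each distinct character to an integer id, as models need numeric input."""
    return {c: i for i, c in enumerate(sorted(set(text)))}

text = "Merry Christmas to all, and to all a good night."
pairs = make_examples(text)
vocab = build_vocab(text)
encoded = [[vocab[c] for c in ctx] for ctx, _ in pairs]
```

From here, the encoded contexts can feed whatever recurrent or transformer character model you choose, with the three target characters predicted one at a time or jointly; that architectural choice is exactly what the discussion section should justify.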

$25.00 View

[SOLVED] COMP2221

School of Computer Science: assessment brief
Module title: Networks
Module code: COMP2221
Assignment title: Coursework
Assignment type and description: Programming assignment in Java
Rationale: Design and develop client and multi-threaded server applications in Java to solve a specified problem
Guidance: Detailed guidance provided later in this document
Weighting: 30%
Submission deadline: 2pm Monday 24th March
Submission method: Gradescope
Feedback provision: Marks and comments for the submitted code returned via Gradescope
Learning outcomes assessed: Design, implement and test network protocols and applications.
Module lead: David Head

1. Assignment guidance
For this coursework, you will implement client and multi-threaded server applications for a simple voting system in which the server is initialised with the options to vote for, and clients can view the current number of votes for each option. Clients can also vote for one of the options, which is then updated on the server.
This coursework specification is for school Unix machines only, including the remote access feng-linux.leeds.ac.uk. We cannot guarantee it will work in any other environment.

2. Assessment tasks
To get started, unarchive the file cwk.zip (you can do this from the command line by typing unzip cwk.zip). You should then have a directory cwk with the following structure:
cwk --- client --- Client.java
    |-- server --- Server.java
Empty .java files for the client and server have been provided. Do not change the names of these files, as we will assume these file and class names when assessing. You are free to add additional .java files to the client and server directories.
The requirements for the server application are as follows:
• Accept at least two options that can be voted for, each consisting of a single word, as command line arguments when launched, e.g.
java Server rabbit squirrel duck
• The server should immediately quit with an error message for fewer than two options.
• Otherwise, it should run continuously.
• Use an Executor to manage a fixed thread pool with 30 connections.
• If the client makes a vote for <option>, the vote count for <option> should be increased by 1 (note all vote counts should initially be zero).
• However, if <option> does not exist, an error message should be returned.
• Following a request by a client, return the current state of the poll with one line per option, where each line contains at least the option and the current count. See below for an example of valid output.
• Create the file log.txt in the server directory and log every valid client request, with one line per request, in the following format:
date|time|client IP address|request
where request is one of list or vote, i.e. you do not need to log the option for vote operations. Do not add other rows (e.g. headers, blank lines) to the log file. Note that you must create the log file, not overwrite or append to an existing file. Any log.txt file in your submission will be deleted at the start of the assessment.
The requirements for the client application are as follows:
• Accept one of the following commands as command line arguments, and perform the stated task:
  - list, which requests the current state of the poll from the server and displays it to the user. Each option should be output on the same line as its current vote count, with a different line for each option. See below for an example of valid output.
  - vote <option>, which requests that the server increases the vote count for <option> by 1, and displays the message returned by the server.
• Exit after completing each command.
Your server application should listen on port number 7777. Both the client and the server should run on the same host, i.e. with hostname localhost.
All communication between the client and server must use sockets - they cannot access each other's disk space directly. Your solution must use TCP, but otherwise you are free to devise any communication format you wish, provided the requirements above are met.
Neither the client nor the server should expect interaction from the user once they are executed. In particular, instructions to the Client application must be via command line arguments. In the case of an invalid input, your client application should quit with a meaningful error message.

3. General guidance and study support
If you have any queries about this coursework, visit the Teams page for this module. If your query is not resolved by previous answers, post a new message. Support will also be available during the timetabled lab sessions.
You will need the material up to and including Lecture 11 to complete this coursework. You may like to first develop Client.java and Server.java to provide minimal functionality, following the examples covered in Lectures 7 and 8. You could then add another class that handles the communication with a single client. This will make it easier to implement the multi-threaded server using the Executor. Multi-threaded servers were covered in Lectures 10 and 11. You will need to use input and output streams; these were covered in Lecture 6.

Example session
First cd to cwk/server, compile, and launch the server with two options to vote for:
> java Server rabbit squirrel
Now in another tab or shell, cd to cwk/client and compile. If you execute the following commands, the output should be something like that shown below.
> java Client list
'rabbit' has 0 vote(s).
'squirrel' has 0 vote(s).
> java Client vote rabbit
Incremented the number of votes for 'rabbit'.
> java Client list
'rabbit' has 1 vote(s).
'squirrel' has 0 vote(s).
> java Client vote duck
Cannot find option 'duck'.
Note your application does not need to follow exactly the same output as in this example, as long as the requirements above are followed.

4. Assessment criteria and marking process
Your code will be checked using an autograder on Gradescope to test for functionality. Staff will then inspect your code and allocate the marks as per the provided mark scheme below. This includes the meaningful nature (or not) of error messages output by your submission.

5. Submission requirements
Remove all extraneous files (e.g. *.class, any IDE-related files etc.). You should then archive your submission as follows:
(a) cd to the cwk directory
(b) Type cd ..
(c) Type zip -r cwk.zip cwk
This creates the file cwk.zip with all of your files. Make sure you include the -r option to zip, which archives all subdirectories recursively.
To check your submission follows the correct format, you should first submit using the link Coursework: CHECK on Gradescope. Only once it passes all of the tests should you then submit to the actual submission portal, Coursework: FINAL.
The autograder is set up to use the standard Ubuntu image (base image version 22.04) with OpenJDK 21 installed as follows:
apt-get -y install openjdk-21-jdk
This version of Java most closely matches the RedHat machines in the Bragg teaching cluster and on feng-linux.leeds.ac.uk.
The following sequence of steps will be performed when we assess your submission:
(a) Unzip the .zip file.
(b) cd to the cwk/client directory and compile all Java files: javac *.java
(c) cd to the cwk/server directory and do the same.
(d) If there is a log.txt file in the server directory, it will be deleted.
(e) To launch the server, cd to the cwk/server directory and type e.g. java Server mouse rabbit
(f) To launch a client, cd to the cwk/client directory and type e.g. java Client list
If your submission does not work when this sequence is followed, you will lose marks.

6. Academic misconduct and plagiarism
Academic integrity means engaging in good academic practice. This involves essential academic skills, such as keeping track of where you find ideas and information and referencing these accurately in your work. By submitting this assignment you are confirming that the work is a true expression of your own work and ideas and that you have given credit to others where their work has contributed to yours.
There is a three-tier traffic-light categorisation for using Gen AI in assessments. This assessment is amber category: AI tools can be used in an assistive role. Use comments in your code to declare any use of generative AI, making clear what tool was used and to what extent.
Code similarity tools will be used to check for collusion, and online source code sites will be checked.

7. Assessment/marking criteria grid
This coursework will be marked out of 30.
11 marks: Basic operation of the Server application, including use of thread pool and log file output.
11 marks: Implementation of the list and vote commands.
4 marks: Meaningful error messages.
4 marks: Sensible code structure with good commenting.
Total: 30

$25.00 View

[SOLVED] Ai6126 project 1  celebamask face parsing

Project 1 Specification
Face parsing assigns pixel-wise labels for each semantic component, e.g., eyes, nose, mouth. The goal of this mini challenge is to design and train a face parsing network. We will use data from the CelebAMask-HQ Dataset [1] (see Figure 1). For this challenge, we prepared a mini-dataset, which consists of 1000 training and 100 validation pairs of images, where both images and annotations have a resolution of 512 x 512.
The performance of the network will be evaluated based on the F-measure between the predicted masks and the ground truth of the test set (the ground truth of the test set will not be released). (Figure 1: Sample images in CelebAMask-HQ)
We will evaluate and rank the performance of your network model on our given 100 unseen test images based on the F-measure. The higher the rank of your solution, the higher the score you will receive. In general, scores will be awarded based on the Table below.
Notes:
• Use sum(p.numel() for p in model.parameters()) to compute the number of parameters in your network.
• We host the validation and test sets on CodaBench. Please follow the guidelines to ensure your results are recorded. The website of the competition is https://www.codabench.org/competitions/5726
• IMPORTANT NOTE: Please refer to "Get Started → Submission" on the CodaBench page for the file structure of your submission. Please adhere to the required file structure. Submissions that do not follow the structure cannot be properly evaluated, which may affect your final marks. If your submission status is "failed", check the error logs to identify the issue. The evaluation process may take a few minutes.
• You can use the computational resources assigned by the MSAI course. Alternatively, you can use Google Colab for computation.
[1] Interactive Facial Image Manipulation, CVPR 2020
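The evaluation metric can be sketched as a per-class F1 between a predicted label mask and the ground-truth mask, averaged over classes. Note this is only an assumed form for illustration; the exact aggregation the organisers use on CodaBench is defined by their scoring script, not here.

```python
import numpy as np

def f_measure(pred, gt, num_classes):
    """Mean per-class F1 between two integer label masks of the same shape."""
    scores = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))   # pixels correctly labelled c
        fp = np.sum((pred == c) & (gt != c))   # pixels wrongly labelled c
        fn = np.sum((pred != c) & (gt == c))   # pixels of class c that were missed
        if tp + fp + fn == 0:                  # class absent from both masks: skip it
            continue
        scores.append(2 * tp / (2 * tp + fp + fn))
    return float(np.mean(scores))
```

Tracking this number on the 100 released validation pairs is a reasonable proxy for the hidden test-set ranking while you tune the network.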

$25.00 View

[SOLVED] Assignment 2: cmpt 371

1) Consider a host running an IPv4 resolver and a local DNS server. Do not consider IPv6 AAAA records. Assume the domain names given in A, B and C are hosts (not DNS servers). Assume the following domains are delegated by the DNS server for the nodes above them:

ca. com. gov. postulates.ca. seas.com. mammal.gov. integers.postulates.ca. beaches.seas.com.

Assume all other domains are not delegated. Both the host and the server have recently been rebooted. The server has made three queries since it was rebooted. The queries were for the A records for each of the following hosts. The hosts (not running DNS servers) are listed in the order the queries are made.

i. numbers.integers.postulates.ca
ii. landscapes.beaches.seas.com.
iii. lion.cat.mammal.gov.

Answers should all be in the form of DNS records, for example:
- NS record for each DNS server for domain fresh.fruit.gov.
- A record for each DNS server for domain favorit.color.ca.
- AAAA record for fish.shark.seafood.com.

a) [4 points] What DNS records would you expect to find in the cache of the local DNS server after the queries to resolve the address for numbers.integers.postulates.ca.?

b) [6 points] Consider each of the following hosts. For each host, answer the following two questions: Which DNS server would be authoritative for the host? Why is the DNS server you chose authoritative for the host?
I. Tiger.cat.mammal.gov.
II. Colorful.sunsets.beaches.seas.com.
III. Arithmetic.integers.postulates.ca.

c) [6 points] After the three queries above for hosts i, ii, and iii, an additional query for host wolf.mammal.gov. is made.
I. [2 points] What is the first DNS server queried? Why?
II. [2 points] What DNS record is the query requesting, in the query to the DNS server in I?
III. [2 points] What is returned by the DNS query in II?

d) [4 points] For each of the following two domains, state which DNS server would be contacted first and the number of iterative queries needed to obtain an answer to the query.
I. After the query for wolf.mammal.gov., the next query was for sandy.beaches.seas.com.
II. After the query for sandy.beaches.seas.com, the next query was for a.b.c.d.edu.

2) [15 points] Consider BitTorrent as an example of a peer-to-peer application. The host labeled X has just joined the illustrated torrent. As indicated below, it is the newest host in the torrent. Answer the following questions. Each answer should be no longer than three sentences.
a) How does host X obtain an initial list of potential peers?
b) What is the purpose of the tracker?
c) How does a potential peer from X's list become a neighboring peer?
d) What does it mean to unchoke a peer?
e) What does it mean for a peer to be optimistically unchoked?

[Diagram: X, the newest host in the torrent, and the tracker.]

3) [50 points] You will write two Python (compatible with v3.5) socket programs to implement a slightly modified version of the protocol rdt3.0 from your text (that we will have discussed in class). These two processes will exchange simple packets that are used to implement the protocol. Please note that the form of the packets has been changed from the example in your textbook. This means that the arguments of the makepkt function in the figures must be changed to a list of the variables in the fields of the packets described below. The arguments of makepkt should be in the order Field1, Field2, Field3, Field4.

Your programs must run on the Linux machines in the CSIL labs. If your programs do not run in the Linux environment in CSIL, you will receive a 0 for this problem. Beware: socket programs are often not easily portable between operating systems.

The contents of the simple packets you send between your sender (client) and receiver (server) sockets will include 4 values.
- Field 1: Packet contents: a 32-bit integer (data). The integer cannot be assumed to follow any pattern. Packet contents for an ACK or NACK packet is 0.
- Field 2: Sequence number for packet: a Boolean. Value is True if the sequence number is 1; value is False if the sequence number is 0.
- Field 3: Is this an ACK? A Boolean. True if the packet is an ACK, False if the packet is not an ACK.
- Field 4: Sequence number for ACK: a Boolean. Value is True if the sequence number is 1; value is False if the sequence number is 0.

In your sender program you will:
- Use one instance of a pseudo-random number generator random() to produce uniformly distributed random floating-point numbers in the range [0.0, 5.0). These pseudo-random numbers will be used to simulate random arrival times of packets. When a packet is sent, the send is followed immediately by a call to this random number generator. The pseudo-random number returned will be interpreted as the delay in seconds before the next packet will be generated. This delay occurs after the process enters either "wait for call 0 from above" or "wait for call 1 from above"; it is the time between entering one of these states and building the next packet (waiting for enough data for a packet).
- Use a second instance of a pseudo-random number generator random() to generate random numbers that will be used to determine if an ACK or NACK that has just arrived has been corrupted. This instance should generate uniformly distributed pseudo-random numbers in [0.0, 1.0). If the number generated is less than the input value of the probability that an ACK packet has been corrupted, then the ACK packet that has just arrived will be considered to be corrupted.
- Read the values of the following quantities used in your program at the start of your program. You will not read any quantities not in this list into your program. You will read the specified quantities in the order they are specified below.
o The seed for the random number generator used for timing
o The number of packets to send
o The seed for the random number generator used for determining if ACKs or NACKs have been corrupted
o The probability that an ACK has been corrupted, in [0, 1)
o The round-trip travel time (to be used for the timer that determines if a packet has not received an ACK)
- Print three messages immediately before a packet is sent.
o You must print the messages exactly as given (character by character), with the sole exception of items in bold, which will contain actual data values.
o First message: one of the four following messages should be printed.
  A packet with sequence number 0 is about to be sent
  A packet with sequence number 1 is about to be sent
  A packet with sequence number 0 is about to be resent
  A packet with sequence number 1 is about to be resent
o Second message should be printed.
  Packet to send contains: data = 123 seq = 0 isack=False ack = 0
o Third message: one of the two following messages should be printed.
  Starting timer for ACK0
  Starting timer for ACK1
- Print the following messages immediately after an uncorrupted ACK packet is received.
o An ACK will be received. Print the message for the packet received, followed by the third message.
o You must print the messages exactly as given below, with the sole exception of items in bold, which will contain actual data values.
o First message: one of the two following messages should be printed.
  An ACK0 packet has just been received
  An ACK1 packet has just been received
o Second message should be printed.
  Packet received contains: data = 0 seq = 1 isack= False ack = 0
o Third message: one of the two following messages should be printed.
  Stopping timer for ACK0
  Stopping timer for ACK1
- Print the following message immediately after a corrupted packet is received.
o You must print the message exactly as given below.
  A Corrupted ACK packet has just been received
- Print the following messages if a timer expires. There are two reasons the timer may expire. For a lost packet/ACK, the best way to handle this timeout is using the timeout associated with the socket recv(). For a corrupted ACK or an incorrect sequence number, something will be received before the timeout; you will need to check what is received, and if it is a corrupted or incorrect ACK you will need to monitor the time until the timer expires, then send your next packet. You must print the message exactly as given below.
o One of the two following messages should be printed.
  ACK0 timer expired
  ACK1 timer expired
- Print the following messages immediately before the sender moves to another state or returns to the same state.
o Only the appropriate one of the following messages should be printed for each transition.
o These messages must be printed exactly as given below.
  The sender is moving to state WAIT FOR CALL 0 FROM ABOVE
  The sender is moving to state WAIT FOR CALL 1 FROM ABOVE
  The sender is moving back to state WAIT FOR CALL 0 FROM ABOVE
  The sender is moving back to state WAIT FOR CALL 1 FROM ABOVE
  The sender is moving to state WAIT FOR ACK0
  The sender is moving to state WAIT FOR ACK1
  The sender is moving back to state WAIT FOR ACK0
  The sender is moving back to state WAIT FOR ACK1

In your receiver program you will:
- Use an instance of a pseudo-random number generator random() to generate uniformly distributed pseudo-random numbers in [0.0, 1.0). These pseudo-random numbers will be used to determine if a packet that has just arrived has been corrupted. If the pseudo-random number generated is less than the input value of the probability that the packet has been corrupted, then the packet that has just arrived will be considered to be corrupted.
- Read the values of the following quantities used in your program at the start of your program.
You will not read any quantities not in this list into your program. You will read the specified quantities in the order they are specified below.
o The seed for the random number generator used for determining if packets have been corrupted
o The probability that a packet has been corrupted
- Print the following messages immediately before an ACK is sent.
o One of the first two messages should be printed, followed by the third message.
o You must print the messages exactly as given, with the sole exception of items in bold, which will contain actual data values.
  An ACK0 is about to be sent
  An ACK1 is about to be sent
  Packet to send contains: data = 0 seq = 0 isack = True ack = 0
- Print the following messages immediately after an uncorrupted packet is received.
o One of the first four messages should be printed, followed by the fifth message, including the actual contents of the packet just received.
o You must print the messages exactly as given, with the sole exception of items in bold, which will contain actual data values.
  A packet with sequence number 0 has been received
  A packet with sequence number 1 has been received
  A duplicate packet with sequence number 0 has been received
  A duplicate packet with sequence number 1 has been received
  Packet received contains: data 333 seq = 0 isack = True ack = 1
- Print the following message immediately after a corrupted packet is received. You must print the message exactly as given.
  A Corrupted packet has just been received
- Print the following messages immediately before the receiver moves to another state or returns to the same state.
o The appropriate one of the following messages should be printed.
  The receiver is moving back to state WAIT FOR 0 FROM BELOW
  The receiver is moving back to state WAIT FOR 1 FROM BELOW
  The receiver is moving to state WAIT FOR 0 FROM BELOW
  The receiver is moving to state WAIT FOR 1 FROM BELOW

4) Consider a link with the following properties:
- 75-meter link length
- Bit rate of 5 Mibps (1 Mib = 2^20 bits) in each direction
- Propagation velocity through the link is 2.5 * 10^8 m/s
- Size of one HTTP response (containing one HTTP object) is 80 Kibits (1 Kibit = 2^10 bits) including headers
- Packets containing HTTP requests, SYNs, ACKs, and FINs are 1024 bits long including headers

Assume that a request is made for a particular webpage. The initial response is a single object. While processing that single object, 13 additional objects are requested. Consider how long it would take to obtain all 14 of these objects in each of the following scenarios. Include the time used by both the 3-way handshake to establish the TCP connection and the 3-way handshake to close the TCP connection.

a) [5 points] Assume that each object is requested using a non-persistent HTTP connection. Assume that only one non-persistent connection can be in use at any given time. Each non-persistent connection has a rate of 5 Mibps. The first FIN sent is a separate packet (not piggybacked on an object or request).
b) [5 points] Assume a single persistent HTTP connection is used for all 14 objects. No pipelining is used.
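The four packet fields above can be serialized in several ways; below is one possible sketch of the makepkt function (and a matching parser) using Python's struct module. The byte layout (network byte order: a signed 32-bit integer followed by three one-byte Booleans) is an assumption, since the assignment does not fix a wire format.

```python
import struct

# Assumed wire layout (not specified by the assignment): network byte order,
# one signed 32-bit integer (Field 1) and three Booleans packed as single
# bytes (Fields 2-4), giving a 7-byte packet.
FMT = "!i???"

def makepkt(data, seq, isack, ack):
    """Build a packet from the four fields, in the order Field1..Field4."""
    return struct.pack(FMT, data, seq, isack, ack)

def parsepkt(raw):
    """Recover (data, seq, isack, ack) from a received packet."""
    return struct.unpack(FMT, raw)
```

For example, an ACK with sequence number 0 would be makepkt(0, False, True, False), and the bytes returned can be passed directly to a socket's send()/sendto().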

$25.00 View

[SOLVED] Assignment 3: cmpt 371

1) [24 points] Refer to the TCP state machine in the class notes (given below). Assume that a TCP connection between stations A (client) and B (server) has been in use. This connection was established using an active-passive open. Explain how the connection can be closed using an active close initiated by the client. To help you explain, draw the series of segments exchanged during the active close for each of the three possible paths through the state machine (from state ESTAB to state CLOSED). To indicate the type of diagram desired, a SAMPLE DIAGRAM of the desired type is shown below the state machine. (If you happen to be interested, the sample diagram is for an active-active (peer-peer) open.)

[SAMPLE DIAGRAM: stations A and B both start in state CLOSED. A sends a SYN and B sends a SYN; both stations move to state SYN SENT. A sends a SYN with a piggybacked ACK of B's SYN, and B sends a SYN with a piggybacked ACK of A's SYN; both stations move to state ESTAB. Each station receives the ACK of its own FIN.]

2) Consider the figure shown below. Assuming TCP (Reno) is using congestion control (slow start and congestion avoidance modes, as discussed in your text and in class), answer the following questions. Assume the system was running before the sample window-length data shown in the plot below was collected. Assume the first value of ssthresh shown on the diagram is 64. Point 1 is assumed to be at the beginning of interval 1. If a failure that causes a transition between modes occurs between two points, the first point is considered to be part of the pre-failure mode and the second point to be part of the post-failure mode. For other transitions between modes (slow start, congestion avoidance), one point is a part of both modes: it is both the end of one mode and the beginning of the other. For example, point 13 below would be part of both modes. For each question you should provide a short discussion justifying your answer.
a) [4 points] Identify the periods of time when TCP congestion avoidance is operating.
b) [4 points] Identify the periods of time when TCP slow start is operating.
c) [3 points] During the 6th transmission round, is segment loss detected by a triple duplicate ACK or by a timeout? What is the value of ssthresh?
d) [3 points] During the 22nd transmission round, is segment loss detected by a triple duplicate ACK or by a timeout?
e) [10 points] What is the value of ssthresh and the size of the congestion window during the 4th transmission round? During the 16th transmission round? During the 27th round? During the 35th round? During the 39th round? NOTE: The value of the size of the congestion window during the 4th transmission round is the value at the beginning of the 4th transmission round.

[Plot: congestion window size (segments, axis 0-95) versus transmission round (1-45).]

Explain how the values of ssthresh and the congestion window size are determined at each change in the value of ssthresh. The first value of ssthresh (at point 1) is given and need not be discussed.

round   ssthresh   congestion window size
4
16
27
35
39

f) [2 points] During what transmission round is the 270th segment sent? The 1300th segment? Include the segments sent in transmission round 1.

3) Consider two hosts transferring data using a TCP connection. Assume the connection between hosts A and B has already been made; the establishment of the TCP connection is not a part of this problem. Host A is sending a stream of application data to host B. The first octet of data A is sending to B in the transfer of data illustrated below is octet 3453. Each packet sent by A contains 550 octets of data. Host B is sending a different stream of application data to host A. The first octet of data B is sending to A in the transfer of data illustrated below is 7777.
Each packet B is sending to A contains 400 octets of data.

a) [15 points] Fill in the sequence numbers and acknowledgement numbers on the diagram below.
b) [5 points] What are the two TCP error control mechanisms shown below? These are the mechanisms that help recover from loss of packets or ACKs.
c) [10 points] Give a step-by-step description of how the first mechanism you identified in b) operates, using the diagram as an example to help in your explanation.
d) [10 points] Give a step-by-step description of how the second mechanism you identified in b) operates, using the diagram as an example to help in your explanation.

[Diagram: exchange of packets between A and B, with blank SEQ NUM and ACK NUM fields to fill in; A's first sequence number is 3453, B's is 7777.]

4) Consider the CIDR routing table shown below.

Destination    Gateway         Mask             Interface
192.168.48.0   *               255.255.240.0    eth1    Line 1
192.168.4.0    *               255.255.254.0    eth2    Line 2
192.168.0.0    *               255.248.0.0      eth3    Line 3
120.124.160.0  192.168.0.2     255.255.224.0    eth3    Line 4
192.156.32.0   128.168.48.1    255.255.255.0    eth1    Line 5
0.0.0.0        192.168.200.12  0.0.0.0          eth4    Line 6

a) [5 points] Is the forwarding table (routing table) above optimized so that the first match found is the "best" match? Explain why or why not.
b) [15 points] For each address in the table below, state which line in the routing table above is used, which interface the packet is sent through, and the IP address of the host the packet will be sent to at the Ethernet layer. Fill in the table below.

Destination address   Line in forwarding table   Interface   Next hop IP address
192.168.5.55
192.168.6.3
192.168.55.12
192.156.33.1
120.124.160.12
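For part b), the longest-prefix-match rule can be checked mechanically. The sketch below uses Python's ipaddress module with the table entries transcribed from the problem; it is only an illustration of the lookup rule, not the expected answer format:

```python
import ipaddress

# (destination, mask, line number) transcribed from the routing table above.
TABLE = [
    ("192.168.48.0",  "255.255.240.0", 1),
    ("192.168.4.0",   "255.255.254.0", 2),
    ("192.168.0.0",   "255.248.0.0",   3),
    ("120.124.160.0", "255.255.224.0", 4),
    ("192.156.32.0",  "255.255.255.0", 5),
    ("0.0.0.0",       "0.0.0.0",       6),  # default route
]

def lookup(addr):
    """Return the line number of the longest-prefix match for addr."""
    ip = ipaddress.ip_address(addr)
    best_len, best_line = -1, None
    for dest, mask, line in TABLE:
        net = ipaddress.ip_network(dest + "/" + mask)
        if ip in net and net.prefixlen > best_len:
            best_len, best_line = net.prefixlen, line
    return best_line
```

For instance, 192.168.5.55 matches both line 2 (a /23) and line 3 (a /13), and the longer /23 prefix wins.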

$25.00 View

[SOLVED] Assignment 4: cmpt 371

1) [19 points] A source host sends a packet with an MTU of 1500 octets in an Ethernet frame. The MTU, or maximum transmission unit, indicates the length of the data field in the Ethernet frame (the length of the IP packet). The length of the IPv4 header is 32 octets. The length of the TCP header is 24 octets. On its way to the destination, the packet passes through a network with an MTU of 920 octets. Explain how the packet is fragmented by filling in the requested information in the diagram below. You should create a copy of the diagram below, including the information requested, in your solution. The data you are to fill in is indicated in three ways:
a. A space after an = needs to be filled with a numeric value.
b. A ? needs to be replaced with a label indicating the type of header and its length in octets.
c. A %% indicates that the field should hold the length of the application data (without any encapsulating headers). In addition to the final answer, you should provide an equation showing how that length was calculated (either in words or just an expression showing how you combined the supplied values to determine the length). Remember the payload of the IP datagram for each fragment (except the last) must be a multiple of 8.

Consider what would change in your calculation if the MTU were increased to 927. Does the amount of data and/or the offset in the first fragment change when the MTU is increased from 920 to 927? Give an explanation, including a calculation, of why the amount of data changes (or does not change) and why the offset changes (or does not change).

[Diagram: the original Ethernet frame before fragmentation (? headers, %% application data) and the Ethernet frames containing IP fragments after fragmentation, each annotated with MTU = ___ bytes, actual size = ___ octets, More = ___, Offset = ___ octets (value in header field).]

2) Consider routing within an AS (autonomous system). Answer the following questions regarding routing protocols.
Each answer should be no more than two sentences per point.
a) [2 points] What is an AS?
b) [1 point] Where is an internal routing protocol used?
c) [1 point] Where is an external routing protocol used?
d) [2 points] What is a distance vector?
e) [1 point] When using a link-state type protocol, what routing information is exchanged?
f) [2 points] Consider a router A in an AS. When using a link-state type protocol, which routers send routing information to router A? Which routers does router A send routing information to?
g) [2 points] One of the problems with distance-vector-based routing is slow convergence. What is slow convergence?
h) [3 points] Is RIP a link-state routing protocol? How does RIP mitigate (reduce) the effects of slow convergence?
i) [2 points] What problem does RIP use triggered updates to mitigate? Briefly explain how triggered updates mitigate this problem.
j) [1 point] When using a link-state protocol, what routing information is exchanged?
k) [4 points] Give an example of a link-state protocol discussed in class. What method does this protocol use to share routing information between routers? Give a two-to-three-sentence summary of this method.

3) [14 points] Consider the distributed Bellman-Ford algorithm used in the first-generation internet. At station A, new routing tables have just arrived from A's nearest neighbors B and D. The cost from A to B is 6 and the cost from A to D is 4. These newly received distance vectors are given below. Based on these newly received distance vectors, calculate a new distance vector for node A.

New table for A (to be filled in):

      Cost   Cost   Cost   Next
A     -      -      -      -
B
C
D
E
F
G
H

Received distance vectors:

      from B          from D
      Cost   Next     Cost   Next
A     6      A        2      A
B     -      -        7      G
C     3      C        6      G
D     8      A        -      -
E     2      E        5      G
F     10     C        13     G
G     3      E        4      G
H     7      E        8      G

4) [20 points] Consider a system using flooding with a hop counter. Suppose that the hop counter is originally set to the diameter of the network. When the hop count reaches zero, the packet is discarded except at its destination.
Does this always ensure that a packet will reach its destination in the case that there exists at least one operable path? Why or why not? Give an example or counterexample.

5) A CRC is constructed to generate an 8-bit frame check sequence (FCS) for a 19-bit message. The generator polynomial is P(X) = X^8 + X^7 + X^4 + X^3 + X + 1. The message bits for a particular message are 1 1 0 0 1 1 0 0 0 0 1 1 1 0 1 0 1 1 1.
a) [4 points] Draw a shift register circuit to perform the calculation of the CRC bits.
b) [4 points] List four examples of the types of errors an FCS can detect.
c) [6 points] Can the errors represented by each of the following error polynomials E(X) be detected by the CRC? Why or why not?
0010000100000010000
0000000010101100000
0001000111111000000
d) [7 points] Determine the FCS using polynomial division. Show your work.
e) [9 points] Determine the FCS using your shift register circuit. Show your work.
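For checking the polynomial-division answer, mod-2 long division is easy to script. The sketch below assumes the generator reconstructs as P(X) = X^8 + X^7 + X^4 + X^3 + X + 1 (bit pattern 110011011); it returns the remainder after appending fcs_len zero bits to the message:

```python
# Assumed generator P(X) = X^8 + X^7 + X^4 + X^3 + X + 1, written as a bit
# pattern (coefficients of X^8 down to X^0).
GENERATOR = 0b110011011

def crc_remainder(bits, generator=GENERATOR, fcs_len=8):
    """Mod-2 (XOR) long division: remainder of bits * 2^fcs_len by generator."""
    reg = int(bits, 2) << fcs_len          # append fcs_len zero bits
    glen = generator.bit_length()
    for shift in range(reg.bit_length() - glen, -1, -1):
        if reg >> (shift + glen - 1) & 1:  # leading bit set: subtract (XOR)
            reg ^= generator << shift
    return reg                             # remainder, always < 2^(glen - 1)
```

Appending the returned remainder to the message makes the whole codeword divide evenly by the generator (remainder 0), which is exactly what the receiver's shift register circuit in e) would verify.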

$25.00 View