# [SOLVED] CS6601 Assignment 6: Hidden Markov Models

## Overview

Hidden Markov Models are used extensively in Artificial Intelligence, Pattern Recognition, Computer Vision, and many other fields. If a system has unobservable (hidden) states, and each state depends only on the previous state (the Markov assumption), then we can model that system with probability distributions over a sequence of observations. The idea is that we can feed the system a series of observations and query it for the most likely sequence of states that generated those observations.

### The Files

You will only have to edit and submit **submission.py**, but here are all the notable files:

1. **notebook.ipynb**: Optional Jupyter notebook for completing the assignment.
2. **submission.py**: Where you will ultimately put your probabilities and Viterbi trellis.
3. **hmm_submission_tests.py**: Local test file. Because the trellis values are static, local tests are extremely limited. Please do not share values or probabilities with other students if you create your own tests.

## Submission

All submissions will be via Gradescope. If you're completing this assignment in the Jupyter notebook, you must run `notebook2script.py` to export your work to a Python file. To generate your submission file, run:

`python notebook2script.py submission`

and your file will be created under the `submission` directory. Upload the resulting `submission.py` file to the Assignment 6A assignment on Gradescope for feedback.

#### IMPORTANT:

A total of 10 submissions is allowed for this assignment. Please use your submissions carefully and do not submit until you have thoroughly tested your code locally. If you're at 9 submissions, use your tenth and last submission wisely. The submission marked as 'Active' in Gradescope will be the submission counted towards your grade.

### Resources

1. Canvas lectures on Pattern Recognition Through Time (Lesson 8)
2. Challenge Questions on Piazza

### Local Testing

If you are using `submission.py` to complete the assignment instead of the Jupyter notebook, you can run the tests with:

`python hmm_submission_tests.py`

This will run all unit tests for the assignment. If you are going step by step, comment out the tests (at the bottom of the file) that are not related to your current part.

## The Assignment

The goal of this assignment is to demonstrate the power of probabilistic models. You will build a word recognizer for American Sign Language (ASL) video sequences. In particular, this project employs [hidden Markov models (HMMs)](https://en.wikipedia.org/wiki/Hidden_Markov_model) to analyze a series of measurements taken from videos of American Sign Language (ASL) collected for research (see the [RWTH-BOSTON-104 Database](http://www-i6.informatik.rwth-aachen.de/~dreuw/database-rwth-boston-104.php)).

In each video, an ASL signer is signing a meaningful sentence. In a typical ASL recognition system, you observe the XY coordinates of the signer's left hand, right hand, and nose for every frame. The following diagram shows how the positions of the left hand (red), right hand (blue), and nose (green) change over time in video #66. Saturation of the colors represents time elapsed.

In this assignment, for the sake of simplicity, you will only use the Y-coordinates of each hand to construct your HMM. In Part 1 you will build a one-dimensional model, recognizing words based only on a series of right-hand Y coordinates; in Part 2 you will go multidimensional and utilize both hands. At that point, you will have two observed coordinates at each time step (frame), representing the right-hand and left-hand Y positions.

The words you will be recognizing are "BUY", "HOUSE", and "CAR". These individual signs can be seen in the sign phrases from our dataset:

- JOHN CAN BUY HOUSE
- JOHN BUY CAR [FUTURE]

### Part 1a: Encoding the HMM _[15 Points]_

Follow the method described in Canvas **Lecture 8: 29. HMM Training** to determine the following values for each word:

1. the transition probabilities of each state
2. the mean and standard deviation of the emission Gaussian distribution of each state

Use the training samples from the table below. Provide the transition and prior probabilities as well as the emission parameters for all three words with **accuracy to 3 decimal digits**.

Round values to 3 decimal places throughout the entire assignment:

- 0.1 stays 0.1 or 0.100
- 0.1234 rounds to 0.123
- 0.2345 rounds to 0.235
- 0.3456 rounds to 0.346
- 0.0123 rounds to 0.012
- 0.0125 rounds to 0.013

These values can be hardcoded in your program. Don't use Python's `round()`.

Word | Frames | Observed sequence | Initial State 1 | Initial State 2 | Initial State 3
--- | --- | --- | --- | --- | ---
BUY | 6 | 36, 44, 52, 56, 49, 44 | 36, 44 | 52, 56 | 49, 44
BUY | 8 | 42, 46, 54, 62, 68, 65, 60, 56 | 42, 46, 54 | 62, 68, 65 | 60, 56
BUY | 10 | 42, 40, 41, 43, 52, 55, 59, 60, 55, 47 | 42, 40, 41 | 43, 52, 55 | 59, 60, 55, 47
CAR | 10 | 47, 39, 32, 34, 36, 42, 42, 42, 34, 25 | 47, 39, 32 | 34, 36, 42 | 42, 42, 34, 25
CAR | 9 | 35, 35, 43, 46, 52, 52, 56, 49, 45 | 35, 35, 43 | 46, 52, 52 | 56, 49, 45
CAR | 8 | 28, 35, 46, 46, 48, 43, 43, 40 | 28, 35, 46 | 46, 48, 43 | 43, 40
HOUSE | 15 | 37, 36, 32, 26, 26, 25, 23, 22, 21, 39, 48, 60, 70, 74, 77 | 37, 36, 32, 26, 26 | 25, 23, 22, 21, 39 | 48, 60, 70, 74, 77
HOUSE | 15 | 50, 50, 49, 47, 39, 39, 38, 38, 50, 56, 61, 67, 67, 67, 67 | 50, 50, 49, 47, 39 | 39, 38, 38, 50, 56 | 61, 67, 67, 67, 67
HOUSE | 16 | 45, 43, 44, 43, 40, 35, 36, 37, 39, 45, 60, 68, 66, 72, 72, 75 | 45, 43, 44, 43, 40 | 35, 36, 37, 39, 45 | 60, 68, 66, 72, 72, 75

As shown in the diagram below, each of the three words (BUY, CAR, and HOUSE) has exactly **THREE hidden states** in its HMM.
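The rounding examples above require half-up behavior (0.0125 rounds to 0.013), which is one reason plain `round()` is ruled out. Below is a minimal helper that matches the examples using decimal half-up rounding; the name `round3` is illustrative and not part of the assignment:

```python
from decimal import Decimal, ROUND_HALF_UP

def round3(x):
    """Round to 3 decimal places with halves rounded up (0.0125 -> 0.013)."""
    # Going through str() keeps the printed decimal value rather than the
    # binary float representation, so 0.0125 is treated as an exact half.
    return float(Decimal(str(x)).quantize(Decimal("0.001"), rounding=ROUND_HALF_UP))
```

For example, `round3(0.2345)` returns `0.235` and `round3(0.0123)` returns `0.012`, matching the table of examples above.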
All words must start from State 1 and can only transition to the next state or stay in the current one.

### _Training sequences need to have 3 hidden states no matter what!_

If you follow the HMM training procedure described in Canvas, you might encounter a situation where a hidden state is **_squeezed_** out by an adjacent state; that is, a state might have its only observation moved to another state. In that situation, always keep at least one observation for that hidden state.

Example: assume you've reached a stage where the following is true:

- State 1 has mean=53 & std=6
- State 2 has mean=37 & std=9
- State 3 has mean=70 & std=8

The next training sample has the following observed sequence:

`45 45 34 | 30 30 25 36 52 | 62 69 74`

and you are trying to adjust the location of the boundary between State 1 and State 2. You first move it one step to the left, since 34 is closer to State 2, and then you realize that 45 is also closer to State 2. If you follow the same routine, you will end up with no observation for State 1. To prevent this from happening, you have to stop at the last "45" and, as a result, leave the boundary as

`45 | 45 34 30 30 25 36 52 | 62 69 74`

Now you meet the '3 hidden states per sample' requirement.

### Some hints/guidelines for training

#### How should we decide whether an observation is closer to one state or another?

Check how many standard deviations the observation is from the mean of each state. Example: say 46 is the rightmost observation in S1. If we denote the mean and std of state i as μi, σi, then we should be comparing |46−μ1| / σ1 vs |46−μ2| / σ2.

#### For HMM training, which side of the boundary should we check first while assigning observed sequence values to states?

After computing the mean and std for each state, adjust the boundary between the states. Always start from the first element at the LEFT side of the boundary. If the LEFT element is closer to the next state, then move the boundary leftward. If the LEFT element should stay in the current state, then check the RIGHT element. This is done only to make sure that everyone gets the same results in the context of the assignment.

#### Functions to complete:

1. `part_1_a()`

---

### Part 1b: Creating the Viterbi Trellis _[40 Points]_

The goal here is to use the HMM derived in Part 1a (states, prior probabilities, transition probabilities, and parameters of the emission distribution) to build a Viterbi trellis. When provided with an evidence vector (list of observed right-hand Y coordinates), the function will return the most likely sequence of states that generated the evidence and the probability of that sequence being correct.

For example, an evidence vector [36, 44, 52, 53, 49, 44] should output a sequence ['B1', … 'B2', … 'B3'].

If no sequence can be found, the algorithm should return one of the following tuples: `(None, 0)` (null), `([], 0)` (empty list) or `(['C1', 'C1', … 'C1'], 0)` (or all being the first state of that letter). "No sequence can be found" means the probability reaches 0 midway. If you find an incomplete sequence with some probability, output that sequence with its probability.

#### Functions to complete:

1. `viterbi()`

#### Hint:

In order to reconstruct your most-likely path after running Viterbi, you'll need to keep track of a back-pointer at each state, which directs you to that state's most likely predecessor. You are asked to use the provided function `gaussian_prob` to compute emission probabilities.
In a typical HMM you have to convert the probabilities to log space in order to prevent numerical underflow, but in this assignment we will only test your function against rather short sequences of observations, so **DO NOT** convert the probability to logarithmic probability or you will fail on Gradescope.

#### Gradescope:

In the autograder, we will also test your code against other `evidence_vectors`.

---

### Part 2a: Multidimensional Output Probabilities _[6 Points]_

In Part 1a, we used only right-hand Y-axis coordinates as our feature; now we are going to use both hands. Since ASL is two-handed, using observations from both the right and left hands as features can increase the accuracy of our model when dealing with more complex sentences.

Here you are given the transition probabilities and the emission parameters of left-hand Y-axis locations, obtained by the same procedure conducted in Part 1a.

One thing to notice: in Part 1, the `viterbi` function is tested against single words. That is, the input evidence vector will not transition between different words. However, for Part 2, the input evidence vector can be either a single word or a verb phrase such as "BUY CAR" and "BUY HOUSE". Adjust the given transition probabilities to adapt to this fact.

*NOTE: Add NEW keys to the transition dictionary ONLY if there is a NON-ZERO transition probability.*

BUY | State 1 | State 2 | State 3
--- | --- | --- | ---
Mean | 108.200 | 78.670 | 64.182
Std | 17.314 | 1.886 | 5.573

CAR | State 1 | State 2 | State 3
--- | --- | --- | ---
Mean | 56.300 | 37.110 | 50.000
Std | 10.659 | 4.306 | 7.826

HOUSE | State 1 | State 2 | State 3
--- | --- | --- | ---
Mean | 53.600 | 37.168 | 74.176
Std | 7.392 | 8.875 | 8.347

#### Functions to complete:

1. `part_2_a()`

---

### Part 2b: Improving the Viterbi Trellis _[39 Points]_

Modify the Viterbi trellis function to allow multiple observed values (Y locations of the right and left hands) per state. The return format should be identical to Part 1b.

#### Functions to complete:

1. `multidimensional_viterbi()`

#### Gradescope:

In the autograder, we will also test your code against other `evidence_vectors`.

---

**CONGRATULATIONS!** You have just completed your final assignment for CS6601 Artificial Intelligence.
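As a concrete picture of the trellis and back-pointers described in Part 1b, here is a minimal, unofficial sketch of the Viterbi recursion. It assumes the model is given as dictionaries of priors, transition probabilities, and per-state `(mean, std)` emission parameters; `gaussian_prob` below is a stand-in for the helper the assignment provides, and none of these names are the graded API.

```python
import math

def gaussian_prob(x, mu, sigma):
    # Stand-in for the assignment's provided helper: Gaussian pdf at x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def viterbi(evidence, states, prior, trans, emit):
    """Return (most likely state path, its probability) for an evidence vector.

    prior[s] and trans[s][s'] are probabilities; emit[s] is a (mean, std) pair.
    Probabilities stay linear (no logs), as the assignment requires.
    """
    if not evidence:
        return [], 0.0
    # trellis[t][s] = probability of the best path that ends in state s at time t
    trellis = [{s: prior.get(s, 0.0) * gaussian_prob(evidence[0], *emit[s])
                for s in states}]
    back = [{}]  # back[t][s] = best predecessor of s at time t
    for obs in evidence[1:]:
        col, ptr = {}, {}
        for s in states:
            best_prev, best_p = None, 0.0
            for p in states:
                cand = trellis[-1][p] * trans.get(p, {}).get(s, 0.0)
                if cand > best_p:
                    best_prev, best_p = p, cand
            col[s] = best_p * gaussian_prob(obs, *emit[s])
            ptr[s] = best_prev
        trellis.append(col)
        back.append(ptr)
    last = max(trellis[-1], key=trellis[-1].get)
    if trellis[-1][last] == 0.0:
        return None, 0.0  # probability reached 0 midway: no sequence found
    # Walk the back-pointers from the final state to recover the path.
    path = [last]
    for ptr in reversed(back[1:]):
        path.append(ptr[path[-1]])
    return path[::-1], trellis[-1][last]
```

On a toy two-state model with well-separated emission means (0 and 10), the evidence `[0, 10]` recovers the path `['A', 'B']`; the real submission uses the three states per word trained in Part 1a instead.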


# [SOLVED] Expectation Maximization – Assignment 5 – CS6601

## Setup

Clone this repository:

`git clone https://github.gatech.edu/omscs6601/assignment_5.git`

Please use the same environment as in previous assignments by running:

```
conda activate ai_env
```

#### Jupyter Notebook:

You will be using **Jupyter Notebook** to complete this assignment. To open the notebook, navigate to your assignment folder, activate your environment if you are using one, and run `jupyter notebook`. The project description and all of the functions you are required to implement can be found in the `solution.ipynb` file.

**ATTENTION:** You are free to add additional cells for debugging your implementation; however, please don't write any inline code in the cells with function declarations. Only edit the section *inside* each function, which has comments like `# TODO: finish this function`.

## Grading

The grade you receive for the assignment will be distributed as follows:

1. k-Means Clustering (19 points)
2. Gaussian Mixture Model (48 points)
3. Model Performance Improvements (20 points)
4. Bayesian Information Criterion (12 points)
5. Return your name (1 point)

## Submission

The tests for the assignment are provided in `mixture_tests.py`. All the tests are already embedded into the respective IPython notebook cells, so they will run automatically whenever you run the cells with your code. Local tests are sufficient for verifying the correctness of your implementation; the tests on Gradescope will be similar to the ones provided here.

To get the submission file, make sure to save your notebook and run:

`python notebook2script.py submit`

Once the execution is complete, open the autogenerated `submit/submission.py` and verify that it contains all of the imports, functions, and classes you are required to implement. Only then proceed to [Gradescope](https://www.gradescope.com/) for submission.

In your Gradescope submission history, you can mark certain submissions as **Active**. Please ensure this is your best submission.

#### Do NOT erase the `#export` at the top of any cells, as it is used by `notebook2script.py` to extract cells for submission.

#### You will be allowed 3 submissions every 3 hours on Gradescope. Make sure you test everything before submitting. The code will be allowed to run for no more than 40 minutes per submission. For the code to run quickly, make sure to vectorize it (more on this in the notebook itself).

## Resources

1. Canvas lectures on Unsupervised Learning (Lesson 7)
2. The `gaussians.pdf` in the `read/` folder will introduce you to multivariate normal distributions.
3. A YouTube video by Alexander Ihler on multivariate EM algorithm details: [https://www.youtube.com/watch?v=qMTuMa86NzU](https://www.youtube.com/watch?v=qMTuMa86NzU)
4. The `em.pdf` chapter in the `read/` folder. This will be especially useful for Part 2 of the assignment.
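Since the notebook stresses vectorization, here is one illustrative pattern: the k-means assignment step written with NumPy broadcasting instead of Python loops. The function name and array shapes are assumptions for this sketch, not the notebook's required signatures.

```python
import numpy as np

def assign_clusters(X, centroids):
    """Assign each point in X (n, d) to its nearest centroid (k, d), vectorized."""
    # (n, 1, d) - (1, k, d) broadcasts to (n, k, d); summing over d gives an
    # (n, k) matrix of squared distances with no explicit Python loops.
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)  # (n,) index of the closest centroid per point
```

The same broadcasting idea carries over to the Gaussian mixture model's E-step, where per-component responsibilities can be computed for all points at once.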


# [SOLVED] CS 6601 Assignment 4 – Decision Trees and Multiclass Classification

## Overview

Machine Learning is a subfield of AI, and decision trees are a type of supervised machine learning. In supervised learning, an agent observes sample inputs and outputs and learns a function that maps input to output. The function is the hypothesis `y = f(x)`. To test the hypothesis, we give the agent a *test set* different from the training set. A hypothesis generalizes well if it correctly predicts the y value. If the value is *finite*, the problem is a *classification* problem; if it is a *real number*, it is considered a *regression* problem. When classification problems have exactly two values (+,-), it is Boolean classification. When there are more than two values, it is called multi-class classification. Decision trees are relatively simple but highly successful types of supervised learners. Decision trees take a vector of attribute values as input and return a decision. [Russell, Norvig, AIMA 3rd Ed. Chptr. 18]

## Submission and Due Date

The deliverable for the assignment is a completed **_submission.py_** uploaded to Gradescope.

* All functions must be completed in **_submission.py_** for full credit.

**Important**: Submissions to Gradescope are rate limited for this assignment. **You can submit two submissions every 60 minutes during the duration of the assignment.**

Since we want to see you innovate and imagine new ways to do this, we know this can also cause you to fail (spectacularly, in my case). For that reason you will be able to select your strongest submission to Gradescope: in your Gradescope submission history, you will be able to mark your best submission as 'Active'. This is the student's responsibility, not the faculty's.

### The Files

You are only required to edit and submit **_submission.py_**, but there are a number of important files:

1. **_submission.py_**: Where you will build your decision tree, confusion matrix, performance metrics, forests, and do the vectorization warm-up.
2. **_decision_trees_submission_tests.py_**: Sample tests to validate your trees, learning, and vectorization locally.
3. **_visualize_tree.ipynb_**: Helper notebook to help you understand decision trees of various sizes and complexity.
4. **_unit_testing.ipynb_**: Helper notebook to run through tests sequentially along with the readme.

### Resources

* Canvas *Thad's Videos*: [Lesson 7, Machine Learning](https://gatech.instructure.com/courses/225196/modules/items/2197076)
* Textbook: Artificial Intelligence: A Modern Approach
  * Chapter 18: Learning from Examples
  * Chapter 20: Learning Probabilistic Models
* [Cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics))
* [K-Fold Cross-validation](http://statweb.stanford.edu/~tibs/sta306bfiles/cvwrong.pdf)

### Decision Tree Datasets

1. **_hand_binary.csv_**: 4 features, 8 examples, binary classification (last column)
2. **_hand_multi.csv_**: 4 features, 12 examples, 3 classes, multi-class classification (last column)
3. **_simple_binary.csv_**: 5 features, 100 examples, binary classification (last column)
4. **_simple_multi.csv_**: 6 features, 100 examples, 3 classes, multi-class classification (last column)
5. **_mod_complex_binary.csv_**: 7 features, 1600 examples, binary classification (last column)
6. **_mod_complex_multi.csv_**: 10 features, 2400 examples, 5 classes, multi-class classification (last column)
7. **_complex_binary.csv_**: 10 features, 2800 examples, binary classification (last column)
8. **_complex_multi.csv_**: 16 features, 4800 examples, 9 classes, multi-class classification (last column)
9. **_part23_data.csv_**: 4 features, 1372 examples, binary classification (last column)
   * Not provided, but will have less class separation and more centroids per class.

Complex sets given for development:

* **_challenge_binary.csv_**: 10 features, 5400 examples, binary classification (last column)
* **_challenge_multi.csv_**: 16 features, 10800 examples, 9 classes, multi-class classification (last column)

#### NOTE: path to the datasets: `'./data/your_file_name.csv'`

#### Warmup Data

**_vectorize.csv_**: data used during the vectorization warm-up for Assignment 4

#### Imports

**NOTE:** We are only allowing four imports: numpy, math, collections.Counter, and time. We will be checking to see if any other libraries are used. You are not allowed to use any outside libraries, especially for Part 4 (challenge). Please remember that you should not add or change any input parameters other than in Part 4.

#### Rounding

**NOTE:** Although your local tests will have some rounding, it is meant to quickly test your work. Overall, this assignment follows the CS 6601 norm of rounding to 6 digits. If in doubt, use `round`:

```
x = 0.12345678
round(x, 6)
Out[4]: 0.123457
```

---

### Part 0: Vectorization! _[10 pts]_

* File to use to benchmark tests: **_vectorize.csv_**

The vectorization portion of this assignment will teach you how to use matrices to significantly increase the speed and decrease the processing complexity of your AI problems. Matrix operations, linear algebra, and vector space axioms and operations will be challenging to learn. Learn this, and it will benefit you throughout OMSCS courses and your career.

You will not be able to meet the time requirements, nor process large datasets, without vectorization. Whether one is training a deep neural network on millions of images, building random forests for a U.S. National Insect Collection dataset, or optimizing algorithms, machine learning requires _extensive_ use of vectorization. The NumPy open source project provides the **numpy** Python scientific computing package, built on low-level optimizations written in C. If you find the libraries useful, consider contributing to the body of knowledge (e.g. help update and advance NumPy).

This small section will introduce you to vectorization, one of the high-demand technologies in Python. We encourage you to use any numpy function to complete the functions in the warm-up section. You will need to beat the non-vectorized code to get full points.

TAs will offer little help on this section. It was created to help get you ready for this and other assignments; feel free to ask other students on Ed Discussion or use training resources (e.g. https://numpy.org/learn/).

How grading works:

1. We run your vectorized code against non-vectorized code 500 times; as long as you have the correct answer and your average time is significantly less than the average time of the non-vectorized code, you will get the points.

#### Functions to complete in the `Vectorization` class:

1. `vectorized_loops()`
2. `vectorized_slice()`
3. `vectorized_flatten()`
4. `vectorized_glue()`
5. `vectorized_mask()`

---

## The Assignment

[Creative Commons sourced][cc] E. Thadeus Starner5th is the 5th incarnation of the great innovator and legendary pioneer of Starner Eradicatus Mosquitoes. For centuries the mosquito has imparted only harm on human health, aiding in the transmission of malaria, dengue, Zika, chikungunya, CoVid, and countless other diseases that impact millions of people and animals every year. The Starner Eradicatus *Anopheles stephensi* laser zapper has obtained the highest levels of precision, recall, and accuracy in the industry!

[Creative Commons sourced][cc] The secret is the classification engine, which has compiled an unmatched library of classification data collected from 153 countries. Flying insects from the tiny *Dicopomorpha echmepterygis* (parasitic wasp) to the giant *Titanus giganteus* (titan beetle) are carefully catalogued in a comprehensive digital record and indexed to support fast and correct classification.
This painstaking attention to detail was ordered by A. Thadeus1st to address a tumultuous backlash from the International Pollinators Association over high mortality among beneficial pollinators.

[Creative Commons sourced][cc] E. Thadeus' close friend Skeeter Norvig, a former CMAO (Chief Mosquito Algorithm Officer) and pollinator advocate, has approached E.T. with an idea. The agriculture industry has been experiencing terrible losses worldwide due to *Diaphorina citri* (Asian citrus psyllid), *Drosophila suzukii* (spotted wing Drosophila), and *Bactrocera tryoni* (Queensland fruit fly).

[Creative Commons sourced][cc] Skeeter explains his idea to E.T.: generalize the Starner Eradicatus zapper to handle a variety of these pests. Wonderful! E.T. exclaims, and becomes wildly excited at the opportunity to bring such an important benefit to the world.

[Creative Commons sources below][cc] The wheels of invention lit up the research Scrum that morning as E.T. and Skeeter storyboarded the solution. People were calling out all the adjustments: wing acoustics, laser power and duration, xyz positioning, angular velocity and acceleration calculations, speed, occlusion noise, and tracking errors. You, as the lead DT software engineer, are taking it all in when you realize and speak up… sir… Sir… SIR… and a hush falls. Sir, we are doing Boolean classification and will need to refactor to multi-class classification. E.T. turns to you and, with that look in his eye, gives you and your team two weeks to deliver multi-class classification!

You will build, train, and test decision tree models to perform multi-class classification tasks. You will learn how decision trees and random forests work. This will help you develop an intuition for how and why accuracy differs for training and testing data based on different parameters.

### Assignment Introduction

For this assignment we need an explicit way to make structured decisions. The `DecisionNode` class will be used to represent a decision node, some atomic choice in a multi-class decision graph. You must use this implementation for the nodes of the decision tree for this assignment to pass the tests and receive credit.

An object of type `DecisionNode` can represent:

* a decision node
  * *left*: points to values less than or equal to the split value (True evaluations), type DecisionNode
  * *right*: points to values greater than the split value (False evaluations), type DecisionNode
  * *decision_function*: evaluates an attribute's value and maps each vector to a descendant
  * *class_label*: None
* a leaf node
  * *left*: None
  * *right*: None
  * *decision_function*: None
  * *class_label*: the leaf node's class value

Note that in this representation 'True' values for a decision take us to the left.

---

### Part 1: Building a Binary Tree by Hand

#### Part 1a: Build a Tree _[10 Pts]_

In `build_decision_tree()`, construct a decision tree capable of predicting the class (column y) of each example. Using the columns A0-A3, build the decision tree and nodes in Python to classify the data with 100% accuracy. Your tests should use as few attributes as possible. Break ties between equally good attributes by selecting the one that classifies the greatest number of examples correctly. For ties in number of attributes and correct classifications, use the lower index number (e.g. select **A1** over **A2**).

| X | A0 | A1 | A2 | A3 | y |
| --- | --- | --- | --- | --- | --- |
| x01 | 1.1125 | -0.0274 | -0.0234 | 1.3081 | 1 |
| x02 | 0.0852 | 1.2190 | -0.7848 | -0.7603 | 2 |
| x03 | -1.1357 | 0.5843 | -0.3195 | 0.8563 | 0 |
| x04 | 0.9767 | 0.8422 | 0.2276 | 0.1197 | 1 |
| x05 | 0.8904 | -1.7606 | 0.3619 | -0.8276 | 0 |
| x06 | 2.3822 | -0.3122 | -2.0307 | -0.5065 | 2 |
| x07 | 0.7194 | -0.4061 | -0.7045 | -0.0731 | 2 |
| x08 | -2.9350 | 0.7810 | -2.5421 | 3.0142 | 0 |
| x09 | 2.4343 | -1.5380 | -2.7953 | 0.3862 | 2 |
| x10 | 0.8096 | -0.2601 | 0.5556 | 0.6288 | 1 |
| x11 | 0.8577 | -0.2217 | -0.6973 | -0.1095 | 1 |
| x12 | 0.0568 | 0.0696 | 1.1153 | -1.1753 | 0 |

#### Requirements:

The total number of elements (nodes, leaves) in your tree should be < 10.

#### Hints:

To get started, it might help to **draw out the tree by hand** with each attribute representing a node. To create a decision function for `DecisionNode`, you are allowed to use Python lambda expressions:

```
func = lambda feature : feature[2] <= …
```

* average test accuracy over 10 rounds should be >= 60%
* 25 pts: average test accuracy over 10 rounds should be >= 75%

Meanwhile, back in the lab… As the size of our flying training set grows, it rapidly becomes impractical to build multiclass trees by hand. We need to add a class with member functions to manage this; it is too much!

To-do list:

* Initialize the class with useful variables and assignments
* Fill out the member function that will fit the data to the tree, using build
* Fill out the build function
* Fill out the classify function

For starters, consider these helpful hints for the construction of a decision tree from a given set of examples:

1. Watch your base cases:
   1. If all input vectors have the same class, return a leaf node with the appropriate class label.
   2. If a specified depth limit is reached, return a leaf labeled with the most frequent class.
   3. Splits producing vectors of length 0 or 1
   4. Splits producing less or equivalent information
   5. Division by zero
2. Use the `DecisionNode` class.
3. For each attribute alpha: evaluate the information gained by splitting on the attribute `alpha`.
4. Let `alpha_best` be the attribute value with the highest information gain.
5. As you progress in this assignment, your code will be tested against larger and more complex datasets; think about how that will affect your identification and selection of values to test.
6. Create a decision node that splits on `alpha_best`, and split the data and classes by this value.
7. When splitting a dataset and classes, they must stay synchronized; do not orphan or shift the indexes independently.
8. Use recursion to build your tree from the split lists; remember, true goes left when using decide.
9. Your features and classify should be in numpy arrays, where for a dataset of size (_m_ x _n_) the features would be (_m_ x _n_-1) and classify would be (_m_ x 1).
10. The features are real numbers; you will need to split based on a threshold. Consider different approaches for what this threshold might be.

First, implement the above algorithm in the `DecisionTree.__build_tree__()` method. Next, in `DecisionTree.classify()`, write a function to produce classifications for a list of features once your decision tree has been built.

How grading works:

1. We load **_mod_complex_multi.csv_** and create our cross-validation training and test set with k=10 folds, using our own `generate_k_folds()` method.
2. We fit the (folded) training data onto the tree, then classify the (folded) testing data with the tree.
3. We check the accuracy of your results versus the true results, and return the average over the k=10 iterations.

#### Functions to complete in the `DecisionTree` class:

1. `__build_tree__()`
2. `fit()`
3. `classify()`

---

#### Part 2c: Validation _[7 pts]_

* File to use: **_part23_data.csv_**
* Allowed: numpy, collections.Counter, and math.log
* Grading: average test accuracy over 10 rounds should be >= 80%

Reserving part of your data as a test set can lead to unpredictable performance, since you may not have a complete or proportionately balanced representation of the class labels. We can overcome this limitation by using k-fold cross-validation. There are many benefits to using cross-validation; see the book, 3rd Ed., 18.4, "Evaluating and Choosing the Best Hypothesis".

In `generate_k_folds()`, we'll split the dataset at random into k equal subsections. Then, iterating over each of our k samples, we'll reserve that sample for testing and use the other k-1 for training. Averaging the results of each fold should give us a more consistent idea of how the classifier is doing across the data as a whole. For those who are not familiar with k-fold cross-validation, please refer to this tutorial: [A Gentle Introduction to k-fold Cross-Validation](https://machinelearningmastery.com/k-fold-cross-validation/).

How grading works:

1. The same as 2b, except we use your `generate_k_folds()` instead of ours.

#### Functions to complete in the `submission` module:

1. `generate_k_folds()`

---

### Part 3: Random Forests _[25 pts]_

* Files to use: **_mod_complex_binary.csv_**, **_mod_complex_multi.csv_**
* Allowed: numpy, collections.Counter, and math.log
* Allowed to write additional functions to improve your score
* Allowed to switch to entropy and splitting entropy
* Grading:
  * 15 pts: average test accuracy over 10 rounds should be >= 50%
  * 20 pts: average test accuracy over 10 rounds should be >= 70%
  * 25 pts: average test accuracy over 10 rounds should be >= 80%

Decision boundaries drawn by decision trees are very sharp, and fitting a decision tree of unbounded depth to a set of training examples almost inevitably leads to overfitting.
In an attempt to decrease the variance of your classifier you are going to use a technique called ‘Bootstrap Aggregating’ (often abbreviated as ‘bagging’). Decision stumps are very short decision trees used in Ensemble classification such as Random Forests * They are usually short (depth limited) * They use smaller (but more of them) random datasets for training * They use a subset of attributes drawn randomly from the training set * They fit the tree to the sampled dataset and are considered specialized to the set * Advanced learners may use weighting of their classifier trees to improve performance * They use majority voting (every tree in the forest votes) to classify a sampleA Random Forest is a collection of decision trees, built as follows: 1. For every tree we’re going to build: 1. Subsample the examples provided us (with replacement) in accordance with a provided example subsampling rate. 2. From the samples, choose attributes (without replacement) at random to learn on (as specified in attribute subsampling rate) 3. Fit a decision tree to the subsample of data we’ve chosen (to a limited depth).When sampling attributes you choose from the entire set of attributes for every sample drawn (but specify the sampling method does not use replacement)Complete `RandomForest.fit()` to fit the decision tree as we describe above, and fill in `RandomForest.classify()` to classify a given list of examples. You can use your decision tree implementation or create another.Your features and classify should use numpy arrays datasets of (_m_ x _n_) features of (_m_ x _n_-1) and classify of (_n_ x _1_).To test, we will be using a forest with 80 trees, with a depth limit of 5, example subsample rate of 0.3 and attribute subsample rate of 0.3How grading works: 1. Similar to 2b but with the call to Random Forest.#### Functions to complete in the `RandomForest` class: 1. `fit()` 2. 
`classify()`

#### Part 4 (Optional): Boosting Competition Challenge (Extra Credit)

#### Let the games begin!

#### THIS WILL REQUIRE A SEPARATE SUBMISSION

* Files to use: **_complex_binary.csv, complex_multi.csv_**
* Allowed use of numpy, collections.Counter, and math.log
* Allowed to write additional functions to improve your score
* Allowed to switch to Entropy and splitting entropy
* Ranked by Balanced Accuracy Weighted, Precision, and Recall
* Ties broken by efficiency (speed)
* Extra Credit Points towards your final grade:
  * 5 pts: 1st place algorithm test over 10 rounds
  * 4 pts: 2nd place algorithm test over 10 rounds
  * 3 pts: 3rd place algorithm test over 10 rounds
  * 2 pts: 4th place algorithm test over 10 rounds
  * 1 pt: 5th place algorithm test over 10 rounds

Decision boundaries drawn by decision trees are very sharp, and fitting a decision tree of unbounded depth to a set of training examples almost inevitably leads to overfitting. In an attempt to decrease the variance of your classifier, you are going to use a technique called 'Boosting', implementing one of the boosting algorithms, such as AdaBoost, gradient boosting, XGBoost, or your personal favorite. As in Random Forests, decision stumps are the short decision trees used in these ensemble classification methods:

* They are usually short (depth limited)
* They use smaller (but more numerous) random datasets for training, with sampling bias
* They use a subset of attributes sampled from the training set
* They fit the tree to the sampled dataset and are considered specialized to that set
* They use weighting of their sampling and classifiers to reflect the balance or imbalance of the data
* They use majority voting (every tree in the forest votes) to classify a sample

AdaBoost algorithm example [Zhu, et al.]: N samples, M classifiers, W weights, C classifications, K classes, I indicator function.

Initialize the observation weights w_i = 1/n, i = 1, 2, ..., n.
For m = 1 to M:
  1. Fit a classifier T^(m)(x) to the training data using weights w_i.
  2. Compute err^(m) = Sum(i=1..n) w_i * I(c_i != T^(m)(x_i)) / Sum(i=1..n) w_i.
  3. Compute alpha^(m) = log((1 - err^(m)) / err^(m)) + log(K - 1).
  4. Set w_i <- w_i * exp(alpha^(m) * I(c_i != T^(m)(x_i))), i = 1, ..., n.
  5. Re-normalize w_i.

Output C(x) = argmax_k Sum(m=1..M) alpha^(m) * I(T^(m)(x) = k).

[Multi-class AdaBoost](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.158.4221&rep=rep1&type=pdf) Zhu, Ji & Rosset, Saharon & Zou, Hui & Hastie, Trevor. (2006). Multi-class AdaBoost. Statistics and Its Interface. 2. 10.4310/SII.2009.v2.n3.a8.

When sampling attributes, you choose from the entire set of attributes without replacement, based on the weighted distribution. Notice that you favor (bias towards) misclassified samples, which improves your overall accuracy. Visualize how the short trees balance classification bias.

Complete `ChallengeClassifier.fit()` to fit the decision trees as described above, and fill in `ChallengeClassifier.classify()` to classify examples. Use your decision tree implementation as your classifier or create another.

Your `fit` and `classify` methods should use numpy arrays: datasets of (_m_ x _n_), features of (_m_ x (_n_-1)), and classes of (_m_ x 1).

How grading works: To test, we will be running 10 rounds, using your boosting with 200 trees, a depth limit of 3, an example subsample rate of 0.1, and an attribute subsample rate of 0.2. You will have a time limit.

#### Functions to complete in the `ChallengeClassifier` class:
1. `init()`
2. `fit()`
3. `classify()`

### Part 5: Return Your Name! _[1 pts]_

Return your name from the function `return_your_name()`.

### Helper Notebook

#### Note
You do not need to implement anything in this notebook. This part is not graded, so you can skip it if you wish; it is just for your understanding. It will help you visualize decision trees on the dataset provided to you. The notebook Visualize_tree.ipynb can be used to visualize trees on the datasets.
Things you can observe:
1. How are the values split?
2. What is the gini value at leaf nodes?
3. What do internal nodes represent in this decision tree?
4. Why are all leaf nodes not at the same depth?

Feel free to change and experiment with this notebook. You can also use information gain instead of gini to see how the decision tree is built based on that.

### Video and picture attribution [cc]

All GT OMSCS materials are copyrighted and retain rights prohibiting redistribution or alteration. Creative Commons share-and-share-alike sources were used for the videos:

[[File:Lucanus Cervus fighting.webm|Lucanus_Cervus_fighting]] Nicolai Urbaniak, CC0, via Wikimedia Commons
https://commons.wikimedia.org/wiki/File:2013-06-07_13-41-29-insecta-anim.ogv Thomas Bresson, CC BY 3.0, via Wikimedia Commons
https://commons.wikimedia.org/wiki/File:Traumatic-insemination-and-female-counter-adaptation-in-Strepsiptera-(Insecta)-srep25052-s2.ogv Peinert M, Wipfler B, Jetschke G, Kleinteich T, Gorb S, Beutel R, Pohl H, CC BY 4.0, via Wikimedia Commons
https://commons.wikimedia.org/wiki/File:Broscus_cephalotes.webm О.Н.П., CC BY-SA 4.0, via Wikimedia Commons
https://commons.wikimedia.org/wiki/File:Termites_walking_on_the_floor_of_Eastern_Himalyan_rainforest.webm Jenis Patel, CC BY-SA 4.0, via Wikimedia Commons
https://commons.wikimedia.org/wiki/File:The_carrot_caterpillar_-.webm Contributor Names: Pathé frères (France); Pathé Frères (U.S.); AFI/Nichol (Donald) Collection (Library of Congress). Created/Published: United States: Pathé Frères, 1911. Public domain, via Wikimedia Commons
https://commons.wikimedia.org/wiki/File:Gaeana_calling.webm Shyamal, CC BY-SA 3.0, via Wikimedia Commons
https://commons.wikimedia.org/wiki/File:Kluse_-_Tenebrio_molitor_larvae_eating_iceberg_lettuce_leaf_v_02_ies.webm Frank Vincentz, CC BY-SA 3.0, via Wikimedia Commons
https://commons.wikimedia.org/wiki/File:Lispe_tentaculata_male_-_2012-05-31.ogv Pristurus, CC BY-SA 3.0, via Wikimedia Commons
https://commons.wikimedia.org/wiki/File:Xylotrupes_socrates_(Siamese_rhinoceros_beetle)_behavior.webm Basile Morin, CC BY-SA 4.0 , via Wikimedia Commons laser – https://commons.wikimedia.org/wiki/File:Mosquito_dosing_by_laser_2.webm – https://commons.wikimedia.org/wiki/File:Mosquito_dosing_by_laser_3.webm – https://commons.wikimedia.org/wiki/File:Mosquito_dosing_by_laser_4.webm Matthew D. Keller et al., CC BY 4.0 , via Wikimedia Commons https://commons.wikimedia.org/wiki/File:Laser-induced-mortality-of-Anopheles-stephensi-mosquitoes-srep20936-s5.ogv Keller M, Leahy D, Norton B, Johanson T, Mullen E, Marvit M, Makagon A, CC BY 4.0 , via Wikimedia Commons https://commons.wikimedia.org/wiki/File:Predicting-Ancestral-Segmentation-Phenotypes-from-Drosophila-to-Anopheles-Using-In-Silico-Evolution-pgen.1006052.s001.ogv Rothschild J, Tsimiklis P, Siggia E, François P, CC BY 4.0 , via Wikimedia Commons https://commons.wikimedia.org/wiki/File:Focused_Laguerre-Gaussian_beam.webm Jack Kingsley-Smith, CC BY-SA 4.0 , via Wikimedia Commons https://commons.wikimedia.org/wiki/File:President_Reagan%27s_Remarks_at_Bowling_Green_State_University,_September_26,_1984.webm Reagan Library, CC BY 3.0 , via Wikimedia Commons https://commons.wikimedia.org/wiki/File:NID_Participants_Preparing_Their_Project_-_Workshop_On_Design_And_Development_Of_Digital_Experiencing_Exhibits_-_NCSM_-_Kolkata_2018-08-09_3141.ogv Biswarup Ganguly, CC BY-SA 4.0 , via Wikimedia Commons https://commons.wikimedia.org/wiki/File:Davos_2017_-_An_Insight,_An_Idea_with_Sergey_Brin.webm World Economic Forum, CC BY 3.0 , via Wikimedia Commons Creative Commons Attribution-Share Alike 4.0
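Returning to Part 4: the SAMME weight-update loop quoted there from Zhu et al. can be sketched in plain Python. The helper name and calling convention below are illustrative (the `ChallengeClassifier` interface is up to you), and the sketch assumes the weighted error is strictly between 0 and 1.

```python
import math

def samme_round(weights, y_true, y_pred, K):
    """One SAMME boosting round (Zhu et al.): given the current example
    weights, the true class labels, and this round's predictions, return
    the classifier weight alpha and the re-normalized example weights.
    Assumes 0 < weighted error < 1."""
    miss = [1.0 if t != p else 0.0 for t, p in zip(y_true, y_pred)]
    err = sum(w * m for w, m in zip(weights, miss)) / sum(weights)
    alpha = math.log((1.0 - err) / err) + math.log(K - 1)
    new_w = [w * math.exp(alpha * m) for w, m in zip(weights, miss)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]

# One round on four examples with one mistake: err = 0.25, so
# alpha = log(3) + log(2) and the misclassified example's weight grows.
alpha, w = samme_round([0.25] * 4, y_true=[0, 1, 2, 0], y_pred=[0, 1, 1, 0], K=3)
print(alpha, w)
```

Note that for K = 2 the `log(K - 1)` term vanishes and this reduces to classic two-class AdaBoost.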


[SOLVED] CS6601 Assignment 3: Bayes Nets

In this assignment, you will work with probabilistic models known as Bayesian networks to efficiently calculate the answer to probability questions concerning discrete random variables.

### Resources

You will find the following resources helpful for this assignment.

*Canvas Videos:* Lecture 5 on Probability; Lecture 6 on Bayes Nets

*Textbook:* Chapter 13: Quantifying Uncertainty; Chapter 14: Probabilistic Reasoning

*Others:* [Markov Chain Monte Carlo](https://github.gatech.edu/omscs6601/assignment_3/blob/master/resources/LESSON1_Notes_MCMC.pdf), [Gibbs Sampling](http://gandalf.psych.umn.edu/users/schrater/schrater_lab/courses/AI2/gibbs.pdf), [Metropolis Hastings Sampling – 1](https://github.gatech.edu/omscs6601/assignment_3/blob/master/resources/mh%20sampling.pdf)

### Setup

1. Clone the project repository from Github:

       git clone https://github.gatech.edu/omscs6601/assignment_3.git

2. Navigate to the `assignment_3/` directory.

3. Activate the environment you created during Assignment 0:

       conda activate ai_env

   In case you used a different environment name, you can run `conda env list` to list all environments on your machine.

4. Run the following commands in the command line to install and update the required packages:

       pip install torch===1.4.0 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html
       pip install --upgrade -r requirements.txt

### Submission

Please include all of your own code for submission in `submission.py`.

**Important: There is a TOTAL submission limit of 5 on Gradescope for this assignment. This means you can submit a maximum of 5 times during the duration of the assignment. Please use your submissions carefully and do not submit until you have thoroughly tested your code locally.**

**If you're at 4 submissions, use your fifth and last submission wisely.
The submission marked as ‘Active’ in Gradescope will be the submission counted towards your grade.**### RestrictionsYou are not allowed to use following set of modules from ‘pgmpy’ Library.>- pgmpy.sampling.* >- pgmpy.factor.* >- pgmpy.estimators.*## Part 1 Bayesian network tutorial:_[35 points total]_To start, design a basic probabilistic model for the following system:There’s a nuclear power plant in which an alarm is supposed to ring when the gauge reading exceeds a fixed threshold. The gauge reading is based on the actual temperature, and for simplicity, we assume that the temperature is represented as either high or normal. However, the alarm is sometimes faulty. The temperature gauge can also fail, with the chance of failing greater when the temperature is high.You will test your implementation at the end of each section.### 1a: Casting the net_[10 points]_Assume that the following statements about the system are true: > 1. The temperature gauge reads the correct temperature with 95% probability when it is not faulty and 20% probability when it is faulty. For simplicity, say that the gauge’s “true” value corresponds with its “hot” reading and “false” with its “normal” reading, so the gauge would have a 95% chance of returning “true” when the temperature is hot and it is not faulty. > 2. The alarm is faulty 15% of the time. > 3. The temperature is hot (call this “true”) 20% of the time. > 4. When the temperature is hot, the gauge is faulty 80% of the time. Otherwise, the gauge is faulty 5% of the time. > 5. The alarm responds correctly to the gauge 55% of the time when the alarm is faulty, and it responds correctly to the gauge 90% of the time when the alarm is not faulty. For instance, when it is faulty, the alarm sounds 55% of the time that the gauge is “hot” and remains silent 55% of the time that the gauge is “normal.”Use the following name attributes:>- “alarm” node: Represents the probability that an alarm system will be going off or not. 
>- "faulty alarm" node: Represents the probability that the alarm system is broken or not
>- "gauge" node: Represents the probability that the gauge will show either "above the threshold" or "below the threshold" (high = True, normal = False)
>- "faulty gauge" node: Represents the probability that the gauge is broken
>- "temperature" node: Represents the probability that the temperature is HOT or NOT HOT (high = True, normal = False)

Use the description of the model above to design a Bayesian network for this model. The `pgmpy` package is used to represent nodes and conditional probability arcs connecting nodes. Don't worry about the probabilities for now. Use the functions below to create the net. You will write your code in `submission.py`.

Fill in the function `make_power_plant_net()`.

The following commands will create a BayesNet instance and add a node with the name "alarm":

    BayesNet = BayesianModel()
    BayesNet.add_node("alarm")

You will use `BayesNet.add_edge()` to connect nodes. For example, to connect the alarm and temperature nodes that you've already made (i.e.
assuming that temperature affects the alarm probability), use the function `BayesNet.add_edge(<parent>, <child>)`:

    BayesNet.add_edge("temperature", "alarm")

After you have implemented `make_power_plant_net()`, you can run the following test in the command line to make sure your network is set up correctly.

    python probability_tests.py ProbabilityTests.test_network_setup

### 1b: Setting the probabilities _[15 points]_

Now set the conditional probabilities for the necessary variables on the network you just built.

Fill in the function `set_probability()`.

Using `pgmpy`'s `factors.discrete.TabularCPD` class: if you wanted to set the distribution for node 'A' with two possible values, where P(A=true) = 0.7 and P(A=false) = 0.3, you would invoke the following command:

    cpd_a = TabularCPD('A', 2, values=[[0.3], [0.7]])

**NOTE: Use index 0 to represent FALSE and index 1 to represent TRUE, or you may run into testing issues.**

If you wanted to set the distribution for P(A|G) to be

| G | P(A=true given G) |
| --- | --- |
| T | 0.75 |
| F | 0.85 |

you would invoke:

    cpd_ag = TabularCPD('A', 2, values=[[0.15, 0.25], [0.85, 0.75]], evidence=['G'], evidence_card=[2])

**Reference** for the function: https://pgmpy.org/_modules/pgmpy/factors/discrete/CPD.html

Modeling a three-variable relationship is a bit trickier.
If you wanted to set the following distribution for P(A|G,T) to be

| G | T | P(A=true given G and T) |
| --- | --- | :---: |
| T | T | 0.15 |
| T | F | 0.6 |
| F | T | 0.2 |
| F | F | 0.1 |

you would invoke:

    cpd_agt = TabularCPD('A', 2, values=[[0.9, 0.8, 0.4, 0.85], [0.1, 0.2, 0.6, 0.15]], evidence=['G', 'T'], evidence_card=[2, 2])

The key is to remember that the first row represents P(A=false) and the second row represents P(A=true).

Add the tabular conditional probability distributions to the Bayesian model instance with the following command:

    bayes_net.add_cpds(cpd_a, cpd_ag, cpd_agt)

You can check your probability distributions in the command line with

    python probability_tests.py ProbabilityTests.test_probability_setup

### 1c: Probability calculations: Perform inference _[10 points]_

To finish up, you're going to perform inference on the network to calculate the following probabilities:

> - the marginal probability that the alarm sounds
> - the marginal probability that the gauge shows "hot"
> - the probability that the temperature is actually hot, given that the alarm sounds and the alarm and gauge are both working

You'll fill out the "get_prob" functions to calculate the probabilities:

- `get_alarm_prob()`
- `get_gauge_prob()`
- `get_temperature_prob()`

Here's an example of how to do inference for the marginal probability of the "faulty alarm" node being True (assuming `bayes_net` is your network):

    solver = VariableElimination(bayes_net)
    marginal_prob = solver.query(variables=['faulty alarm'], joint=False)
    prob = marginal_prob['faulty alarm'].values

To compute a conditional probability, set the evidence variables before computing the marginal, as seen below (here we're computing P('A' = false | 'B' = true, 'C' = false)):

    solver = VariableElimination(bayes_net)
    conditional_prob = solver.query(variables=['A'], evidence={'B': 1, 'C': 0}, joint=False)
    prob = conditional_prob['A'].values

__NOTE__: `marginal_prob` and `conditional_prob` return two probabilities, corresponding to the `[False, True]`
case. You must index into the correct position in `prob` to obtain the particular probability value you are looking for.

If you need a sanity check to make sure you're doing inference correctly, you can run inference on one of the probabilities that we gave you in 1a. For instance, running inference on P(T=true) should return 0.20 (i.e. 20%). However, due to imprecision on some machines it could appear as 0.199xx. You can also calculate the answers by hand to double-check.

## Part 2: Sampling _[65 points total]_

For the main exercise, consider the following scenario.

There are three frisbee teams who play each other: the Airheads, the Buffoons, and the Clods (A, B and C for short). Each match is between two teams, and each team can either win, lose, or draw in a match. Each team has a fixed but unknown skill level, represented as an integer from 0 to 3. The outcome of each match is probabilistically proportional to the difference in skill level between the teams.

Sampling is a method for ESTIMATING a probability distribution when it is prohibitively expensive (even for inference!) to completely compute the distribution.

Here, we want to estimate the outcome of the matches, given prior knowledge of previous matches. Rather than using inference, we will do so by sampling the network using two [Markov Chain Monte Carlo](https://github.gatech.edu/omscs6601/assignment_3/blob/master/resources/LESSON1_Notes_MCMC.pdf) methods: Gibbs sampling (2c) and Metropolis-Hastings (2d).

### 2a: Build the network. _[10 points]_

For the first sub-part, consider a network with 3 teams: the Airheads, the Buffoons, and the Clods (A, B and C for short). 3 total matches are played. Build a Bayes Net to represent the three teams and their influences on the match outcomes.

Fill in the function `get_game_network()`.

Assume the following variable conventions:

| variable name | description |
| --- | :---: |
| A | A's skill level |
| B | B's skill level |
| C | C's skill level |
| AvB | the outcome of A vs. B (0 = A wins, 1 = B wins, 2 = tie) |
| BvC | the outcome of B vs. C (0 = B wins, 1 = C wins, 2 = tie) |
| CvA | the outcome of C vs. A (0 = C wins, 1 = A wins, 2 = tie) |

Use the following name attributes:

>- "A"
>- "B"
>- "C"
>- "AvB"
>- "BvC"
>- "CvA"

Assume that each team has the following prior distribution of skill levels:

| skill level | P(skill level) |
| --- | :---: |
| 0 | 0.15 |
| 1 | 0.45 |
| 2 | 0.30 |
| 3 | 0.10 |

In addition, assume that the differences in skill levels correspond to the following probabilities of winning:

| skill difference (T2 - T1) | T1 wins | T2 wins | Tie |
| --- | --- | --- | :---: |
| 0 | 0.10 | 0.10 | 0.80 |
| 1 | 0.20 | 0.60 | 0.20 |
| 2 | 0.15 | 0.75 | 0.10 |
| 3 | 0.05 | 0.90 | 0.05 |

You can check your network implementation in the command line with

    python probability_tests.py ProbabilityTests.test_games_network

### 2b: Calculate posterior distribution for the 3rd match. _[5 points]_

Suppose that you know the following outcome of two of the three games: A beats B and A draws with C. Calculate the posterior distribution for the outcome of the **BvC** match in `calculate_posterior()`.

Use the **VariableElimination** provided to perform inference.

You can check your posteriors in the command line with

    python probability_tests.py ProbabilityTests.test_posterior

**NOTE: In the following sections, we'll be arriving at the same values by using sampling.**

**NOTE: pgmpy's VariableElimination may sometimes produce incorrect posterior probability distributions. While it doesn't have an impact on this assignment, we discourage using it beyond the scope of this assignment.**

#### Hints regarding sampling for Parts 2c, 2d, and 2e

*Hint 1:* In both Metropolis-Hastings and Gibbs sampling, you'll need access to each node's probability distribution and nodes.
You can access these by calling:

    A_cpd = bayes_net.get_cpds('A')
    team_table = A_cpd.values
    AvB_cpd = bayes_net.get_cpds("AvB")
    match_table = AvB_cpd.values

*Hint 2:* While performing sampling, you will have to generate your initial sample by sampling an outcome uniformly at random for each non-evidence variable, while keeping the outcome of your evidence variables (`AvB` and `CvA`) fixed.

*Hint 3:* You'll also want to use the random package, e.g. `random.randint()` or `random.choice()`, for the probabilistic choices that sampling makes.

*Hint 4:* In order to count the sample states later on, you'll want to make sure the sample that you return is hashable. One way to do this is by returning the sample as a tuple.

### 2c: Gibbs sampling _[15 points]_

Implement the Gibbs sampling algorithm, which is a special case of Metropolis-Hastings. You'll do this in `Gibbs_sampler()`, which takes a Bayesian network and an initial state value as parameters and returns a sample state drawn from the network's distribution. In the case of Gibbs, the returned state differs from the input state at at most one variable (randomly chosen). The method should consist of just a single iteration of the algorithm.
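The single Gibbs iteration described above can be sketched on a toy two-node network. The node names (X, Y) and CPTs below are invented for illustration only; your `Gibbs_sampler()` works on the six-node game network with `AvB` and `CvA` as evidence.

```python
import random

# Toy two-node chain X -> Y with invented CPTs, standing in for the real
# six-node game network; names and tables here are illustrative only.
P_X = {0: 0.6, 1: 0.4}                           # P(X)
P_Y_given_X = {0: {0: 0.9, 1: 0.1},              # P(Y | X)
               1: {0: 0.2, 1: 0.8}}

def gibbs_step(state, evidence):
    """One Gibbs iteration: pick a single non-evidence variable at random
    and resample it conditioned on the rest of the state."""
    state = dict(state)
    var = random.choice([v for v in state if v not in evidence])
    if var == "X":
        # P(X=x | Y=y) is proportional to P(X=x) * P(Y=y | X=x)
        weights = [P_X[x] * P_Y_given_X[x][state["Y"]] for x in (0, 1)]
    else:
        # P(Y=y | X=x) can be read straight off the CPT
        weights = [P_Y_given_X[state["X"]][y] for y in (0, 1)]
    total = sum(weights)
    state[var] = random.choices((0, 1), weights=[w / total for w in weights])[0]
    return tuple(sorted(state.items()))          # hashable, per Hint 4

random.seed(0)
print(gibbs_step({"X": 0, "Y": 1}, evidence={"Y"}))
```

The key step is normalizing the products of the relevant CPT entries for the chosen variable; everything else in the state stays fixed.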
If an initial value is not given (the initial state is None or an empty list), default to a state chosen uniformly at random from the possible states.

Note: **DO NOT USE the given inference engines or `pgmpy` samplers to run the sampling method**, since the whole point of sampling is to calculate marginals without running inference.

"YOU WILL SCORE 0 POINTS ON THIS ASSIGNMENT IF YOU USE THE GIVEN INFERENCE ENGINES FOR THIS PART"

You may find [this](http://gandalf.psych.umn.edu/users/schrater/schrater_lab/courses/AI2/gibbs.pdf) helpful in understanding the basics of Gibbs sampling over Bayesian networks.

### 2d: Metropolis-Hastings sampling _[15 points]_

Now you will implement the independent Metropolis-Hastings sampling algorithm in `MH_sampler()`, which is another method for estimating a probability distribution. The general idea of MH is to build an approximation of a latent probability distribution by repeatedly generating a "candidate" value for each sample vector comprising the random variables in the system, and then probabilistically accepting or rejecting the candidate value based on an underlying acceptance function. Unlike Gibbs, in the case of MH the returned state can differ from the initial state at more than one variable. This [slide deck](https://github.gatech.edu/omscs6601/assignment_3/blob/master/resources/mh%20sampling.pdf) provides a nice intro.

This method should perform just a single iteration of the algorithm.
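The single MH iteration can likewise be sketched on a toy joint distribution. The names and CPTs below are invented for illustration; with a uniform independent proposal, the proposal densities in the MH acceptance ratio cancel, leaving min(1, p(candidate)/p(current)).

```python
import random

# Invented toy joint P(X, Y), standing in for the full network joint
# P(A, B, C, AvB, BvC, CvA); only the shape of the step matters here.
P_X = {0: 0.6, 1: 0.4}
P_Y_given_X = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}

def joint_prob(state):
    return P_X[state["X"]] * P_Y_given_X[state["X"]][state["Y"]]

def mh_step(state, evidence):
    """One Metropolis-Hastings iteration with a uniform independent
    proposal: draw a whole candidate state (evidence kept fixed), then
    accept with probability min(1, p(candidate) / p(current))."""
    candidate = {v: (state[v] if v in evidence else random.randint(0, 1))
                 for v in state}
    p_cur, p_cand = joint_prob(state), joint_prob(candidate)
    if p_cur == 0 or random.random() < min(1.0, p_cand / p_cur):
        return candidate
    return state

random.seed(1)
print(mh_step({"X": 0, "Y": 1}, evidence={"Y"}))
```

Rejected candidates return the unchanged current state, which still counts as one iteration's sample.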
If an initial value is not given, default to a state chosen uniformly at random from the possible states.Note: **DO NOT USE the given inference engines to run the sampling method**, since the whole point of sampling is to calculate marginals without running inference.“YOU WILL SCORE 0 POINTS IF YOU USE THE PROVIDED INFERENCE ENGINES, OR ANY OTHER SAMPLING METHOD”### 2e: Comparing sampling methods_[19 points]_Now we are ready for the moment of truth.Given the same outcomes as in 2b, A beats B and A draws with C, you should now estimate the likelihood of different outcomes for the third match by running Gibbs sampling until it converges to a stationary distribution. We’ll say that the sampler has converged when, for “N” successive iterations, the difference in expected outcome for the 3rd match differs from the previous estimated outcome by less than “delta”. `N` is a positive integer, `delta` goes from `(0,1)`. For the most stationary convergence, `delta` should be very small. `N` could typically take values like 10,20,…,100 or even more.Use the functions from 2c and 2d to measure how many iterations it takes for Gibbs and MH to converge to a stationary distribution over the posterior. See for yourself how close (or not) this stable distribution is to what the Inference Engine returned in 2b. And if not, try tuning those parameters(N and delta). (You might find the concept of “burn-in” period useful).You can choose any N and delta (with the bounds above), as long as the convergence criterion is eventually met. For the purpose of this assignment, we’d recommend using a delta approximately equal to 0.001 and N at least as big as 10.Repeat this experiment for Metropolis-Hastings sampling.Fill in the function `compare_sampling()` to perform your experimentsWhich algorithm converges more quickly? By approximately what factor? 
For instance, if Metropolis-Hastings takes twice as many iterations to converge as Gibbs sampling, you’d say that Gibbs converged faster by a factor of 2. Fill in `sampling_question()` to answer both parts.### 2f: Return your name_[1 point]_A simple task to wind down the assignment. Return your name from the function aptly called `return_your_name()`.
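One way to phrase the N/delta stopping rule from 2e in code (the helper name and the list-of-estimates representation are illustrative, not required):

```python
def converged(history, N=10, delta=0.001):
    """Stopping rule sketched in 2e: True once the running estimate has
    changed by less than `delta` for `N` successive iterations. `history`
    holds one scalar estimate (e.g. the estimated P(BvC = 0)) per
    sampling iteration."""
    if len(history) <= N:
        return False
    recent = history[-(N + 1):]
    return all(abs(recent[i + 1] - recent[i]) < delta for i in range(N))

# A trace that jumps early and then settles to within 1e-5 per step:
trace = [0.5, 0.3, 0.25] + [0.2501 + 1e-5 * i for i in range(12)]
print(converged(trace))  # prints True
```

In `compare_sampling()`, you would append a fresh estimate after every Gibbs (or MH) iteration and stop once this test passes, counting the iterations taken by each sampler.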


[SOLVED] CS6601 Assignment 2 – Skid Isolation

This assignment will cover some of the concepts discussed in the Adversarial Search lectures. You will be implementing game-playing agents for a variant of the game Isolation. We are also implementing this through Jupyter Notebook, so you may find it useful to spend some time getting familiar with this software. During the first week of classes, there was an assignment, [Assignment 0](https://github.gatech.edu/omscs6601/assignment_0/), that spends some time going through Python and Jupyter. If you are unfamiliar with either Python or Jupyter, please go through that assignment first!

### Table of Contents

- [Get repository](#repo)
- [Setup](#setup)
- [Jupyter](#jupyter)
- [Jupyter Tips](#jupyter-tips)
- [IDE](#IDE)

## Get repository

Pull this repository to your local machine:

    git clone https://github.gatech.edu/omscs6601/assignment_2.git

## Setup

Activate the environment:

    conda activate ai_env

In case you used a different environment name, you can run `conda env list` to list all environments on your machine.

Install the additional package that will be used for visualising the game board:

    pip install ipywidgets==7.5.0

## Jupyter

Further instructions are provided in `notebook.ipynb`. Run:

    jupyter notebook

Once started, you can access [http://localhost:8888](http://localhost:8888/) in your browser.

## Jupyter Tips

Hopefully, [Assignment 0](https://github.gatech.edu/omscs6601/assignment_0/) got you pretty comfortable with Jupyter, or at the very least addressed the major things that you may run into during this project. That said, Jupyter can take some getting used to, so here is a compilation of some things to watch out for specifically when it comes to Jupyter, in a sort-of FAQ style.

**1. My Jupyter notebook does not seem to be starting up or my kernel is not starting correctly.**
Ans: This probably has to do with activating virtual environments.
If you followed the setup instructions exactly, then you should activate your conda environment using `conda activate <environment_name>` from the Anaconda Prompt and start Jupyter Notebook from there.

**2. I was running cell xxx when I opened up my notebook again and something or the other seems to have broken.**
Ans: This is one thing that is very different between IDEs like PyCharm and Jupyter Notebook. In Jupyter, every time you open a notebook, you should run all the cells that a cell depends on before running that cell. This goes for cells that are out of order too (if cell 5 depends on values set in cells 4 and 6, you need to run 4 and 6 before 5). Using the "Run All" command and its variants (found in the "Cell" dropdown menu above) should help you when you're in a situation like this.

**3. The value of a variable in one of my cells is not what I expected it to be. What could have happened?**
Ans: You may have run a cell that modifies that variable too many times. Look at the "counter" example in Assignment 0. First, try running `counter = 0` and then `counter += 1`. This way, when you print counter, you get counter = 1, right? Now try running `counter += 1` again, and when you print the variable, you see a value of 2. This is similar to the issue from Question 2. The order in which you run the cells does affect the entire program, so be careful.

## IDE

In case you want to use an IDE (e.g. PyCharm) to implement your assignment in a `.py` file, please run:

    python helpers/notebook2script.py submission

You will get an autogenerated `submission/submission.py` file where you can write your code. However, make sure you have gone through the instructions in `notebook.ipynb` at least once.


[SOLVED] CS6601 Assignment 0 – Python/Jupyter/Gradescope

This assignment is designed to help you prepare your local python environment, introduce you to jupyter notebooks and provide a refresher on python language. After following this README you will have a python environment ready and will be able to proceed with learning about jupyter notebooks in the `notebook.ipynb` (where you will make your first graded submission!). Let’s get started!### Table of Contents – [Get repository](#repo) – [Conda](#conda) – [Environment](#env) – [Packages](#pkg) – [Jupyter](#jupyter) – [Summary](#summary) ## Get repositoryFirst things first, let’s pull this repository to your local machine:“` git clone https://github.gatech.edu/omscs6601/assignment_0.git “`Then come back to this README to continue with further setup. ## Instructions to create a private forked repository for assignmentsThe assignments you would be working on throughout this semester will potentially require multiple revisions. A good way to track these revisions is by using your own private repo to backup your assignments at various stages of completion. Please remember that your assignment repository should be private and only accessible to yourself so that you do not accidentally violate the OSI policy.You can use the following steps to create a private repository for assignment 0. Please replace the A0 url with the future assignments’ URL to repeat this for the future assignments.* Login to github.gatech.edu and create a private repo named : YOUR_REPO. 
Double check that the repo is private, otherwise you may violate the OSI policy.
* Get the class repo: `git clone --bare https://github.gatech.edu/omscs6601/assignment_0.git`
* Mirror this to your private repo:

      cd assignment_0.git
      git push --mirror https://github.gatech.edu/your_gatech_id/YOUR_REPO

* You can now delete the `assignment_0.git` directory cloned two steps ago if you wish.
* Now clone your private repo on your local system: `git clone https://github.gatech.edu/your_gatech_id/YOUR_REPO`
* Next:

      cd YOUR_REPO
      git remote add upstream https://github.gatech.edu/omscs6601/assignment_0.git

  You can check that the remote has been added using `git remote -v`.
* Now you can use it like this:

      git pull upstream master   # the original repo
      git push origin master     # your repo

  If you do not specify the remote, it will default to the origin (your repo).
* If you are scared of pushing to upstream, you can disable it using `git remote set-url --push upstream PUSH_DISABLED`.

## Conda

![Conda Logo](https://conda.io/en/latest/_images/conda_logo.svg)

Conda is an open source package and environment management system. Conda quickly installs, runs and updates packages/libraries and easily creates, saves, loads, and switches between environments on your local computer.

Please download [Miniconda](https://docs.conda.io/en/latest/miniconda.html) and install it on your local machine. Although we require Python 3.7 for this course, you should install the version of Miniconda for any Python 3 version (e.g. Python 3.x). You can override this default by specifying the python version as 3.7 when creating the environment you will be working in for this course. You can access conda via the console to make sure it's properly installed. For instance, you can run `conda -V` to display the version.

On Windows, to access `conda` via the console please use "Anaconda Prompt" or "Anaconda Powershell Prompt" instead of "Command Prompt".
## Environment

Environments are used to keep different python versions and packages isolated from each other; generally each project/application will have an independent python environment. For example, we will be using Python 3.7 and packages like numpy, networkx etc., and we want them to be isolated from any other python projects you might have.

To create a new environment, simply run:

    conda create --name ai_env python=3.7 -y

Once it's created, you can activate it by running:

    conda activate ai_env

The environment is not attached to any specific folder, and you can freely navigate to different directories while it's activated. If you want to change the environment, you can deactivate it using `conda deactivate` and then activate another env. To see the list of all environments you have on your machine, you can run `conda env list`.

## Packages

![Python Logo](https://www.python.org/static/community_logos/python-logo-master-v3-TM.png)

We will be using multiple python packages throughout this class. Here are some of them:

* **jupyter** – interactive notebook (you will learn more about them soon)
* **numpy** – a package for scientific computing (multi-dimensional array manipulation)
* **matplotlib** – a plotting library
* **networkx** – a package for manipulating networks/graphs
* **pandas** – a package for data analysis
* **pgmpy** – a library for probabilistic graphical models

You can see the complete list of packages and required versions in [./requirements.txt](./requirements.txt). We can install all these packages using the command `pip install -r requirements.txt`.
Please navigate to the `assignment_0/` directory, activate your environment (`conda activate ai_env`), then run:

```
pip install -r requirements.txt
```

Once installed, you can run `pip freeze` to see the list of all of the packages installed in your `ai_env` environment.

> **Note:** If you are on Windows, students in the past have commonly reported an error during package installation that resembles the error in this [Github post](https://github.com/pytorch/pytorch/issues/34798). To fix this issue, head over to the [PyTorch site](https://pytorch.org) and follow the instructions to install torch manually in `ai_env`. If this does not work, you may instead try running `conda install -c ankurankan pgmpy=0.1.10`. After one of these suggestions installs successfully, try `pip install -r requirements.txt` again.

## Jupyter

![Jupyter Logo](https://jupyter.org/assets/nav_logo.svg)

Now that you have set up the environment, it's time to learn more about Jupyter notebooks. We have already installed jupyter. To open it up, run:

```
jupyter notebook
```

It will start a Python kernel which you can access via [http://localhost:8888](http://localhost:8888/) in your browser. For the rest of the assignment, proceed to `notebook.ipynb`.

## Summary

You have now installed the conda package and environment manager, created a Python environment, and installed all the necessary packages. Please always remember to run:

```
conda activate ai_env
```

to activate your environment before you start working on your assignments.


[SOLVED] Assignment 3 Ajax JSON Responsive Design and Nodejs Java

Assignment 3: Ajax, JSON, Responsive Design and Node.js Stock Search (AJAX/JSON/HTML5/Bootstrap/Angular/Node.js/Cloud Exercise)

1. Objectives

● Get familiar with the AJAX and JSON technologies
● Use a combination of HTML5, Bootstrap and Angular on the client side
● Use Node.js on the server side
● Get familiar with Bootstrap to enhance the user experience using responsive design
● Get hands-on experience with Cloud services hosting Node.js/Express
● Learn to use popular APIs such as the Finnhub API, Polygon.io API and Highcharts API
● Learn how to manage and access a NoSQL DBMS like MongoDB Atlas, in the cloud

2. Background

2.1 AJAX and JSON

AJAX (Asynchronous JavaScript + XML) incorporates several technologies:

● Standards-based presentation using CSS
● Result display and interaction using the Document Object Model (DOM)
● Data interchange and manipulation using XML and JSON
● Asynchronous data retrieval using XMLHttpRequest
● JavaScript binding everything together

See the class slides on D2L Brightspace.

JSON, short for JavaScript Object Notation, is a lightweight data interchange format. Its main application is in AJAX web application programming, where it serves as an alternative to the XML format for data exchange between client and server. See the class slides on D2L Brightspace.

2.2 Bootstrap

Bootstrap is a free collection of tools for creating responsive websites and web applications. It contains HTML and CSS-based design templates for typography, forms, buttons, navigation, and other interface components, as well as optional JavaScript extensions. To learn more details about Bootstrap, please refer to the lecture material on Responsive Web Design (RWD). We recommend using Bootstrap 4.6 through 5.3, Angular 12 through 17, and ng-bootstrap 11 through 16 in this assignment. See the RWD class slides on D2L Brightspace for the list of dependencies between these various versions.
2.3 Cloud Services

2.3.1 Google App Engine (GAE)

Google App Engine applications are easy to create, easy to maintain, and easy to scale as your traffic and data storage needs change. With App Engine, there are no servers to maintain. You simply upload your application and it's ready to go. App Engine applications automatically scale based on incoming traffic. Load balancing, microservices, authorization, SQL and NoSQL databases, memcache, traffic splitting, logging, search, versioning, roll out and rollbacks, and security scanning are all supported natively and are highly customizable. To learn more about GAE support for Node.js visit this page: https://cloud.google.com/appengine/docs/standard/nodejs/

2.3.2 Amazon Web Services (AWS)

AWS is Amazon's implementation of cloud computing. Included in AWS is Amazon Elastic Compute Cloud (EC2), which delivers scalable, pay-as-you-go compute capacity in the cloud, and AWS Elastic Beanstalk, an even easier way to quickly deploy and manage applications in the AWS cloud. You simply upload your application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. Elastic Beanstalk is built using familiar software stacks such as the Apache HTTP Server, PHP, and Python; Passenger for Ruby; IIS for .NET; and Apache Tomcat for Java. To learn more about AWS support for Node.js visit this page: https://aws.amazon.com/getting-started/projects/deploy-nodejs-web-app/

2.3.3 Microsoft Azure

Microsoft Azure is a cloud computing service created by Microsoft for building, testing, deploying, and managing applications and services through a global network of Microsoft-managed data centers. It provides software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS) and supports many different programming languages, tools, and frameworks, including both Microsoft-specific and third-party software and systems.
To learn more about Azure support for Node.js visit this page: https://docs.microsoft.com/en-us/javascript/azure/?view=azure-node-latest

2.4 Angular

Angular is a toolset for building the framework most suited to your application development. It is fully extensible and works well with other libraries. Every feature can be modified or replaced to suit your unique development workflow and feature needs. Angular combines declarative templates, dependency injection, end-to-end tooling, and integrated best practices to solve development challenges. Angular empowers developers to build applications that live on the web, mobile, or the desktop. For this homework, Angular 12+ (Angular 12 through 17) can be used, but Angular 12 is recommended. Please note that Angular 12+ requires familiarity with TypeScript and component-based programming. To learn more about Angular, visit this page: https://angular.io/

2.5 Node.js

Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. The Node.js package ecosystem, npm, is the largest ecosystem of open-source libraries in the world. To learn more about Node.js, visit: https://nodejs.org/en/

Also, Express.js is strongly recommended. Express.js is a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile applications. It is in fact the standard server framework for Node.js. To learn more about Express.js, visit: http://expressjs.com/

Important Note: All API calls should be done through your Node.js server.

3. High-Level Description

In this exercise you will create a webpage that allows users to search for stocks using the Finnhub API and display the results on the search page. The application evolves from the previous homework.
A user will first open a page as shown below in Figure 1, where the user can enter a stock ticker symbol and select from a list of matching stock symbols using "autocomplete." A quote on a matched stock symbol can then be performed. The description of the Search Box is given in Section 3.1. Instructions on how to use the API are given in Section 4. All implementation details and requirements will be explained in the following sections.

There are 4 routes for this application:

a) Home Route ['/'], redirected to ['/search/home'] – the default route of this application.
b) Search Details Route ['/search/'] – shows the details of the searched stock.
c) Watchlist Route ['/watchlist'] – displays the watchlist of the user.
d) Portfolio Route ['/portfolio'] – displays the portfolio of the user.

When a user initially opens your web page, the initial search page should look like Figure 1.

Figure 1: Initial Search Page

3.1 Search Page / Homepage

3.1.1 Design

You must replicate the Search Bar displayed in Figure 1 using a Bootstrap form. The Search Bar contains three components:

1. Stock Ticker: a text box which enables the user to search for valid stocks by entering keywords and/or accepting a suggestion of all possible tickers. Notice the "helper" text inside the search box.
2. Search Button: the "Search" button (which uses the widely used search icon), when clicked, will read the value from the text box and send the request to the backend server. On a successful response, details for that stock will be displayed.
3. Clear Button: the "clear" (cross-marked) button will clear out the currently searched results page and show the initial search page.

3.1.2 Search Execution

Search can be executed in the following ways:
1. Once the user enters a ticker symbol and directly presses the Return key or clicks the "Search" button, without using the autocomplete suggestion, your application should make an HTTP call to the Node.js script hosted on the GAE/AWS/Azure back end (the Cloud Services). The Node.js script on the Cloud Services will then make a request to the Finnhub API services to get the detailed information. If the entered ticker is invalid and no data is found, an appropriate error message should be displayed. If valid stock data is found, the search results should be loaded.

2. Once the user starts typing a ticker symbol, autocomplete suggestions (see Section 3.1.3 below) will populate below the search bar. A matched ticker can be selected. Upon clicking the dropdown selection, the search should start automatically and execute identically as described in the previous paragraph.

3.1.3 Autocomplete

A Search Bar allows a user to enter a keyword (stock ticker symbol) to retrieve information. Based on the user input, the text box should display a list of all the matching companies' ticker symbols along with the company names (see Figure 2). The autocomplete JSON data is retrieved from the Finnhub Search API (refer to Section 4.1.4). The autocomplete response is filtered using the criteria: type = 'Common Stock' and the symbol does not contain a '.' (dot). These are examples of calling this API:

https://finnhub.io/api/v1/search?q=&token=
or
https://finnhub.io/api/v1/search?q=&token=

For example: https://finnhub.io/api/v1/search?q=apple&token=

The autocomplete function should be implemented using Angular Material. Refer to Section 5.3 for more details.

Figure 2: Autocomplete Suggestion
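The filtering criteria above can be sketched as follows. This is an illustrative Python version only (the assignment requires the call and the filtering to live in your Node.js server), and it assumes Finnhub's search response keeps its matches under a "result" key with "type" and "symbol" fields on each match:

```python
def filter_autocomplete(search_response):
    """Keep only common stocks whose ticker symbol contains no dot.

    `search_response` is assumed to be the parsed JSON from Finnhub's
    /search endpoint: matches under "result", each match carrying
    "type" and "symbol" fields (an assumption, not from the handout).
    """
    return [
        match for match in search_response.get("result", [])
        if match.get("type") == "Common Stock"
        and "." not in match.get("symbol", "")
    ]

# Hypothetical sample payload for illustration.
sample = {"result": [
    {"symbol": "AAPL", "type": "Common Stock"},
    {"symbol": "BRK.A", "type": "Common Stock"},  # dropped: symbol contains '.'
    {"symbol": "AAPL34", "type": "DR"},           # dropped: not a common stock
]}
print([m["symbol"] for m in filter_autocomplete(sample)])  # ['AAPL']
```

The same two predicates (exact type match, no dot in the symbol) translate directly into a JavaScript `Array.prototype.filter` call in your Express route handler.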


[SOLVED] MATH2003J OPTIMIZATION IN ECONOMICS BDIC 2023/2024 SPRING Problem Sheet 11 R

MATH2003J, OPTIMIZATION IN ECONOMICS, BDIC 2023/2024, SPRING
Problem Sheet 11

Question 1: Consider the sets:

A = {(x, y) ∈ R² | x² + y² ≤ 4, |y| ≤ 1}
B = {(x, 0) ∈ R² | x ∈ [0, 10]}
C = {(x, y) ∈ R² | y² ≤ x − 4, x ≤ 8}

(a). Sketch the set A ∪ B ∪ C and decide if it is closed and bounded.
(b). Are A, B, C convex? Is the set A ∪ B ∪ C convex? Justify your answer.
(c). Consider the function f : A ∪ B ∪ C → R defined by f(x, y) = x + y. Find the absolute maximum and minimum of f.

Question 2: Decide if the following sets are convex or not, and justify your answers:

(1) A = {(x, y) | x + y ≤ 10, x ≥ 0, y ≥ 0}
(2) B = {(x, y) | x² + y² ≤ 4}
(3) C = Z ⊂ R.
(4) D = {sin(1/x) | x ∈ (0, +∞)} ⊂ R.
(5) E = {(x, y) ∈ R² | y² ≤ x}.
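A set S is convex when the segment between any two of its members stays in S; for closed sets it is enough to check midpoints. As a numeric aid for building intuition on Question 2 (my own sketch, not part of the problem sheet — the function names and the annulus example are mine), one can hunt for a counterexample by random sampling:

```python
import random

def midpoint_counterexample(in_set, sampler, trials=10_000, seed=0):
    """Search for two points of the set whose midpoint leaves it.

    in_set(p) -> bool membership test; sampler(rng) -> candidate point.
    Returning a pair PROVES non-convexity; returning None is only
    evidence (not a proof) that the set is convex.
    """
    rng = random.Random(seed)
    pts = [p for p in (sampler(rng) for _ in range(trials)) if in_set(p)]
    for _ in range(trials):
        x, y = rng.choice(pts), rng.choice(pts)
        mid = tuple((a + b) / 2 for a, b in zip(x, y))
        if not in_set(mid):
            return x, y
    return None

# Question 2's set B, the disk x^2 + y^2 <= 4: convex, so the search
# can never succeed and None is returned.
disk = lambda p: p[0]**2 + p[1]**2 <= 4
# An extra (hypothetical) example: the annulus 1 <= x^2 + y^2 <= 4 is
# NOT convex, and a violating pair is found quickly.
ring = lambda p: 1 <= p[0]**2 + p[1]**2 <= 4
box = lambda rng: (rng.uniform(-2, 2), rng.uniform(-2, 2))

print(midpoint_counterexample(disk, box))              # None: no midpoint leaves a convex set
print(midpoint_counterexample(ring, box) is not None)  # a violating pair exists
```

A justification on the sheet still needs the algebraic argument; the sampler only tells you which answer to argue for.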


[SOLVED] EA50JG Offshore Structural Design 9 Finite Element Analysis Processing

EA50JG Offshore Structural Design – Jacket Platforms

9 Finite Element Analysis

9.1 Introduction

Structural analysis is the process of determining the action effects in a structure or structural component in response to a given set of actions. Structural analysis is required to demonstrate that the design of a platform satisfies the relevant design code. Action effects required for the design of jacket structures typically include the following:

● Internal section forces, which shall not exceed the strength of the section (checked using member strength checks);
● Support reactions, from which the required foundation capacity can be determined;
● Displacements and vibrations, which shall be within acceptable limits for operation of the structure.

Various calculation methods may be used for the determination of action effects in response to a given set of actions. These include, but are not limited to, hand calculations and computer methods, such as spreadsheets and finite element analyses (FEAs).

9.2 Types of analysis

Different analyses that may be required in the design of a jacket structure are discussed in the sections below. The applicability of different analysis types for checking the design conditions described in Lecture 5 is shown in Table 9.1 below.

Table 9.1 Applicability of Different Analysis Types

9.2.1 Static/Quasi-Static Linear Elastic Analysis

Static analysis is appropriate when dynamic effects are minimal and can be assumed to be covered by either the partial action and resistance factors (LRFD) or the applied safety factor (ASD). Quasi-static analysis is applicable when dynamic effects can be assumed to be approximately uniform throughout the structural system and so small that one static analysis, or a series of static analyses with a small correction for dynamic effects, can adequately account for the dynamic response.
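The "small correction for dynamic effects" is commonly applied as a dynamic amplification factor (DAF) on the loading. As a worked illustration (mine, not from the notes): for a viscously damped single-degree-of-freedom oscillator under harmonic load, the standard steady-state amplification is 1/√((1 − r²)² + (2ζr)²), where r is the ratio of forcing frequency to natural frequency and ζ is the damping ratio:

```python
from math import sqrt

def daf(freq_ratio, damping=0.05):
    """Steady-state dynamic amplification factor for a damped SDOF
    oscillator under harmonic loading.

    freq_ratio: forcing frequency / natural frequency
                (= natural period / loading period)
    damping:    damping ratio zeta (5% is an illustrative assumption)
    """
    r = freq_ratio
    return 1.0 / sqrt((1.0 - r**2) ** 2 + (2.0 * damping * r) ** 2)

# A stiff structure (natural period well below the wave period) sees
# almost no amplification; approaching resonance the factor grows fast.
for r in (0.2, 0.5, 0.9):
    print(f"r = {r}: DAF = {daf(r):.2f}")
```

A real jacket is a multi-degree-of-freedom system, so in practice the DAF applied to the loading comes from the governing modes, not from this single formula; the sketch only shows why a fundamental period near the wave period makes quasi-static analysis inadequate.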
The correction for the dynamic response is often applied as a dynamic amplification factor (DAF) on the applied loading. Linear analysis of offshore jackets can be carried out using a wide range of different software packages. Many offshore-specific packages exist that include modules to calculate wave kinematics and member hydrodynamic forces and to solve for pile head displacements.

9.2.2 Natural Frequency Analysis

A natural frequency analysis is required to calculate the natural frequency and period (period = 1/frequency) of a platform. This gives an indication of whether dynamic behaviour will be significant. Structures for which dynamic behaviour is significant are generally referred to as dynamically responding structures. Redundant, multi-legged fixed structures (e.g. jackets, towers, etc.) with fundamental natural periods, or with one or more components having natural periods, greater than 2.5 s to 3 s usually respond dynamically to wave action during sea tow or in-place situations. For other types of structures, such as mono-towers and caissons, dynamic behaviour can be significant even with natural periods of 1 s or less. To calculate the natural period of the platform, a reasonably accurate structural model, including both stiffness and mass, is required. Dynamic behaviour is likely to be significant if any natural frequency, particularly the fundamental frequency, is similar to the frequency of an excitation (typically the wave frequency).

9.2.3 Dynamic Linear Analysis

When dynamic response is considered significant (typically if there is interaction between the natural period of the structure and the period of the loading), the structural system should be designed and analysed for dynamic behaviour. For a dynamic linear analysis an accurate structural model including both stiffness and mass is required. The type of analysis is governed by the form
of applied actions:

● Steady state analysis in response to harmonic actions, as required for spectral analysis;
● Transient analysis in response to arbitrary time-history actions, as can be required for accidental situations and non-linear actions due to waves or earthquakes.

For both types of analysis, the behaviour of the structure and the foundation are assumed to be linear elastic.

9.2.4 Non-Linear Analysis

The collapse of a space frame structure usually results from progressive failure of its components, in particular its primary members and/or joints. Linear analysis can be used to check the strength of the structural components against the applied loading; however, to investigate the redistribution of internal forces following a component failure, and to predict collapse behaviour, a non-linear analysis is required. Non-linear analysis can be used to account for three forms of non-linearity:

● Geometric non-linearities occur if a structure experiences large deformations under the applied loading. The changing geometric configuration can cause the structure to respond non-linearly.
● Material non-linearities occur when a material is stressed beyond its yield point and begins to behave plastically.
● Contact non-linearities occur when deformation of the structure results in a change to the structure's boundary conditions.

Non-linear analysis may be required if a structure is subjected to abnormal environmental actions due to wind, wave and current or an earthquake, or to accidental actions from ship impact, fire or explosion, and when a linear analysis predicts:

● displacements of a magnitude likely to cause second order (P-Δ) effects;
● joint failure;
● member buckling; and/or
● stresses that exceed the yield strength of the material.

For these cases non-linear analysis may be required to justify that the overall structural integrity of the platform is not impaired.
9.2.5 Reliability Analysis

Structural reliability analysis can be used to calculate the probability of failure of a jacket structure that does not meet the required acceptance criteria when analysed using a conventional linear or non-linear analysis. Reliability analysis is often used to reassess existing jacket structures (often designed to earlier, now superseded, design codes). In general, reliability analysis methods should not be required for the design of new structures.

9.3 Analysis model

The analytical models used in offshore engineering are in some respects similar to those adopted for other types of steel structures. Only the salient features of offshore models are presented here. The same model is used throughout the analysis process, with only minor adjustments being made to suit the specific conditions relating to each analysis, e.g. at supports in particular. An example of a jacket structural analysis model is shown in Figure 9.1.

9.3.1 Beam models

Members

The structural analysis model of a jacket predominantly consists of beam elements representing the axial, bending, shear and torsional stiffness of the structural members. In some cases special modelling arrangements (either using shell elements or equivalent sections) are used to represent pile clusters and large diameter members provided for storage or flotation. In addition to its geometrical and material properties, each member is characterised by hydrodynamic coefficients, e.g. relating to drag, inertia, and marine growth, to allow wave forces to be automatically generated. The structure shall be modelled in detail and should include the primary and secondary structures, conductors, and appurtenances to ensure that action effects are accurately predicted. If this is not possible, the necessary detail of the model shall be prioritized as follows, in the order given:

1. Primary Structure;
2. Secondary Structure (conductor supports and framing);
3. Components provided for Temporary Conditions (launch framing, mudmats etc.);
4. Conductors;
5. Appurtenances.

When the structural contribution of any component is neglected, the self-weight, buoyancy and hydrodynamic actions on the component shall still be included in the model.

Figure 9.1 Jacket structural analysis model

Joints

Each member is normally rigidly fixed at its ends to other elements in the model. If more accuracy is required, particularly for the assessment of natural vibration modes, local flexibility of the connections may be represented by a joint stiffness matrix. For typical jackets, depending on the diameter of the chord, the length between the physical end of the brace stub and the centre line of the chord can be significant, and can affect the calculation of member end forces and stresses, weights, masses, and hydrodynamic and hydrostatic actions. In such cases, it is customary to model the length of braces between the outer surface of the chord and its centre line as rigid connections (as shown in Figure 9.2); joint flexibility of brace and chord connections is thus neglected.

Figure 9.2 Joint rigid link definitions

9.3.2 Foundation model

The stiffness of a piled foundation generally displays non-linear characteristics. The foundation should be modelled and analysed using non-linear soil p-y, t-z and Q-z curves as described in Lecture 7. It is important to ensure compatibility between the forces and displacements at the pile heads calculated with both the non-linear pile model and the linear jacket model. To achieve this, the pile stiffness in the jacket model is usually represented by an equivalent load-dependent secant stiffness matrix; coefficients are determined by an iterative process where the forces and displacements at the common boundaries of the structural and foundation models are equated. This matrix may need to be adjusted to the mean reaction corresponding to each loading condition.
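The iterative secant-stiffness matching just described can be sketched in one dimension. Everything below is a hypothetical illustration: the tanh backbone curve, its parameters, and the helper names are my own, not from the notes or any design code. The nonlinear pile is repeatedly replaced by a linear spring k = p(u)/u, the linearised model is re-solved, and the loop repeats until force and displacement at the pile head agree:

```python
from math import tanh

def pile_force(u, k0=100.0, p_ult=50.0):
    """Hypothetical nonlinear pile-head response (tanh backbone):
    initial stiffness k0, ultimate capacity p_ult, illustrative units."""
    return p_ult * tanh(k0 * u / p_ult)

def secant_iteration(load, u0=0.1, tol=1e-8, max_iter=200):
    """Find the pile-head displacement under `load` by repeatedly
    replacing the pile with its secant spring, mimicking (in 1-D) the
    jacket/foundation compatibility loop described in the notes."""
    u = u0
    for _ in range(max_iter):
        k_secant = pile_force(u) / u   # linearised pile for the structural model
        u_new = load / k_secant        # structural model solved with that spring
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    raise RuntimeError("secant iteration did not converge")

u = secant_iteration(load=30.0)
print(round(u, 4), round(pile_force(u), 4))  # pile force matches the applied load
```

In the real procedure the scalar spring becomes a load-dependent secant stiffness matrix at each pile head, and the "structural model" step is a full linear FE solve of the jacket.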
9.3.3 Topsides

For structures where the stiffness of the topsides and jacket do not interact significantly, the jacket and topsides can be modelled separately. If separate models for the structure and the topsides structure are used, the stiffness of the topsides structure and its interface with the structure should be modelled in sufficient detail to allow its self-weight and applied actions to be calculated and applied to the jacket support points. Where the structure and the topsides structure interact significantly, a combined model of structure and topsides structure should be used.

9.3.4 Conductors

Conductors can be modelled as beam elements with appropriate releases at the guide frame support points. At typical guide locations the conductor should be free to move axially and rotationally, with only lateral support. For jacket structures, the deadweight of conductors is usually self-supported. Care should be taken that appropriate releases are included to ensure that the conductors do not transfer topside loads to the seabed.

9.3.5 Appurtenances

The contribution of appurtenances (risers, J-tubes, caissons, boat-fenders, etc.) to the overall stiffness of the structure is normally neglected. They are therefore often analysed separately and their reactions applied as loads at the interfaces with the main structure.

9.3.6 Plate models

Integrated decks and the hulls of floating platforms involving large bulkheads are described by plate elements. The characteristics assumed for the plate elements depend on the principal state of stress to which they are subjected. Membrane stresses are taken when the element is subjected merely to axial load and shear. Plate stresses are adopted when bending and lateral pressure are to be taken into account.

9.3.7 Loadings

Functional loads

Functional loads consist of:

● Deadweight of structure and equipment.
● Live loads (equipment, fluids, personnel).
Depending on the area of structure under scrutiny, live loads must be positioned to produce the most severe configuration (compression or tension); this may occur, for instance, when positioning the drilling rig. For dynamic analysis all applied functional loads must be converted to masses.

Environmental loads

Environmental loads consist of wave, current and wind loads assumed to act simultaneously in the same direction. In general eight wave incidences are selected; for each, the position of the crest relative to the platform must be established such that the maximum overturning moment and/or shear are produced at the mudline. In general, environmental loading from the platform's orthogonal directions (platform North, East, West and South) will maximise the loading in the jacket bracing. Environmental loading from diagonal directions will maximise loading in the jacket legs and foundations. When analysing diagonal directions, the wave approach angle should be modified to minimize the lever arm between the jacket legs resisting the applied overturning moment, as shown in Figure 9.3.

Figure 9.3 Diagonal wave approach directions

References

[1] API-RP 2A-WSD: Recommended Practice for Planning, Designing and Constructing Fixed Offshore Platforms. American Petroleum Institute, 21st Edition, Errata and Supplement 2, October 2005.
[2] International Standard ISO 19902:2007, Petroleum and natural gas industries – Fixed steel offshore structures, 2007.
[3] NORSOK Standard N-003, Actions and Action Effects, Edition 2, September 2007.
[4] DNV Offshore Standard DNV-OS-C101, Design of Offshore Steel Structures, General (LRFD Method), April 2011.
[5] ESDEP, WG 15A: Structural Systems: Offshore, 1993.


[SOLVED] EXMBM521-24B HAM Strategic Management and Decision Making SQL

EXMBM521-24B (HAM) Strategic Management and Decision Making

What this paper is about

Strategic Management and Decision-Making is designed as an applied, integrative approach to strategy, emphasising the current strategic issues of business and modern approaches to resolving them. In a world of uncertainty, how should modern businesses best deal with supply chain issues, digital transformation, potentially conflicting sustainability and growth pressures, and other complex challenges? The process and mindset of strategy are critical to enhance the likelihood of success. We combine conceptual models of strategy with guest speakers and cases to elaborate, explain and tackle a balanced approach to modern strategy decisions.

How this paper will be taught

This course is run as a flexi paper, so all class materials, workshops and guest speakers will be available online. Ideally students, if not attending face-to-face, will watch workshops online synchronously so as to be able to engage in the interactive nature of class. Tutorials begin in week 2 to allow us to more deeply explore cases and the application of models and strategic thinking to real-world situations. Cases also drive the core of the assessment in this course, culminating in a case competition.
What you will study

| Topic | Description |
| --- | --- |
| Introduction | Big picture of strategy and decision-making |
| Broad context of strategy decision | Macro-factors that influence strategy |
| Competitive Context | Industry-level analysis |
| Resources and Capabilities | Internal analysis of competitive advantage |
| Strategy & Data | How we use data to develop strategy – particularly around customers |
| Simple Strategy | Business strategy basics |
| Complex Strategy | Corporate strategy – how to strategically manage larger corporations |
| Dynamic Strategy | Innovation and disruption |
| Family Business | Strategic issues for family enterprises |
| Sustainability and Strategy | Companies with conscience |
| Execution on Strategy | Moving beyond the plan |
| Case Competition and Conclusion | The big finale |


[SOLVED] PADM-GP 4503001 EXEC-GP 4503 Introduction to Data Analytics for Public Policy Administration

PADM-GP 4503.001 | EXEC-GP 4503
Introduction to Data Analytics for Public Policy, Administration, and Management
Spring 2025

Course Description

This course aims to establish a first-principles understanding of qualitative and quantitative techniques, tools, and processes used to wield data for effective decision-making. Its approach focuses on pragmatic, interactive learning using logical methods, basic tools, and publicly available data to practice extracting insights and building recommendations. It is designed for students with little prior statistical or mathematical training and no prior exposure to statistical software.

Course and Learning Objectives

Students will be able to:

● Explain the value of data, assess data arguments, identify alternatives to using data, and leverage administrative data to ground-truth research data.
● Structure problems, develop hypotheses, identify assumptions, and reference sources and considerations in a rigorous and transparent manner.
● Identify, obtain, understand, prepare, and analyze data using standard approaches and industry-standard tools.
● Package and persuade with data visualization techniques and tools [PowerPoint, Excel, Tableau] to reach specific objectives.

How this Course Relates to Other Courses

This is a foundational course. There are no prerequisites. It is designed to introduce students to first-principles approaches to data analytics to build their comfort in navigating ambiguity, leveraging quantitative skills, and using industry-standard data tools and technologies.

Evaluation

The course will be evaluated through class participation [as measured by short quizzes and exit surveys] (25%), two problem sets (25%), and one final project (50%). Problem sets will use Excel and PowerPoint, so students should ensure they are familiar with how to access these applications.

Late Policy

Assignments are due on the class dates indicated on the course's NYU Brightspace site.
Late submission of assignments will lead to a two-point reduction for missing the deadline and another two-point reduction for each day thereafter until submitted.

Course Structure

The class includes lectures, readings, break-out session group work, and independent project work. Class attendance is critical, as the course is structured as an experiential learning course. Students are strongly encouraged to apply approaches and tools learned in the course to their specific sector interests to deepen their content knowledge and understand the forces shaping trends in that sector.
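Read literally, the late policy implies the deduction schedule sketched below (a toy illustration; the function name and the whole-day counting convention are my own assumptions, not the syllabus's):

```python
def late_penalty(days_late):
    """Points deducted under the stated policy: 2 points for missing the
    deadline, plus 2 more for each further day until submitted.
    `days_late` counts whole days past the deadline (0 = on time)."""
    if days_late <= 0:
        return 0
    return 2 + 2 * (days_late - 1)

print([late_penalty(d) for d in range(4)])  # [0, 2, 4, 6]
```

So an assignment three days late would lose six points under this reading.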


[SOLVED] MWL101 - Professional Insight C/C

MWL101 Professional Insight, Tri3 2024
My Application - Individual Assessment

DUE DATES:
Task 1 - Report: Monday, 27 January (Week 11) by 8:00pm (Melb. time)
Task 2 - Mock Interview: begins Tuesday 28th January 2025 (bookings open wk10)

PERCENTAGE OF FINAL GRADE: 50% (mock interview 10% + report 40%)
WORD COUNT: 2,000 word report + mock interview
FILE TYPE: Microsoft WORD (NO PDF)

Description

A dynamic, global and technology-driven business environment means the way organisations recruit is undergoing significant change. Trends in recruitment include increased automation of the recruitment process, increased demand for a diverse range of candidates and more emphasis on transferrable skills, amongst others (Schidman et al. 2017). Using the knowledge and skills developed through Assessments 1 and 2, you will source a role that matches your desired career path, research the organisation you are applying to, write an application for that position, and prepare for and participate in a mock interview. During this process of applying for a job you will need to reflect on your current learnings, activities and skill development, and on the skills and knowledge you still need to develop to be career ready. All three assessments in this unit are designed to help you develop the skills needed to be career ready in a dynamic employment market. This is consistent with the Unit Learning Outcomes and Graduate Learning Outcomes.


[SOLVED] EXMBM524-24B HAM Financial Analysis R

EXMBM524-24B (HAM) Financial Analysis

What this paper is about

Every manager must face the tasks of finding, interpreting and utilising accounting information. This paper explores the process of accounting for organisations, focusing on financial and management accounting areas as well as financial management concepts. Sessions will introduce participants to a range of accounting techniques and concepts. Accounting systems provide information for decision-making, and it is important to understand the strengths and limitations of that information. Themes covered include: where financial information comes from, how it is recorded, and how it is reported. The paper explains accounting conventions and principles, key performance indicators, as well as the control of financial systems. Management accounting themes covered include short-run decision making and taxation (both domestic and international issues), as well as considerations of Key Performance Indicators (KPIs) and financial analysis. An introduction to Corporate Social Responsibility (CSR), Stakeholder Management and Sustainability will be provided, and their influences on corporate decision making discussed. Issues relating to contracts, the Consumer Guarantees Act and the Fair Trading Act in the business context will also be discussed in this paper. Throughout this paper, topics will be introduced and illustrated with case examples and practical exercises to encourage discussion and debate. This approach of linking theory to practice will ensure that any limitations or difficulties associated with the models and concepts are identified.

How this paper will be taught

Students are strongly encouraged to attend sessions in person at the Hamilton campus. The teaching philosophy of this paper benefits most from in-person interaction with lecturers and fellow students. However, for those students who are not in Hamilton or need to self-isolate, a FLEXI option is offered.
Students can join sessions online via Zoom. Whether you attend in-class or on Zoom, sessions are synchronous**, meaning that you need to be present and participate during the scheduled slot. Although sessions are recorded, group activities cannot be recorded and much of your learning will happen during these activities. Any changes to the paper's structure necessitated by a lecturer having  to self-isolate will be communicated in Moodle and via announcements. A combination of learning activities is used to assist your understanding, answer questions, analyse cases, and provide insights that help you 'scaffold' learning. Recorded video lectures, online content, and required reading will be prescribed. It is essential that you prepare for each session as instructed. **If you cannot attend a session due to illness or another legitimate reason, please notify the relevant lecturer before the session. You will be expected to watch the session's recording in your own time. Please note, some in-class activities will not be available after the fact. Online resources are provided in Moodle for you to download and review before the sessions. What you will study Topic Introduction to Accounting Introduction to Accounting; Financial Statements Financial Analysis Cash Flow Analysis Management Accounting Test 1 Relevant costs for short-term decisions Full costing Financial Planning Making Capital Investment Decisions Reporting and Interpreting Owners' Equity; Accounting Information and the Balanced Scorecard; Accounting for Sustainability Contracts, Consumers Guarantees and Fair Trading

$25.00 View

[SOLVED] EXMBM523-24B HAM Digital Business and Supply Chain Management

EXMBM523-24B (HAM) Digital Business and Supply Chain Management What this paper is about This paper addresses two connected areas of business which all managers need to be familiar with: Supply Chain Management and Digital Business. Digital Business discusses technology developments that are not only changing how our businesses operate but are also ushering in novel business models with the use of data analytics software. Both the digitisation of operations and digital business models have implications for a company’s supply chain. Therefore, half of the paper deals with Supply Chain Management while the other half deals with Digital Business topics. How this paper will be taught This paper will be taught using a variety of online and in-class teaching and learning strategies that give you the flexibility to participate in discussion sessions and to participate online. In the Digital Business half, classes are divided between lectures and lab work. A combination of learning activities is used to assist your understanding, answer questions, analyse cases, and provide insights that help you 'scaffold' learning. Recorded video lectures, online content, and required reading will be prescribed. Whether you attend in-class or on Zoom, sessions are synchronous, meaning that it is suggested that you participate during the scheduled time slot. Although lectures are recorded, group/practical activities cannot be recorded and part of your learning will happen during these activities. Thus, if you are not able to participate in class synchronously, you are expected to use the online resources to complete the study material and the assessments. Refer to Moodle for more details. 
What you will study: Intro to DigiBusiness (Lab 1); Big Data (Lab 2); Data Analytics (Lab 3); AI (Lab 4); Guest Speaker (TBC) (Lab 5); Online Presentation Due (Lab 6); Introduction to Supply Chain Management; Supply Chain Strategies; Operations Management; Manufacturing and Service Processes; Materials Requirement Planning; Lean Management

$25.00 View

[SOLVED] EXMBM522-24C HAM Marketing Strategy

EXMBM522-24C (HAM) Marketing Strategy What this paper is about This paper has been designed to provide participants with a broad understanding of the basic concepts of marketing and marketing strategy. Marketing is about understanding and creating customer value, and is at the heart of any successful organisation. Core concepts introduced include the marketing mix, environmental analysis, consumer behaviour, segmentation, target marketing and positioning, brand equity, and an introduction to marketing research. We will build on these concepts as we uncover the marketing process from the analysis needed to develop marketing strategy, through marketing objectives, to communication objectives and activation. On completion of the paper, participants will have an understanding of core marketing principles that drive the creation of marketing value in the modern marketplace. How this paper will be taught Delivery Mode This paper will be offered in a FLEXI format where students can participate on-campus in Hamilton or online via Zoom synchronously. We expect students to participate in all sessions synchronously, and some assessments may be scheduled at specific times. You are expected to participate fully in group work and assessments to be able to meet the requirements and expectations of this paper. All supportive online resources and class recordings will be available via Moodle. If you cannot attend a session due to illness or another legitimate reason, you will be expected to watch the session's recording in your own time. Please note, some in-class activities will not be available after the fact. Readings are expected to have been completed before each session to enable all students to engage with the learning. Students are encouraged to express their own experiences, reasoned opinions, and questions for discussion in the classroom. 
The lecture/work group sessions help: a) clarify any questions you may have regarding marketing and the paper resources; b) highlight important marketing theories, concepts, issues and practices; c) bring marketing principles to life by using examples. What you will study: Introduction / Orientation to Paper; Consumer Behaviour; Segmentation; Targeting; Intro to Brands; Brand Positioning; Brand Management; Special Session TBD; Frameworks for Marketing Strategy; Presentations

$25.00 View

[SOLVED] CMT304 Functional Programming

Module Code: CMT304 Module Title: Programming Paradigms Assessment Title: Functional Programming Assignment Consider a small binary image (or 2D array or matrix) that is represented as a list of lists which contains only the numbers 0 or 1, e.g., [[0,0,0,0,1,1], [1,1,1,1,1,0], [1,1,0,0,1,0], [1,1,0,0,1,1], [1,0,1,1,1,1]] We wish to find the number of pixels in the largest connected component of such images (there can of course be more than one component with the same largest number). A connected component is a cluster of pixels that contain the same value, where there is a path from each pixel to each other pixel inside that cluster. A path is formed from a start pixel by moving either horizontally (one element left or right in the same inner list) or vertically (one list up or down in the outer list without changing the position in the inner list) to the next pixel until the end pixel is reached (this is 4-pixel connected, i.e. no diagonal movement). The number of elements in the largest connected component for the value 0 in the above example is 4 (among the 4 components). It is 19 for the value 1 (there is only one component). Task 1: Write an efficient Haskell function nlcc l v that finds the number of elements in the largest connected component of the binary image (list of lists) l for the value v. Note, there are multiple, more or less efficient algorithms to solve this problem – make sure you clearly document your approach. Also note, you must write a function, not a full program (so no main, etc.) and it must have the above name with two arguments (failing to do so may result in 0 marks for this task). Make sure your Haskell code can be compiled/interpreted without errors (otherwise 0 marks may be assigned for this task). Note that you must write your own code to solve this problem and not just call a library function, or copy code from some other source (independent of plagiarism issues, even if you reference, you only get marks for your own work).  
You may use the standard libraries listed in the Haskell 2010 language report, but not any other libraries (otherwise the code will be treated as not compilable/interpretable, which may result in 0 marks for this task). Task 2: Write a report of up to 500 words (this is an upper limit, not a target) as described below: (a) Discuss one feature of the functional programming paradigm that is useful to solve this problem and compare it to another paradigm of your choice that does not have this feature. (b) Discuss one feature of the functional programming paradigm that makes it difficult to solve this problem and compare it to another paradigm of your choice that would make it simpler. Clearly indicate which of the two points above you address. Make sure you discuss only one feature per point above; only the first feature you discuss for each point will be considered. Learning Outcomes Assessed •  Explain the conceptual foundations, evaluate and apply various programming paradigms, such as logic, functional, scripting, filter-based programming, pattern matching and quantum computing, to solve practical problems. •  Discuss and contrast the issues, features, design and concepts of a range of programming paradigms and languages to be able to select a suitable programming paradigm to solve a problem.
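The counting described above is a textbook flood-fill over 4-connected neighbours. As a language-neutral illustration of that idea (deliberately written in Python rather than Haskell, since the assignment requires your own original Haskell function), a minimal sketch using the example image from the brief might look like this:

```python
# Illustrative only: sketches the flood-fill idea behind nlcc.
# The graded submission must be an original Haskell function.

def nlcc(l, v):
    """Return the size of the largest 4-connected component of value v
    in the binary image l (a list of lists of 0s and 1s)."""
    rows = len(l)
    cols = len(l[0]) if rows else 0
    seen = set()
    best = 0
    for r in range(rows):
        for c in range(cols):
            if l[r][c] == v and (r, c) not in seen:
                # Flood-fill the component seeded at (r, c) with an
                # explicit stack (iterative depth-first search).
                stack = [(r, c)]
                seen.add((r, c))
                size = 0
                while stack:
                    y, x = stack.pop()
                    size += 1
                    # Horizontal and vertical neighbours only, no diagonals.
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and l[ny][nx] == v and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                best = max(best, size)
    return best

img = [[0, 0, 0, 0, 1, 1],
       [1, 1, 1, 1, 1, 0],
       [1, 1, 0, 0, 1, 0],
       [1, 1, 0, 0, 1, 1],
       [1, 0, 1, 1, 1, 1]]
print(nlcc(img, 0), nlcc(img, 1))  # -> 4 19, matching the brief
```

A Haskell version would carry the visited set through the recursion (e.g. with Data.Set from the permitted standard libraries) rather than mutating it, but the traversal logic is the same.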

$25.00 View

[SOLVED] PADM-GP 4119 Data Visualization and Storytelling Spring 2025

PADM-GP 4119 Data Visualization and Storytelling Spring 2025 Course Description In our increasingly data-reliant and data-saturated society, people who understand how to leverage data to generate insights have the power to change the world. Data visualization and storytelling is a crucial skill for policy and data analysts, communications and marketing professionals, and managers and decision-makers within nonprofits, social organizations, and the government. With the advent of visualization tools that do not require coding, data storytelling in the digital age is also an attainable skill set for people with varying levels of technical ability. This hands-on introductory course will teach students how to develop meaningful data stories that reveal visual insights accessible for relevant audiences. Students will also learn the basics of Tableau, the industry standard in data visualization tools, to make sense of and visualize publicly available data. Students will leave the course with a portfolio of data visualization projects, analog and digital, that demonstrate the application of data storytelling. This course is intended for a beginner in data visualization and storytelling. Students with extensive prior experience should consult the instructor before enrolling. Course and Learning Objectives By the end of the course, students should be able to: 1.   Evaluate and critique data visualizations to become better consumers of data. 2.   Gain experience with presenting data insights through visualizations. 3.   Understand and apply data visualization and storytelling best practices to communicate accessible and meaningful insights. 4.   Develop meaningful data stories, gaining experience with the iterative process of data storytelling. 5.   Construct captivating and engaging visualizations, dashboards, and stories in Tableau. 
Learning Assessment Table (graded assignment: course objectives covered): Participation: all; Lab Sessions: #1, #3 and #5; Data Viz Critique: #1 and #2; Analog Data Viz Project: #3 and #4; Final Viz Project: #1, #3, #4 and #5. Class Policies This is a fast-paced, hands-on course with a lot of material condensed into seven weeks. Students should be mindful of the following expectations to ensure that they are benefitting from the sessions and achieving intended learning objectives: ● Attendance for the entire class session for all seven sessions is mandatory. Students should not register for the class if they anticipate any conflicts. ● Active engagement during the sessions is essential. This course is designed to be a largely practice-based course. Students will maximize class learning if they come prepared having completed their assigned reading and training materials, developed a basic knowledge and theory of the weekly session topic, and are ready to engage during the course discussions, labs, and recitations. ● Deeper engagement with the content outside of the class sessions will be needed to ensure students are able to complete assignments and projects successfully. Due to the condensed nature of the course, students will need to put in additional time outside of class sessions and should plan accordingly. You are permitted to use generative AI tools in your written assignments, as long as you disclose the tool you used and any related prompts (including system prompts or other customization). Please note that the onus for ensuring quality and accuracy of any output from a genAI model is entirely up to you; you are ultimately held responsible for what you submit as your work in this class. Your work, especially your written work, must utilize the vocabulary and conceptual material that we introduce in lecture. Required Materials Readings: There is no textbook requirement for this class. 
Required readings will come from noteworthy articles, blogs and book excerpts; all materials are available online via hyperlinks on this syllabus or on our [SP25] Google Drive. Software: To ensure successful lab/recitation participation, students are required to: ● Have downloaded a Tableau Desktop license on your laptop (students are eligible for a free one-year license). ● Ensure you have Microsoft Excel or Numbers on your laptop. ● Sign up for a Miro for Education (Student) account. Course Components Readings This course is designed to be a largely practice-based course. Therefore, it is crucial to come prepared to class with the basic knowledge and theory needed to have interactive discussions and a hands-on lab. (See Detailed Course Overview for more information for each week.) All materials are available online via hyperlinks on this syllabus or on our [SP25] Google Drive. Students must read assigned chapters/articles before coming to the respective session. Orienting Discussions Most course sessions will begin with a brief orienting discussion to recap best practices and lessons on data visualization and storytelling. Each discussion will build on the assigned reading material for that week and should be an opportunity to deepen knowledge and clarify questions. Labs and Recitations Most course sessions will include an experiential lab session. Students will also have an opportunity to hone their Tableau skills during a hands-on recitation immediately following each course session. To ensure successful lab/recitation participation, students are required to: ● With the exception of Week 1, please complete all readings, pre-work assignments, and deliverables before class. ● Ensure you have downloaded a Tableau Desktop license on your laptop (students are eligible for a free one-year license). ● Ensure you have Microsoft Excel on your laptop. 
Assignments Assignments are formative, intended to help students understand data viz tools and best practices. They consist of completion of lab-related deliverables, writing a data viz and dataset critique blog, and storyboarding the final project. Details on each assignment will be provided in the previous class session. Projects Unlike the formative assignments, projects are intended to assess mastery over data viz content and skills. Evaluation information can be found under Assessment Assignments and Evaluation. Projects will be uploaded via the blog tool on NYU Brightspace. (1) Analog Data Viz Project Students will create and present an analog “data postcard” by hand drawing data they collect over the course of several days/a week (see the Dear Data project for more information/ideas). This project is intended to reinforce the importance of communicating data insights effectively and creatively irrespective of the medium/tool. As students will not be using Tableau, students should be especially mindful about visualization execution (i.e., best practices on chart types, color schemes, legends, and so on). You will still be expected to submit your data analysis in Excel in addition to your analog data viz. (2) Individual Final Project All students must create a data story using Tableau that demonstrates the data visualization and storytelling skills developed through the course. While students are given free rein on content and execution, all data stories must contain at least three visualizations using Tableau Story Points. Data stories must also serve one of two goals: to help the intended audience make data-driven decisions or to convey meaningful impact information to an intended audience. An accompanying blog post should briefly contextualize the data story and explain how it achieves one of the two intended goals. Students will learn more about the final project during Week 4. 
To ensure that students are on track with their final project, the following completion deliverables will be enforced: ● Week 05: Finalize final project topic and data set; bring storyboard idea (we will do a storyboarding workshop during the class session). ● Week 06: Come to class with a rough Tableau workbook of your final project (there will be an opportunity to ask questions during class), and a Miro board of your storyboard. ● Week 07: Final projects due. Assessment Assignments and Evaluation Participation (15%): Students are required to attend all class sessions and come prepared for and actively participate in class. All students will begin with the full 15 points. If students miss class or are unprepared for a class session, a maximum of 3 points will be deducted each session. Given the remote nature of this semester, active participation will include asking/answering questions during the session (including   in chat) as well as contributing to discussion in breakout groups. Please contact the instructor if any issues arise during the semester. Participation in recitation sessions is strongly encouraged and will help students develop their Tableau skills, but will not be counted toward your Participation grade. However, hands-on exercises in recitations 2 and 4 count toward Tableau lab assignments and should be completed/submitted in  NYU Brightspace, regardless of recitation attendance. Homework Assignments (30%): Assignments will be split into three components: ●   Tableau lab worksheets/workbooks (10%) – Graded on a 100-point scale based on completion. ●    Data viz critique blog post (10%) – Graded on a 100-point scale based on completeness and demonstrated understanding (see rubric on page 7). ●    Final project draft (10%) – Graded on a 100-point scale based on completion. All homework assignments should be submitted via NYU Brightspace by the beginning of class on the specified due date. 
Late assignments will have 10 points deducted for every day they are late (even if submitted the same day but after class, 10 points will be deducted). If you receive a zero on a homework assignment, you can resubmit one homework assignment per semester for a maximum of 50% of the total value of the assignment. Analog Data Viz Project (25%): The project will be evaluated on two components: completion of the project (including a presentation during class) and the analog data viz itself. The data viz evaluation rubric can be found on page 8. The presentation should explain the data story in a compelling, clear, and effective manner. Be sure to share your data file in addition to the viz. Students will have 2-3 minutes to present their data story to the class. Make sure to share details on your process in addition to the image of your analog data viz during your presentation. Final Project (30%): The final project will be evaluated on several components: the data story, the orienting blog post, and the presentation. The evaluation rubric can be found on page 9. Detailed instructions will be in our [SP25] Google Drive.

$25.00 View

[SOLVED] EXMBM511-24C HAM Communication and Collaboration in Organisations

EXMBM511-24C (HAM) Communication and Collaboration in Organisations What this paper is about Kia Ora! Welcome to EXMBM511 Communication & Collaboration. Communication and Collaboration is a practical, hands-on paper designed to equip students with the essential skills of effective communication and teamwork. Throughout the course, students will engage in real-world assessments, learning how to convey ideas clearly, listen actively, and work collaboratively across various contexts. By honing these skills, students will be better prepared to navigate and  succeed in today’s interconnected and team-oriented work environments. This paper not only builds confidence in personal expression but also cultivates an understanding of the dynamics that drive successful collaboration. A core part of this paper is teaching you how to communicate ideas effectively to groups. This is to help you throughout your MBM, where you will be expected to present regularly. How this paper will be taught This class is taught with an active-learning philosophy. This means that a lot of your learning depends on your engagement with the materials, your lecturer, and your peers. You will prepare for sessions by reading/watching/absorbing prescribed articles, case studies, videos, TED talks, and other resources, and thinking about the issues raised in them. Each three-hour session will be a blend of lecturing, workshop activities, discussions, and more. It is expected that every student will prepare for every session ahead of time and engage productively in activities and discussions in the session. Engagement doesn’t mean that you have to be loud or take the spotlight, but it does mean that you have to pay attention and participate. Class will be held in-person at Waikato University, with Zoom recordings for students unable to attend due to illness or personal circumstances. There are group presentations throughout this course that must be done in person. 
What you will study: Introduction to the Course (Welcome Students; Overview of Course & Assessments; Group Forming; Introduction to Communication Theory); Communication Theory #2 (Communication Theory from the 1990s; Recent Communication Theory; Applying Communication Theory; Intercultural Communication); Negotiation (Introduction to Negotiation Theory; Negotiation Case Study; Negotiation Practices); Negotiation (Negotiation - Tips & Tricks; Negotiation - Ethics); Communication Theory #3 (Communication in a Digital Age; Communication & Leadership; Communication Practices); Collaboration #2 (Collaboration Approaches; Dealing with Conflict; Effective Leadership); Group Presentation (Groups Presenting Capstone Projects); Communication Quiz (Students will complete the Communication Quiz); Collaboration (Introduction to Collaboration; Cross-Cultural Collaboration; Collaboration Practices)

$25.00 View