CS152 Project 4: Penguin Population Viability Analysis (PVA)

For project 4 we're going to move to the world of ecological modeling. In particular, we're going to model the change in the population of Galapagos Penguins over time given certain ecological pressures. Our goal is to model the risk of extinction over some number of years. The model we will use is fairly simple, but you can choose to add complexity to the model once the basic system is working.

One new aspect for this project is the use of random values in the simulation. Because the world doesn't exhibit strong regularity, we're going to model this uncertainty by using a random number generator. For example, El Nino, an important factor in Galapagos Penguin survival, does not occur on a perfectly regular schedule, but it does tend to occur at least once every 5-7 years. We can model this by assigning a 1.0/5.0 or 1.0/7.0 probability of an El Nino to any given year. Then, each year of the simulation, we effectively roll a 5-sided or a 7-sided die and declare an El Nino year if it comes up with a 1. If the year is an El Nino year, then the penguin population drops significantly. If it is not an El Nino year, the population sees a small amount of growth.

Because of the random variables in the model, each time we model 100 years the computer will generate a different result. In one simulation the population may go extinct in 20 years; in another simulation it may not go extinct at all. In order to achieve a reasonable estimate of the true probability of extinction over 100 years, we have to run the simulation many, many times and aggregate the results. For example, the probability of extinction in 100 years is the number of simulations where the population goes extinct, divided by the total number of simulations. Researchers using PVA to evaluate the risk of extinction in actual populations might run the simulation 10,000 times or more in order to get a stable estimate of extinction risk.
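As a minimal sketch (not part of the assignment code itself), the die-roll idea above can be written with Python's random module. The function name and the 10,000-trial loop here are illustrative, not required by the project:

```python
import random

def is_el_nino_year(prob=1.0/7.0):
    # random.random() returns a float in [0, 1); it falls below prob
    # with exactly probability prob, which is the same as rolling a 1
    # on a 7-sided die when prob = 1/7.
    return random.random() < prob

# Estimate how often an El Nino occurs by aggregating many trials,
# the same way the extinction probability will be estimated later.
trials = 10000
count = sum(1 for _ in range(trials) if is_el_nino_year())
print(count / trials)  # typically close to 1/7, i.e. about 0.143
```

Note that the estimate fluctuates from run to run, which is exactly the "stability" issue discussed next: more trials give a steadier estimate.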
Stability, in this case, means that if you run the simulation 10,000 times, then the difference between that average and the average of a different run of 10,000 simulations will be small.

To design the simulation, we're going to use a modular and hierarchical design in order to keep each function simple to write, test, and debug. By following this workflow we will maintain our momentum as developers and not get bogged down with code that does not execute properly.

Outline of Project 4 Program Development

Part 1. Developing a set of simulation functions
A. Write a function to set up the parameters for the simulated population using the random module and test it (initPopulation).
B. Write a function to simulate the population in a single year and test it (simulateYear).
C. Write a function to run a single simulation and test it (runSimulation).
D. Write your main() function that takes command line arguments and test it.

Part 2. Calculate the Cumulative Extinction Probability Distribution (CEPD)
A. Write a function that takes the results of your simulation and computes the CEPD, then test it (computeCEPD).
B. Compare the extinction distribution curves for a 3-year, a 5-year, and a 7-year El Nino cycle.

Tasks

T1. Setup
If you haven't already set yourself up for working on the project, then do so now. Navigate to your project folder. Open VSCode and create a new file penguin.py. For this week, all of the simulation functions will be in this file, though you can use the functions in your stats.py file from Lab 3 to analyze the results, if you choose to pursue an extension that requires it.

T2. Write a function to initialize the population
Create a function initPopulation, which takes two parameters: the initial population size and the probability of an individual being female. The function should return a list of the specified size. Each entry in the list will be either an 'f' or an 'm'. The pseudocode would be:

1. Define a function initPopulation with two parameters (e.g.
N and probFemale).
2. Initialize a variable (e.g. population) to the empty list.
3. Start an indexed for loop based on the size of the population. On each iteration generate a random number using random.random(). If the value is less than the probability of an individual being female (probFemale), append an 'f' to the population list. Otherwise, append an 'm' to the population list.
4. Finally, return the population list.

Use the following test function to see if your initPopulation function is working properly. It should print out a list of 10 individuals with an appropriate mix of 'f' and 'm' elements. Run it a few times to convince yourself that you are getting randomized lists. If the lists are all one sex or the other, then go back and review your code.

# test function for initPopulation
def test():
    pop_size = 10
    probFemale = 0.5
    pop = initPopulation(pop_size, probFemale)
    print(pop)

if __name__ == "__main__":
    test()

T3. Write a function to simulate a single year
Create a function simulateYear that takes six parameters in the following order:
● pop: the population list
● elNinoProb: the probability of an El Nino
● stdRho: the growth factor in a regular year. This number is meant to allow the population to grow each year and is expected to be greater than 1.
● elNinoRho: the growth factor in an El Nino year. This number is meant to reduce the population and is therefore less than 1.
● probFemale: the probability of a new individual being female.
● maxCapacity: the maximum carrying capacity of the ecosystem.

The first step in the function is to determine if it is an El Nino year. Set a variable (e.g. elNinoYear) to False. Then compare the result of random.random() to the El Nino probability. If the random.random() result is less than the El Nino probability, then set elNinoYear to True.

The second part of the function is a loop over the existing population list. Before starting the loop, create a list called newpop and set it to the empty list.
Inside the loop, implement the algorithm given in pseudocode below:

# newpop = []
# for each penguin in the original population list
#     if the length of the new population list is greater than maxCapacity
#         break, since we don't want to make any more penguins
#     if it is an El Nino year
#         if random.random() is less than the El Nino growth/reduction factor
#             append the penguin to the new population list
#     else
#         append the penguin to the new population list
#         if random.random() is less than the standard growth factor - 1.0
#             if random.random() is less than the probability of a female
#                 append an 'f' to the new population list
#             else
#                 append an 'm' to the new population list
# return newpop

T4. Test the simulateYear function
To test your simulateYear function, add the following code to your test function.

newpop = simulateYear(pop, 1.0, 1.188, 0.41, 0.5, 2000)
print("El Nino year")
print(newpop)
newpop = simulateYear(pop, 0.0, 1.188, 0.41, 0.5, 2000)
print("Standard year")
print(newpop)

You should see the population reduce to 3-6 individuals in the El Nino year and grow to 11-14 individuals in the standard year. Run it a few times to see the variation.

T5. Write a function to run a single simulation
The next step is to write the runSimulation function. This function takes in 8 parameters. You can use the definitions below:

def runSimulation(N,            # number of years to run the simulation
                  initPopSize,  # initial population size
                  probFemale,   # prob a penguin is female
                  elNinoProb,   # prob an El Nino occurs in a given year
                  stdRho,       # pop growth in a non-El Nino year
                  elNinoRho,    # pop growth in an El Nino year
                  maxCapacity,  # max carrying capacity of ecosystem
                  minViable):   # min viable population

The function should initialize a population, then loop N times. Inside the loop it should call simulateYear and assign the return value back to the population list.
If, after simulating a year, the population is smaller than the minimum viable population or it consists of only one sex, then the function should return the year of extinction. If the loop completes all N times and there is still a viable population, the function should return N. The detailed pseudocode is below.

The function should first assign to a variable (e.g. population) the result of calling initPopulation with the appropriate arguments. Then it should set another variable (e.g. endDate) to N (the number of years to run the simulation). The variable endDate will be what the function returns.

The main part of the function should be a loop that runs N times. Each time through the loop it should:
1. call simulateYear with the appropriate arguments, assigning the result to a new list variable (e.g. newPopulation),
2. test if there is a viable population, and
3. assign the new population to population.

A viable population must have at least minViable individuals, and there has to be at least one male and one female. If any of these tests fails, then set endDate to the loop variable value and break out of the loop. The Python keyword break will cause execution to stop looping and start executing the code after the loop.

If you want to view the population numbers over time, print out the length of the population list at the end of each loop.

T6. Test runSimulation
Add some test code to your test function that calls runSimulation a few times with some appropriate arguments. Make sure the initPopSize argument is larger than the minViable argument. Using a small value for the probability of an El Nino (e.g. 0.1) should result in a return value of N (the population should survive). If you use a large probability of El Nino (e.g. 0.5), the return value should be much less than N.
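As an end-to-end sketch of T2 through T5, here is one possible reading of the pseudocode above. This is a hedged illustration, not the only correct implementation; variable names beyond those the spec requires are arbitrary:

```python
import random

def initPopulation(N, probFemale):
    """Return a list of N penguins, each 'f' or 'm'."""
    population = []
    for _ in range(N):
        if random.random() < probFemale:
            population.append('f')
        else:
            population.append('m')
    return population

def simulateYear(pop, elNinoProb, stdRho, elNinoRho, probFemale, maxCapacity):
    """Simulate one year and return the new population list."""
    elNinoYear = random.random() < elNinoProb
    newpop = []
    for penguin in pop:
        if len(newpop) > maxCapacity:
            break  # ecosystem is full; make no more penguins
        if elNinoYear:
            # each penguin survives with probability elNinoRho
            if random.random() < elNinoRho:
                newpop.append(penguin)
        else:
            newpop.append(penguin)  # everyone survives a standard year
            # growth: chance (stdRho - 1.0) of producing one offspring
            if random.random() < stdRho - 1.0:
                if random.random() < probFemale:
                    newpop.append('f')
                else:
                    newpop.append('m')
    return newpop

def runSimulation(N, initPopSize, probFemale, elNinoProb,
                  stdRho, elNinoRho, maxCapacity, minViable):
    """Return the year of extinction, or N if the population survives."""
    population = initPopulation(initPopSize, probFemale)
    endDate = N
    for year in range(N):
        population = simulateYear(population, elNinoProb, stdRho,
                                  elNinoRho, probFemale, maxCapacity)
        if (len(population) < minViable
                or 'f' not in population or 'm' not in population):
            endDate = year
            break
    return endDate

print(runSimulation(201, 500, 0.5, 1.0/7.0, 1.188, 0.41, 2000, 10))
```

With an El Nino every year (elNinoProb = 1.0) this collapses within a few years; with elNinoProb = 0.0 and the default growth factor it survives all N years, matching the sanity checks in T6.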
Default values for the simulation arguments are as follows:

Parameter                          Default value
N                                  201
Initial population size            500
Probability of females             0.5
Probability of an El Nino          1.0/7.0
Growth factor in a standard year   1.188
Growth factor in an El Nino year   0.41
Maximum carrying capacity          2000
Minimum viable population          10

T7. Define a main function that runs many simulations
The top level function is your main function. Give it one argument, argv, which will be the list of strings from the command line.

A. Usage statement
Start the main function by testing if there are at least three arguments on the command line. The first argument will be the name of the program, the second should be the number of simulations to run, and the third should be the typical number of years between El Nino events. If there are fewer than three (len(argv) < 3), print out a usage statement and exit.

B. Extract values from the command line arguments
After the test for arguments, cast the second argument (argv[1]) to an integer and assign it to a variable that specifies the number of simulations to run (e.g. numSim). Cast the third argument (argv[2]) to an integer and assign it to a variable that specifies the typical number of years between El Nino events.

C. Set up local variables
Create variables for each of the simulation parameters in the table above not already assigned and give them the default values. You should also create a variable to hold the results of the simulations and initialize it to the empty list.

D. Write the main loop that runs simulations
Loop for the number of simulations. Inside the loop, append to your result list the result of calling runSimulation with the appropriate arguments. This list will hold the year of extinction for each simulation run.

E. Calculate the probability of extinction within N years
Count how many of the results in the result list are less than N (the number of years to simulate). Divide that count by the total number of simulations and print out the result.
This is the overall probability that the penguins will go extinct within the next N years.

Test it: Run your program using 100 simulations and see what you get. Does it change a lot if you run it repeatedly? Run it with 1000 simulations and see if the results are more stable.

T8. Compute the Cumulative Extinction Probability Distribution
The next task is to write a function that takes the list of results, which is a set of numbers indicating the last year in which the population was viable, and converts it to a cumulative extinction probability distribution (CEPD). The CEPD will be a list that is as long as the number of years in the simulation. The entry at index Y in the CEPD is the number of simulations where the population has gone extinct by year Y, divided by the total number of simulations.

The computeCEPD function should take two arguments: the list of results from runSimulation (a list of integers) and the number of years the simulation ran (N). The computeCEPD function has three parts.

a. The first part is to create a list (e.g. CEPD) with N zeros. You can do this by appending N zeros to an empty list in a loop.
b. The second part is to loop over the list of results (extinction years). If the extinction year is less than N, loop from the extinction year to N and add one to each of those entries in the CEPD list.
c. The third part is to loop over the CEPD list and divide each entry by the length of the extinction year results list, which is also the number of simulations. After this, return the CEPD list.

Add a call to the computeCEPD function to the end of your top level function. Then have your top level function print out every 10th entry in the CEPD list (remember how the range function works). Run your penguin simulation 1000 times using the standard parameters. Repeat the process three times and show the results in your writeup. How much variation is there in the results?
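The three parts (a)-(c) above can be sketched directly. This is one straightforward reading of the spec, shown with a tiny made-up result list rather than real simulation output:

```python
def computeCEPD(results, N):
    """Convert a list of extinction years into a cumulative
    extinction probability distribution of length N."""
    # part a: build a list of N zeros
    CEPD = []
    for _ in range(N):
        CEPD.append(0)
    # part b: each extinct run adds 1 from its extinction year onward
    for year in results:
        if year < N:
            for i in range(year, N):
                CEPD[i] += 1
    # part c: divide each count by the number of simulations
    for i in range(len(CEPD)):
        CEPD[i] = CEPD[i] / len(results)
    return CEPD

# e.g. two of three runs went extinct in year 2; one survived all 5 years
print(computeCEPD([2, 2, 5], 5))
```

Because the counts accumulate from the extinction year onward, the CEPD never decreases with Y, which is what "cumulative" means here.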
Required plot 1: Generate three plots of the CEPD for three runs of 1000 simulations for 201 years with the default parameters.

T9. Compare 3, 5, and 7-year El Nino cycles
The final step is to use your simulation to compare the extinction distribution curves for a 3-year El Nino cycle, a 5-year El Nino cycle, and a 7-year cycle. What do your results indicate?

Required plot 2: a plot of the CEPD for the three El Nino cycle options.

Required Element 1: Follow-up Questions
1. What is the difference between the following two code snippets?

# example 1
a = [5, 10, 15, 20]
for i in range(len(a)):
    print(a[i])

# example 2
a = [5, 10, 15, 20]
for x in a:
    print(x)

2. Why do we test code incrementally? Why not write all of the code and test it once?
3. Why do we use random numbers in this population model?
4. What is your favorite wild (undomesticated) animal?

Extensions
Extensions are your opportunity to customize your project, learn something else of interest to you, and improve your grade. The following are some suggested extensions, but you are free to choose your own. Be sure to describe any extensions you complete in your report.
● Use matplotlib to automate the process of creating plots.
● In addition to the required plots, show a plot of the population levels for a single simulation.
● Add other model parameters to the command line. The complex version of this extension is to have flags for your program so that the user can use a flag, like -E, to specify a given parameter. For example, running python penguin.py -E 5 would modify the El Nino frequency but leave all other parameters at their default values.
● Display the CEPD data in a graph using matplotlib, or have your code write out the results as a CSV file and use a graphing program. To write the data, adjust the code in penguin.py so that it prints out the year and CEPD for all years. Redirect the output to a .csv file.
Use Excel or other software to plot the data and post the image on the wiki with a thorough description of its contents.
● Explore variations on the model. For example, what effect on the extinction probability do you see if you adjust the number of years in the El Nino cycle? Does changing the carrying capacity to 3000, for example, modify the 100-year extinction probability for the 5-year El Nino cycle?
● More rigorously test the variation in the simulation outcomes. Use your standard deviation function to compare the standard deviation of the probability that the population goes extinct by a particular time. For example, consider 200 years when the El Nino cycle is 5 years. Running the simulation with numSim=100 will likely yield a larger standard deviation in the values of the CEPD at t=200 (the final year of the simulation) than running the simulation with numSim=1000 will.

Submit your code
Turn in your code by zipping the file and uploading it to Google Classroom. When submitting your code, double check the following.
1. Is your name at the top of each Python file?
2. Does every function have a docstring (''' ''') specifying what it does?
3. Is your Lab 04 folder in your Project 04 folder?
4. Have you checked to make sure you have included all required elements and outputs in your project report?
5. If you have done an extension, have you included this information in your report under the Extension heading? Even if you have not done any extensions, include a section in your report where you state this.
6. Have you acknowledged any help you may have received from classmates, your instructor, the TAs, or outside sources (internet, books, videos, etc.)? If you received no help at all, have you indicated that under the Sources heading of the report?

Write your project report
Reports are not included in the compressed file! Please don't make the graders hunt for your report.
You can write your report in any word processor you like and submit a PDF document in the Google Classroom assignment folder. Or just use a Google Document format. Review the Writeup Guidelines document.

Your intended audience for your report is your peers who are not taking CS classes. From week to week, you can assume your audience has read your prior reports. Your goal should be to explain to your peers what you accomplished in the project and to give them a sense of how you did it. The following is a list and description of the mandatory sections you must include in your report. Do not include the descriptions in your report, but use them as a guide in writing your report.

● Abstract
A summary of the project, in your own words. This should be no more than a few sentences. Give the reader context and identify the key purpose of the assignment. An abstract should define the project's key lecture concepts in your own words for a general, non-CS audience. It should also describe the program's context and output, highlighting a couple of important algorithmic and/or scientific details. Writing an effective abstract is an important skill. Consider the following questions while writing it.
○ Does it describe the CS concepts of the project (e.g. writing well-organized and efficient code)?
○ Does it describe the specific project application (e.g. generating data)?
○ Does it describe your solution and how it was developed (e.g. what code did you write)?
○ Does it describe the results or outputs (e.g. did your code work as expected and what did the results tell you)?
○ Is it concise?
○ Are all of the terms well-defined?
○ Does it read logically and in the proper order?

● Methods
The methods section should describe in clear sentences (without pasting any code) at least one example of your own computational thinking that helped you complete your project.
This could involve illustrating how a key lecture concept was applied to creating an image, how you solved a challenging problem, or explaining an algorithmic feature that is essential to your program as well as why it is so essential. The explanation should be suitable for a general audience who does not know Python.

● Results
Present your results in a clear manner using human-friendly images or graphs labeled with captions and interpreted for a general audience, such as your peers not in the course. Explain, for a general, non-CS audience, what your output means and whether it makes sense.

● Reflection and Follow-up questions
Draw connections between lecture concepts utilized in this project and real-world problems that interest you. How else could these concepts apply to our everyday lives? What are some specific things you had to learn or discover in order to complete the project? Look for a set of short answer questions in this section of the report template.

● Extensions (Required even if you did not do any)
A description of any extensions you undertook, including text output or images demonstrating those extensions. If you added any modules, functions, or other design components, note their structure and the algorithms you used.

● References/Acknowledgements (Required even if there are none)
Identify your collaborators, including TAs and professors. Include in that list anyone whose code you may have seen, such as those of friends who have taken the course in a previous semester. Cite any other sources, imported libraries, or tutorials you used to complete the project.
Course Scheduler Final Project Part 2

This part of the Final Project is a continuation of Part 1 and will add additional functions for both the Admin User and the Student User. This assignment is the last half of the final project and will be submitted as Final Project Part 2.

This phase of the project will implement the following additional Admin functions:

Display Class List of Students
Display all the students who are scheduled or waitlisted for the specified class for the current semester. The scheduled students should be displayed first, then the waitlisted students in waitlist order.

Drop Student
Remove the student from the list of students and remove them from any classes they are scheduled for in all semesters. For each course they were scheduled in, schedule the first student on the waitlist into the course. Display all of the changes that are made because of dropping the student.

Drop Class
Remove the specified course from the current semester. Remove all students scheduled and waitlisted for the class from the Schedule Table. Display all of the changes that are made because of dropping the class.

This phase of the project will implement the following additional Student function:

Drop Class
Remove the specified class from the student's schedule for the current semester. The student may be scheduled for the class or on the waitlist. Schedule the first student waiting for the class into the class. Display all of the changes that are made because of dropping the class.

Testing scenario: A testing scenario will be provided to assist you in testing this application. It will be called Final Project Part 2 Test Script in Canvas.

Database considerations: Your database will be identical to your database from Part 1. All of the tables should be empty when your project is submitted.

GUI Guidelines: The user should be required to enter only unknown data.
Drop-down lists of known data such as Student names, Course Codes, or Semesters should be displayed for the user to select. Combo Boxes should be used for the drop-down lists on the form. When information is requested to be displayed, e.g., for a Display command, all of the requested information must be displayed. When a command is performed, the results of that command should be displayed to the user on the same display without the user needing to use a Display function to see what was done.

Submission Guidelines: Don't forget to submit your zipped PROJECT folder and your zipped DATABASE folder as you did for Part 1. Zip the ENTIRE database folder and the ENTIRE project folder and submit the two zipped files in the assignment under one submission. Note: Your project must be named as it was for Part 1. The database must be named as it was for Part 1. All tables should be empty.

Grading Criteria: In this project I will be looking for good OO design practices, including:
• Use of getter and setter methods for class variables
• Good naming of your classes, methods, and variables
• Correct use of static and non-static methods
• The way you split this project into classes
• All of your updates to the database must be done using SQL statements; do not use ResultSetTableModels to update the database.
• If a SQL statement to update the database needs to contain a variable, then you must use PreparedStatements; do not use concatenation of strings to create the SQL statement.

The Grading Rubric for Final Project Part 2 is posted in Canvas.

Note: Make sure you look at all the videos about this assignment and the Course Scheduler Design Layout in Canvas before starting this assignment.
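The project itself uses Java PreparedStatements, but the principle behind that grading rule is language-independent: bind variables through placeholders instead of concatenating them into the SQL string. As an illustrative sketch in Python's sqlite3 (the table and column names here are invented for the example, not the project's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER, name TEXT)")

student_name = "O'Brien"  # a name with a quote breaks naive concatenation

# Unsafe: string concatenation would build
#   INSERT INTO students VALUES (1, 'O'Brien')
# which is invalid SQL, and in general is open to SQL injection.

# Safe: ? placeholders let the driver handle quoting,
# exactly like binding parameters in a Java PreparedStatement.
conn.execute("INSERT INTO students (id, name) VALUES (?, ?)",
             (1, student_name))

row = conn.execute("SELECT name FROM students WHERE id = ?", (1,)).fetchone()
print(row[0])  # O'Brien
```

The same pattern applies to the project's UPDATE and DELETE statements: any value that comes from user input goes in as a bound parameter, never as part of the SQL text.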
Assignment 7

"…and we aim to show stronger improvements starting next fiscal quarter." This all-hands call is taking forever, you think to yourself while daydreaming about where you should go for your holiday in two weeks. "…will be taking the lead with our next client. We expect great things from them." Wait. Did your boss just say your name? "The client is an IDS vendor whose product uses machine learning models to identify malware. However, they have noticed that their models are frequently evaded and hope that we can find out why." You don't remember being told about this, but at this point, you guess you're used to it. You're just thankful your coworker was taking notes during the meeting and he gave you some tutorials on MLSploit, a framework you are expected to use. And later, you found some information about the attack.

Assignment
The purpose of this assignment is to gain experience with training machine learning (ML) and deep learning (DL) models that classify Windows portable executable (PE) malware into families. Specifically, the models will be given two different datasets: benign PE files and malicious PE files from multiple families. After training a DL model, you will attack that model using an evasion attack called the Mimicry Attack. Then, you will be tasked with improving the models which were attacked. Finally, you will train an ML model using different features and see if the mimicry attack still works. You will write a report about your experiences and observations.

Note: Please use https://cs6264-4.gtisc.gatech.edu instead of https://mlsploit.org. We will make a few more server sites available if needed.

There are 5 tasks and a bonus task you will need to complete for this assignment. You will also need to compile a report (40%) that should contain screenshots of your findings and explanations of why each screenshot shows the results it does.
For example, if your screenshot is comparing the results of how well different models detected the attack in Task 2, then an explanation of why the results differed should be included.

To complete the tasks, you will also need the provided files.

Supplementary Material: Lab 7_Supplementary_Material.pdf, Task 3 Template

Deliverables
Compress the deliverables for each task into a .tar.gz file called [GT Username]_cs6264_lab07.tar.gz with the following directory layout:

Warning: The malware binary we provide you (and the malware produced by MLSploit) is real malware. Do not under any circumstances execute this malware EVER. It is a compiled form of the rbot malware family, and antivirus companies are well aware of its existence (https://github.com/ytisf/theZoo). We have not applied any static obfuscation, so it should be easily detectable by AV companies. You are to use these binaries responsibly by only reading their byte contents (e.g., using tools like https://github.com/erocarrera/pefile).
MUAR 201 Final Assignment
Due: December 14, 11:59 pm

Choose one of the following options for your final assignment.

Option 1: Compose a piece or write a song incorporating the following parameters:
• At least 16 measures long. You may use whatever time signature you want.
  o It must include a melody and accompaniment (or polyphony, if you are so inclined).
  o It must use at least three different chords that we studied this semester (it cannot be atonal).
  o It must use the music notation we learned this semester. If you write a song, make sure to write out the melody.
• Write out roman numerals that explain what the harmonies are doing in your composition. Make sure to indicate the inversions.

Option 2: Analyze the musical excerpt on the next page:
• Use roman numeral analysis, making sure to label the inversions with the appropriate figures.
• In the melody (mostly in the top staff), label all non-harmonic tones with the letter n above each.

[Musical excerpt: from W.A. Mozart, Sonata in C, K. 545, I. Allegro, measures 59-73, with the analysis beginning in C major (CM:)]
Intermediate Macroeconomic Theory II, Fall 2024. Problem Set 2. Due by December 2. 128 points.

1. (20 points) An employee has to choose between two contracts. Assume that the net real interest rate on saving and borrowing equals r > 0. Under contract A, she has gross incomes y and y' in the current and future periods, respectively, and has to pay taxes t and t' in the current and future periods, respectively. Under contract B, an employer offers the employee an option to increase income next year by x·(1+r) units and reduce income this year by x units. Taxes are the same under both contracts.

(a) (10 points) Write down current and future budget constraints and the lifetime budget constraint under the two contracts. Which contract would the employee choose and why? (Hint: you should compare lifetime wealth under the two contracts.)

(b) (10 points) Assume that preferences over current and future consumption are U(c, c') = −(1/2)(c − c̄)² − (1/2)β(c' − c̄)², where c̄ is the bliss consumption level and β = 1/(1+r). Find consumption in the current and future periods and saving under the two contracts. Compare consumption levels and saving under the two contracts.

2. (18 points) Assume a consumer has current-period income y = 120, future-period income y' = 150, current and future taxes t = 60 and t' = 50, respectively, and faces a market real interest rate of r = 0. The consumer's preferences over current and future consumption are U(c, c') = min(c, c'). The consumer faces a credit-market imperfection in that she cannot borrow at all, that is, s ≥ 0.

(a) (6 points) Calculate her optimal c, c', s.

(b) (6 points) Suppose that everything remains unchanged, except that now t = 40 and t' = 70. Calculate the effects on current and future consumption and optimal saving.
(c) (6 points) Calculate the marginal propensity to consume for this consumer following the tax change, that is, the change in current consumption following the change in taxes and the disposable income that it entails. Define Ricardian equivalence and comment on whether it holds in this case.

3. (50 points) A consumer has quadratic preferences and cares about consumption over two periods: U(c0, c1) = −(1/2)(c0 − c̄)² − (1/2)β(c1 − c̄)². Assume that the real interest rate, r, is 1/9, and the time discount factor, β, equals 0.9.

(a) (7 points) The consumer's disposable income in period 0 equals 10, and in period 1 equals 20. There's no uncertainty. Write down the Euler equation and find the optimal consumption levels in periods 0 and 1, and the optimal savings.

(b) Assume now that period 0 income stays at 10, while period 1 income is uncertain. There are two possible states of nature that might realize in period 1: with probability π = 1/3, income will equal 0 in period 1 if state 0 occurs, whereas with probability 1 − π = 2/3, income will equal 30 in period 1 if state 1 occurs. The consumer has to make a decision about her consumption and saving for period 0 before uncertainty is resolved. The consumer now maximizes expected utility EU(c0, c̃1) = −(1/2)(c0 − c̄)² − (1/2)πβ(c1(0) − c̄)² − (1/2)(1 − π)β(c1(1) − c̄)², where c1(k) is consumption in period 1, state k = 0, 1.

(i) (3 points) Write down the Euler equation and find the expected value and variance of income in period 1.
(ii) (6 points) Find the optimal consumption and saving in period 0, and consumption in period 1 in both states of nature.
(iii) (1 point) Does your answer for the optimal consumption in period 0 and savings differ from the answer to (3a), and why does it or why doesn't it?

(c) Assume now that income in period 1 state 0 equals 0 with probability π = 0.99 and income in period 1 state 1 equals 2000 with probability 1 − π = 0.01.
(i) (3 points) Write down the Euler equation and find the expected value and variance of income in period 1.
(ii) (6 points) Find the optimal consumption and saving in period 0, and consumption in period 1 in both states of nature.
(iii) (1 point) Does your answer for the optimal consumption in period 0 and savings differ from the answer to (3b), and why or why not?
Assume now that each period's utility function is u(c) = ln(c). Continue assuming that the real interest rate, r, is 1/9, and the time discount factor, β, equals 0.90.
(d) (7 points) Write down the Euler equation and find the optimal consumption in periods 0 and 1 and the optimal saving in period 0 given the data in (3a).
(e) (8 points) Write down the Euler equation and find the optimal consumption in periods 0 and 1 and the optimal saving in period 0 given the data in (3b). Compare the optimal saving to the value you found in (3b) and argue why they are different (if different at all).
(f) (8 points) Write down the Euler equation and find the optimal consumption in periods 0 and 1 and the optimal saving in period 0 given the data in (3c). Compare the optimal saving to the value you found in (3c) and argue why they are different (if different at all).
4. (18 points) Suppose there is a credit market in which a fraction a of borrowers are good and a fraction 1 − a are bad, with the total number of borrowers equal to Nb. Banks cannot differentiate between good and bad borrowers when making loans (asymmetric information) and loan out l units of goods to each borrower, good or bad. There are Nd depositors/savers in the economy. Banks attract deposits in the amount of L from each of them and promise to pay a net real interest rate of r1 to depositors. Banks charge net interest r2 on loans. Good borrowers are identical and always repay their loans, while a debt collection agency makes bad borrowers pay a fraction 0 ≤ f ≤ (1 + r2) of their loans (in the absence of the agency, they would pay nothing).
The banking sector is competitive, and profit equals zero in equilibrium.
(a) (5 points) Using the bank balance sheet, find the relationship between Nd, Nb, l, and L.
(b) (10 points) Using the assumption of a competitive banking sector, find an expression for the interest rate on loans, r2, made by banks, as a function of a, f, and r1.
(c) (1 point) How will the interest rate change if the debt collection agency makes each borrower pay a higher fraction f of the loans taken?
(d) (2 points) What must f be for the interest rate on loans to equal r1?
5. (22 points) Consider the short-run model of the aggregate economy we studied in class. The aggregate demand (AD) curve Ỹt = ā − b̄m̄(πt − π̄) was derived from the following two equations:
IS curve: Ỹt = ā − b̄(Rt − r̄)
Textbook MP curve: Rt − r̄ = m̄(πt − π̄)
(a) Assume an alternative MP rule:
Alternative MP curve: Rt − r̄ = m̄(πt − π̄) + n̄Ỹt
(i) (6 points) Explain in words what this rule tries to achieve and how it compares to the standard textbook case. Derive the aggregate demand equation (AD') under the alternative rule.
(ii) (6 points) Plot the (AS), textbook (AD), and (AD') curves on the same graph. State which aggregate demand curve is steeper and why. (Hint: we are plotting πt against Ỹt.)
(b) Assume now there is a temporary positive inflationary shock to the economy (ō in the AS curve goes from 0 to a positive number temporarily).
(i) (4 points) Show how the economy responds over time using the AS/AD framework. (You should clearly label the axes and explain everything you want to show on your graph. You may use either AD or AD' to avoid cluttering.)
(ii) (6 points) Show the path of real interest rates set by the central bank under the two alternative monetary policies. Which policy would result in a more prolonged adjustment of the real interest rate to its long-run value r̄? (Your answer should be reflected in your graph.)
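For reference, the textbook AD curve quoted in question 5 follows by substituting the textbook MP curve into the IS curve:

```latex
% Substitute R_t - \bar{r} = \bar{m}(\pi_t - \bar{\pi}) into the IS curve:
\tilde{Y}_t = \bar{a} - \bar{b}\,(R_t - \bar{r})
            = \bar{a} - \bar{b}\,\bar{m}\,(\pi_t - \bar{\pi})
```

The derivation of (AD') under the alternative rule proceeds the same way, which is what part (a)(i) asks you to do.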
Discrete Structures (ITS66204) Group Assignment (Weightage: 30%) Case Study Title: Due Date: Sunday, 01 December 2024 (Week 11), 11:59 PM MLO2: Demonstrate findings and insights derived from applications of discrete structure concepts in real-world and computational science scenarios. CASE STUDY 1: OPTIMIZING A SMART CITY TRANSPORTATION AND LOGISTICS SYSTEM Scenario: You are part of a team hired by a smart city initiative to optimize the city's transportation and logistics system. The goal is to improve traffic flow, reduce congestion, optimize delivery routes, and ensure efficient allocation of resources such as public transport, delivery trucks, and road space. The project will also consider the impact of uncertainty (e.g., traffic delays, accidents) in the system and develop a model to minimize disruptions. You are required to use concepts from Set Theory, Counting Principles, Graph Theory, and Probability to develop solutions that will enhance the efficiency and reliability of the city's transportation and logistics network. Data Sources: For this case study, students will access real-world data from the following platforms: Kaggle: For datasets related to traffic incidents, transportation schedules, and other logistical data. OpenStreetMap: For road network data, including distances, locations, and travel times between different parts of the city. Government Data Portals: For publicly available datasets on traffic volumes, vehicle types, public transport schedules, and traffic conditions. CASE STUDY 1 TASKS Task 1: Traffic Management Using Set Theory The smart city initiative collects traffic data from multiple sensors across the city. These data points are organized based on vehicle types, time of day, road sections, and traffic conditions. Data Source: Traffic data from Kaggle or government data portals, representing vehicle types and congestion levels for various roads at different times of the day. Questions: 1. 
Set Operations: a) Represent the data as sets, where each set contains data points related to different vehicle types (e.g., cars, trucks, buses), time slots, and traffic conditions (e.g., peak hours, non-peak hours). b) Perform set operations such as union and intersection to analyze traffic patterns. 2. Cartesian Product and Relations: a) Define a Cartesian product of road sections and time slots. Use this to establish a relation between different road sections and the severity of traffic congestion. Task 2: Optimizing Public Transport Scheduling (Counting Principles) The city operates a public transport network with multiple bus routes, and the challenge is to develop an optimal schedule that minimizes wait times while ensuring buses are evenly distributed. Data Source: Public transport schedules from Kaggle or government data portals for bus and train services. Questions: 1. Permutations and Combinations: a) Calculate the number of different ways buses can be assigned to routes given specific constraints. 2. Applications of Number Theory: a) Use modular arithmetic to determine optimal time intervals for bus departures on different routes. Note: Other suitable number theory concepts can also be used with proper justification. Task 3: Designing Delivery Routes (Graph Theory) The city's logistics network needs to optimize delivery routes for trucks transporting goods between distribution centers and retailers across different districts. Data Sources: Road network data from OpenStreetMap, including distances and travel times, and historical traffic incident data from Kaggle. Questions: 1. Graph Representation: a) Represent the logistics network as a weighted graph, where nodes represent locations and edge weights represent travel distances. 2. Minimum Spanning Tree (MST): a) Apply graph theory algorithms to find the minimum spanning tree that optimizes the delivery network. Justify the algorithm(s) selected. 
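As a hint for the MST task above, Kruskal's algorithm is one commonly chosen option. A minimal sketch on a toy road network follows; the node names and distances are invented for illustration, not drawn from any of the listed datasets:

```python
# Toy illustration of Kruskal's algorithm for a minimum spanning tree.
def kruskal_mst(nodes, edges):
    """edges: list of (weight, u, v); returns a list of (u, v, weight)."""
    parent = {n: n for n in nodes}

    def find(x):
        # union-find root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # consider edges in order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components: keep it
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Hypothetical network: distribution centre A, retailers B, C, D (km)
edges = [(4, "A", "B"), (2, "A", "C"), (5, "B", "C"), (7, "B", "D"), (3, "C", "D")]
tree = kruskal_mst(["A", "B", "C", "D"], edges)
print(tree)   # 3 edges spanning all 4 nodes, total weight 9
```

Sorting edges once and rejecting any edge that closes a cycle is the whole algorithm; Prim's algorithm is an equally valid choice, and your report should justify whichever you pick.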
Task 4: Probability of Delays and Traffic Disruptions The city wants to assess the probability that deliveries and public transport operations will be completed on time under uncertainties like traffic delays and road closures. Data Sources: Traffic incident data from Kaggle or government portals. Questions: 1. Basic Probability: a) Use real-world data on traffic incidents to model the probability of delays and propose strategies to mitigate their impact. Learning Outcome: Students will demonstrate their ability to integrate set theory, counting principles, graph theory, and probability using real-world data from platforms like Kaggle, OpenStreetMap, and government data portals. They will apply these discrete structure concepts to optimize a smart city’s transportation and logistics system. CASE STUDY 2: OPTIMIZING HEALTHCARE DELIVERY WITH DISCRETE STRUCTURES Scenario: You have been hired by a healthcare company to optimize the delivery of medical supplies and improve scheduling for mobile health units. The company operates several mobile health units that visit various clinics and rural locations, providing healthcare services. They need to ensure optimal routing for the units, efficient scheduling for patient appointments, and reliable supply chain management for medical supplies. Data Sources: Kaggle: Healthcare data, patient appointments, and medical supply chain data. OpenStreetMap: Geographical data for mobile unit routes between health clinics. Government Data Portals: Public healthcare statistics and travel times. Case Study 2 Tasks Task 1: Mobile Unit Scheduling Using Set Theory The company wants to optimize the mobile health unit schedules to ensure they visit the maximum number of clinics in a day while considering different clinic opening hours and vehicle availability. Data Source: Scheduling and clinic availability data from Kaggle or government health portals. Questions: 1. 
Set Operations: a) Define sets for clinic opening hours and vehicle availability. Use set operations (union, intersection, etc.) to determine optimal schedules for the mobile units. Note: Provide proper justification for the selected set theory operation(s). 2. Relations: a) Establish a relation between the mobile units and the clinics and analyze the constraints. Task 2: Counting Principles in Patient Appointment Scheduling The company wants to optimize appointment schedules to minimize patient wait times and reduce congestion at clinics. Data Source: Appointment data and patient visit records from Kaggle. Questions: 1. Combinations and Permutations: a) Determine how many different ways patient appointments can be scheduled based on clinic capacity and time slots. 2. Inclusion-Exclusion Principle: a) Apply the inclusion-exclusion principle to avoid overlapping appointments at clinics where multiple mobile units operate. Task 3: Routing Medical Supplies Using Graph Theory The healthcare company needs to optimize delivery routes for medical supplies to different clinics. Each clinic has varying demand levels and distance from the central warehouse. Data Source: Delivery route data from OpenStreetMap, combined with demand data from Kaggle. Questions: 1. Minimum Spanning Tree: a) Use the minimum spanning tree to optimize the routes for medical supply deliveries, ensuring minimal travel time while covering all clinics. Note: Provide proper justification for the selected algorithm to find the minimum spanning tree. 2. Shortest Path Algorithm: a) Apply Dijkstra’s algorithm to find the fastest route for high priority supplies. Note: Algorithm(s) other than Dijkstra’s may also be used with proper justification provided. Task 4: Probability of Service Interruptions Traffic delays and unpredictable events such as vehicle breakdowns can disrupt the mobile units’ schedules. The company wants to assess the probability of service interruptions. 
Data Source: Vehicle incident data and traffic statistics from Kaggle or government data portals. Questions: 1. Basic Probability: a) Calculate the probability that a mobile unit will be delayed due to traffic or vehicle breakdowns based on historical data. 2. Conditional Probability: a) Use Bayes’ Theorem to determine the likelihood that a mobile unit experienced a breakdown if it arrives significantly late to a clinic. Learning Outcome: Students will apply set theory, counting principles, graph theory, and probability to optimize healthcare delivery, using real-world data from Kaggle, OpenStreetMap, and government data portals. CASE STUDY 3: MANAGING SMART WASTE COLLECTION IN A SMART CITY Scenario: The city is implementing a smart waste management system where garbage trucks are equipped with sensors that track the fill levels of waste containers. The goal is to minimize fuel consumption and travel time for waste collection, optimize the assignment of trucks to routes, and manage unpredictable delays caused by traffic. Data Sources: Kaggle: Waste management data, including waste container fill levels. OpenStreetMap: Route data for waste collection points. Government Data Portals: Public traffic and road incident data. Case Study 3 Tasks Task 1: Waste Collection Route Optimization Using Set Theory The waste management system tracks the status of containers at various collection points. You need to assign routes to trucks based on the fill levels of containers. Data Source: Waste container data from Kaggle or government data portals. Questions: 1. Set Operations: a) Define sets for full, nearly full, and empty containers. Use set operations to determine which containers should be prioritized for collection. Justify the selection of set operations. 2. Cartesian Product: a) Create a Cartesian product between collection points and truck availability and use it to assign optimal routes. 
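The Bayes' Theorem question for the mobile health units (Task 4 above) reduces to one application of the formula. A numeric sketch follows; all probabilities are invented for illustration, not taken from any dataset:

```python
# Hypothetical inputs (illustration only):
p_break = 0.05             # P(breakdown) on a given trip
p_late_given_break = 0.90  # P(significantly late | breakdown)
p_late_given_ok = 0.20     # P(significantly late | no breakdown)

# Law of total probability: P(late)
p_late = p_late_given_break * p_break + p_late_given_ok * (1 - p_break)

# Bayes' Theorem: P(breakdown | late)
p_break_given_late = p_late_given_break * p_break / p_late
print(round(p_break_given_late, 3))   # 0.191
```

With these made-up numbers, a significantly late arrival raises the probability of a breakdown from 5% to about 19%; your own figures should come from the historical incident data.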
Task 2: Assigning Trucks to Routes Using Counting Principles The city needs to assign a fleet of trucks to waste collection routes, ensuring that all full containers are collected efficiently. Data Source: Truck availability and route data from Kaggle. Questions: 1. Permutations and Combinations: a) Calculate the number of ways trucks can be assigned to routes given constraints on truck capacity and route length. 2. Inclusion-Exclusion Principle: a) Use the inclusion-exclusion principle to prevent overlapping truck assignments on routes with high traffic. Task 3: Optimizing Waste Collection Routes Using Graph Theory The city's road network and collection points need to be modeled as a graph to minimize travel time and fuel consumption. Data Source: Route data from OpenStreetMap and traffic incident data from Kaggle. Questions: 1. Graph Representation: a) Model the waste collection points and routes as a graph. Use edge weights to represent travel times or fuel consumption. 2. Shortest Path: a) Apply the shortest path algorithm to determine the most efficient routes for trucks to minimize travel time. Justify the selection of algorithm to find the shortest path. Task 4: Probability of Delays in Waste Collection The city needs to factor in the probability of delays due to traffic congestion or road closures. Data Source: Traffic delay data from Kaggle or government traffic portals. Questions: 1. Basic Probability: a) Calculate the probability that a waste collection truck will encounter delays based on traffic data. 2. Conditional Probability: a) Use conditional probability to model how likely it is that a truck will complete its route on time, given the current traffic conditions. Learning Outcome: Students will optimize a waste collection system by applying set theory, counting principles, graph theory, and probability, using real-world data from Kaggle, OpenStreetMap, and government data portals. 
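The inclusion-exclusion step in Task 2 above is just |A ∪ B| = |A| + |B| − |A ∩ B|. A toy sketch with hypothetical truck assignments (the truck IDs and route sets are invented):

```python
# Hypothetical sets of trucks already committed to two high-traffic routes
route_a = {"T1", "T2", "T3"}
route_b = {"T2", "T4"}

# Inclusion-exclusion: |A or B| = |A| + |B| - |A and B|
n_either = len(route_a) + len(route_b) - len(route_a & route_b)
assert n_either == len(route_a | route_b)   # sanity check against the union
print(n_either)   # 4 trucks are tied up on at least one of the two routes
```

Subtracting the intersection prevents truck T2 from being counted twice, which is exactly the double-assignment the task asks you to avoid.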
CASE STUDY 4: EMERGENCY RESPONSE OPTIMIZATION USING DISCRETE STRUCTURES Scenario: A city's emergency response system needs to be optimized to improve response time and resource allocation. Emergency services (firefighters, police, and ambulances) are dispatched based on real-time data on incidents such as fires, accidents, and medical emergencies. The city wants to minimize response time, optimize resource allocation, and assess the probability of delays due to traffic and road closures. Data Sources: Kaggle: Emergency incident data and resource allocation data. OpenStreetMap: Road network and travel time data for emergency services. Government Data Portals: Traffic data, road closures, and real-time incident reports. Case Study 4 Tasks Task 1: Set Theory for Emergency Resource Allocation The city needs to allocate emergency resources (e.g., police cars, fire trucks, and ambulances) to incidents based on their severity and location. Data Source: Emergency incident data from Kaggle. Questions: 1. Set Operations: a) Define sets for different types of incidents (e.g., fires, accidents, medical emergencies) and available resources (e.g., fire trucks, ambulances). Use set operations with justification to allocate the appropriate resources to incidents. 2. Cartesian Product and Relations: a) Use the Cartesian product between resource availability and incident locations to determine optimal assignments. Task 2: Counting Principles in Resource Scheduling The city wants to optimize the number of available emergency vehicles and personnel based on historical incident data. Data Source: Resource allocation and incident frequency data from Kaggle. Questions: 1. Permutations and Combinations: a) Calculate the different ways emergency personnel can be assigned to shifts to ensure full coverage during peak hours. 2. Inclusion-Exclusion Principle: a) Apply the inclusion-exclusion principle to avoid assigning too many vehicles to overlapping incidents in high-traffic areas. 
Task 3: Routing Emergency Services Using Graph Theory The city’s road network must be modeled as a graph to ensure the fastest response times for emergency services. Data Source: Road network data from OpenStreetMap and traffic incident data from Kaggle. Questions: 1. Minimum Spanning Tree: a) Model the city’s emergency response network using a minimum spanning tree to minimize travel time between emergency stations and incident locations. 2. Shortest Path: a) Use Dijkstra’s algorithm to calculate the shortest path for an emergency vehicle to reach a high priority incident. Note: Algorithm(s) other than Dijkstra’s may also be used with proper justification provided. Task 4: Probability of Delays in Emergency Response Traffic conditions and road closures can impact emergency response times. The city needs to estimate the probability of delays based on historical data. Data Source: Traffic delay data from Kaggle or government traffic portals. Questions: 1. Basic Probability: a) Calculate the probability of delays in emergency response based on current traffic conditions and road closures. 2. Bayesian Probability: a) Use Bayes’ Theorem to update the probability of a delay given that an emergency vehicle has already encountered heavy traffic. Learning Outcome: Students will apply set theory, counting principles, graph theory, and probability to optimize emergency response systems, using data from Kaggle, OpenStreetMap, and government data portals.
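Dijkstra's algorithm, named in Task 3 above, can be sketched in a few lines with Python's heapq. The road network below is invented for illustration (travel times in minutes between a station and incident locations):

```python
import heapq

def dijkstra(graph, start):
    """Shortest travel time from start to every reachable node.
    graph: {node: [(neighbour, weight), ...]}"""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # found a shorter route to v
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical network: one station, incident sites A, B, C
roads = {
    "station": [("A", 4), ("B", 1)],
    "A": [("C", 1)],
    "B": [("A", 2), ("C", 6)],
    "C": [],
}
print(dijkstra(roads, "station"))   # {'station': 0, 'A': 3, 'B': 1, 'C': 4}
```

Note that the direct station-to-A road (4 min) loses to the detour through B (1 + 2 = 3 min), which is the kind of routing decision the algorithm automates.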
Secondary Data Analysis Lab Exercise
Before you begin these labs, you will need to have arranged access to SPSS, either by going to a computer lab on campus, having a computer with SPSS already installed, or accessing Apporto to use SPSS in a virtual interface. See the course Canvas page for the document on accessing Apporto.
Secondary Data Analysis Lab Exercise: Descriptive Statistics and Simple Graphs
In this lab, you will learn how to import a dataset into SPSS and generate descriptive statistics. These will include frequencies, percentages, and measures of central tendency (mean, median, and mode). Make sure you have the GSS18.sav file open. This first part is a "practice" where you are walked through how to use SPSS.
Finding Frequencies, Percentages, Measures of Central Tendency, Measures of Dispersion
1. Navigate through the menus - Analyze > Descriptive Statistics > Frequencies
2. This should open a new "Frequencies" window where you can select your variables
a. For this lab, let's work with: WRKSTAT (Labor Force Status), DEGREE, AGE
PRO TIP! The dialogue boxes default to show the variable labels instead of variable names. To change this, navigate through the menus - Edit > Options. On the GENERAL tab find where it says Variable Lists. Click "Display names" and "Alphabetical." Click OK. Alternatively, in the "Frequencies" dialogue box and similar windows, if you "right click" on your mouse or trackpad, you can select "Display Variable Names" with the same effect. Do this now.
3. To select your variables you must move them from the left box into the right box (either double-click on the variable name or select it and use the arrow between them)
4. After selecting your variables, make sure that the "Display Frequency Tables" option at the bottom of the window is checked.
a. Let's explore some of the other frequency functions in SPSS
i. Click the "Statistics" button in the "Frequencies" window 1. 
A new window should open and you will see a lot of different options. Let's pay particular attention to the "Central Tendency" box. Let's check: Mean, Median, and Mode.
2. Click "Continue" to return to the "Frequencies" window
ii. Click the "Format" button in the "Frequencies" window
1. This function allows you to choose different ways to order your frequency table. For now, we can leave these options alone, but it's good to know these options exist for future reference.
2. Click "Continue" to return to the "Frequencies" window
iii. Now back at the "Frequencies" window, click "OK"
5. SPSS will now open an "Output" window and will generate the requested tables and statistics. Note that percentage and cumulative percentage are both automatically included in the frequency table.
NOTE: Percentage is the percent of the total N encompassed by any given value. Cumulative percentage is a cumulative summation of all percentages as you move from one value to the next higher value. Once the highest value is reached, it should sum to 100%. This is useful for knowing what percentage falls below a particular value. Now it is your job to interpret this output!!
6. The output only provides numerical values. In order to make sense of the information, you will need to reference back to the variable information to see what labels match with what numerical values.
7. You will note in your output that SPSS has provided you with all the requested statistics for each of your variables. But are they all meaningful or useful? NO! YOU MUST BE SMARTER THAN THE COMPUTER!!! Remember, each measure of central tendency only works with variables with certain levels of measurement. While SPSS does have a way to mark the level of measurement for variables, they may not always be labelled correctly, and regardless of the label, SPSS will still run calculations on any numerical value.
a. WRKSTAT is a categorical/nominal variable. 
At this level of measurement, frequencies and percentages are meaningful, as is the mode. All other calculated statistics for this variable in the output are meaningless. You cannot have a mean or median for a categorical/nominal level variable...but SPSS will give it to you if you ask for it, because everything is represented by a number. Be careful!!
b. DEGREE is an ordinal level variable. At this level of measurement, frequencies, mode, and median are meaningful, but the mean here doesn't really work. You'll occasionally see a mean calculated for some ordinal variables, but that is not universal.
c. AGE is a ratio level variable. For this level of measurement, all statistics are possible and technically meaningful. However, it is likely that in an interval/ratio level variable, values will not repeat themselves very often, and so frequencies and mode are not particularly useful statistics.
Creating Graphs
In the video, I showed you how to also get simple graphs with the descriptives, but if you want to just go in and make standalone graphs, the process is pretty easy.
8. To do graphs, at the top you select Graphs and then select Chart Builder. Click OK on the pop-up window.
9. At the bottom left, you'll see "Choose from:"
a. Pick the type of graph you want. Pie/Polar or Bar are classics for nominal/ordinal data. If you're using an interval/ratio variable, Histogram is great.
b. You may see in the box that there are many types of each graph. Click and drag the first one to the spot that tells you to drag it there.
10. Now find your variable in the box on the top left and drag it into that blue rectangle below the graph shape it's showing.
a. Two important notes: for pie or bar charts, on the left you'll see the word "Count" in a spot that has a downward facing triangle. Click that downward facing triangle and switch it to Percentage (?)
11. Now click OK
Final step: exporting your output so you can have the tables/graphs in a Word document so you can use them.
12. 
On the Output window (where you can see the tables and graphs), click File and then Export.
13. In the middle you'll see a section that says File Name and on the far left a blue box that says Browse. Click Browse.
14. In the pop-up, at the top you'll see "Look in:" and should change that to Desktop.
15. Give it whatever name you want. I recommend leaving the file type as a Word Document, as any word processor can open it. Click Save. And then click OK.
You've now exported those tables and graphs into a usable file type to incorporate into papers and presentations.
Now it's your turn to pick some variables. You can just scroll through the dataset or pick some by searching around in the GSS Data Explorer that's linked on the module. Once we have finished the steps with the sample variables, you need to pick different variables (one for each of the three levels of measurement) on your own and then do the following:
1. What variables did you choose? What is their level of measurement?
2. For the nominal or ordinal variable(s):
a. A frequency table for each variable
b. The meaningful measure of central tendency (nominal: mode; ordinal: mode and median)
3. For the interval or ratio variable(s):
a. The summary table that has all measures of central tendency: mean, median, and mode
4. A useful graph for each item
a. So you will put three graphs here!
CS 3233-01 Homework #6 Fall 2024 Due: November 22
Assignment
Write a C++ program that uses OpenGL to draw an object read from an OBJ file. You may ignore the content of the file other than these entries:
• v Vertex coordinates
• vt Texture coordinates
• vn Normal vectors
• f Faces
Your program should draw the object(s) from the file in 3D with lighting enabled. You may use one or more light sources of your own choosing. Also, animate your drawing so that the object rotates slowly around the vertical axis, enabling the viewer to see it from all sides. The rotation must be evenly paced and of moderate speed. You may ignore the texture coordinate entries if you wish. I will not provide a texture image when I test your program. Prompt the user to enter the name of the OBJ file at runtime. I will test your program with an OBJ file of my own choosing.
Submission Requirements
To receive credit for the assignment, your submission must fulfill these requirements:
• It must include all the source code necessary for your application, including code provided by the professor such as Eck's camera API and Barrett's image loader.
• Each source code file you write must begin with a box comment identifying you, the course, the project, and the due date.
• Each function must be preceded by a box comment that includes the name of the function and a brief abstract of what the function does.
• It must include a makefile that builds your application on the SoC server when I enter the one-word command make on the Linux command line. There must be no build-time errors. Building must produce an executable application called a.out.
• The makefile must not auto-run the application after it finishes building.
• The makefile must include a clean entry.
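Although the submission must be C++, the entry-parsing logic for the four record types above is easy to prototype first. A rough Python sketch of that logic (handling the standard `v`, `v/vt`, `v//vn`, and `v/vt/vn` index forms of `f` entries; this is an illustration of the file format only, not the required implementation):

```python
def parse_obj(lines):
    """Collect the v / vt / vn / f entries the assignment cares about."""
    verts, texs, norms, faces = [], [], [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "vt":
            texs.append(tuple(float(x) for x in parts[1:3]))
        elif parts[0] == "vn":
            norms.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # OBJ face indices are 1-based; missing fields ("v//vn") stay None
            face = [tuple(int(i) - 1 if i else None for i in tok.split("/"))
                    for tok in parts[1:]]
            faces.append(face)
    return verts, texs, norms, faces

sample = ["v 0 0 0", "v 1 0 0", "v 0 1 0", "vn 0 0 1", "f 1//1 2//1 3//1"]
v, t, n, f = parse_obj(sample)
print(len(v), len(n), f[0][0])   # 3 1 (0, None, 0)
```

Converting to 0-based indices at parse time, as above, saves off-by-one bugs later when building your OpenGL vertex arrays.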
Examination Session and Year. Module Code: 40606040, Statistical Methods for Machine Learning I (Machine Learning and Stat). Paper Number. External Examiner. Head of School. Internal Examiner(s).
Instructions to Candidates:
• Paste your R code into the Word document provided.
• Submit your final Word document for Canvas submission. Note: if you do not manage to answer a question item, provide the R code you would have used, or a comment on the answer you would expect for that question, as relevant.
• Time allowed: 3 hours
CSC320 — Introduction to Visual Computing, Spring 2024
Assignment 4: The PatchMatch Algorithm
Posted: Friday, November 15, 2024
Due: 11:59pm, Monday, Dec 2, 2024 (this is NOT the usual Thursday due date)
Late policy: 15% marks deduction per 24hrs, submission not accepted if > 5 days late
In this assignment you will implement the PatchMatch algorithm. This algorithm is described in a paper by Barnes et al. and will be discussed in tutorial this week. Your specific task is to complete the technique's implementation in the starter code. The starter code is based on OpenCV and is supposed to be executed from the command line, not via a Kivy GUI.
Figure 1: Example output from Barnes et al. The top left image is reconstructed entirely from image patches from the bottom left image. From left to right we can see the algorithm's progression from random initialization to near-perfect reconstruction.
Goals: The goals of the assignment are to (1) get you familiar with reading and understanding a research paper and (partially) implementing the technique it describes; (2) learn how to implement a more advanced algorithm for efficient image processing; and (3) understand how the inefficiencies you experienced in matching patches in the previous assignment can be overcome with a randomized technique. This algorithm has since become the workhorse of a number of image manipulation operations and was a key component of Adobe's Content-Aware Fill feature.
Important: As in the previous assignments, you are advised to start immediately by reading the paper (see below). The next step is to run the starter code, and compare it to the output of the reference implementation. Unlike Assignment 3, where you implemented a couple of small functions called by an already-implemented algorithmic main loop, here your task will be to implement the algorithm itself. This requires a much more detailed understanding of the algorithm, as well as of the pitfalls that can affect correctness, efficiency, etc. 
Expect to spend a fair amount of time implementing after you've understood what you have to do.
Testing your implementation: Unlike Assignment 3, it is possible to develop and run your code remotely from Teaching Lab machines.
Starter code & the reference solution
Use the following sequence of commands to unpack the starter code and to display its input arguments:
cd ~
tar xvfz patchmatch.tar.gz
rm patchmatch.tar.gz
cd patchmatch/code
python viscomp.py --help
See Sections 2 and 5 for details on how the code is structured and for guidelines about how to navigate it. Its structure should be familiar by now as it is similar to previous assignments. This code was last tested and compiled on the teach.cs server using python=3.11. In addition to the starter code, I am providing the output of the reference solution for a pair of test images, along with input parameters and timings. I am also providing a fully-featured reference solution in an encrypted format (with sourcedefender) to see how your own implementation should behave, and to make sure that your implementation produces the correct output. That being said, you should not expect your implementation to produce exactly the same output as the reference solution, as tiny differences in implementation might lead to slightly different results. This is not a concern, however, and the TAs will be looking at your code as well as its output to make sure what you are doing is reasonable. Please read the CHECKLIST.txt carefully. You will need to complete this form prior to submission, and this will be the first file markers look at when grading your assignment.
The PatchMatch Algorithm (100 Marks)
The technique is described in full detail in the following paper (available here): C. Barnes, E. Shechtman, A. Finkelstein and D. B. Goldman, "PatchMatch: A Randomized Algorithm for Structural Image Editing," Proc. SIGGRAPH 2009. 
You should read Section 1 of the paper right away to get a general idea of the principles behind the method. The problem it tries to address should be familiar to you, given that the algorithm you worked with in A3 relied on a "nearest-neighbor search" procedure for identifying similar patches for inpainting. In fact, Criminisi et al.'s inpainting algorithm is cited in Section 2 of the paper as a motivation for PatchMatch. You should read Section 2 as well, mainly for context and background.

The algorithm you are asked to implement is described in full in Section 3. The algorithm's initialization, described in Section 3.1, has already been implemented. Your task is to implement the algorithm's basic iteration as described in Section 3.2 up to, but not including, paragraph Halting criteria. The starter code uses the terminology of Eq. (1) to make it easier for you to follow along.

For those of you who are more theoretically minded and/or have an interest in computer science theory, it is worth reading Section 3.3. This section is not required for implementation, but it does help explain why the algorithm works as well as it does. Sections 4 and 5 of the paper describe more advanced editing tools that use PatchMatch as a key component. They are not required for your implementation, and Section 4 in particular requires a fair amount of background that you currently don't have. Read these sections if you are interested in finding out all the cool things that you can do with PatchMatch.

Part 1. Programming Component (90 Marks)
You need to implement the two functions detailed below. A skeleton of both is included in file patchmatch/code/algorithm.py. This file is where your entire implementation will reside. In addition to these functions, you will need to copy a few lines of code from your A3 implementation for image reading and writing that are not provided in the starter code. See the file patchmatch/code/README 1st.txt for details.

Part 1.1.
The propagation_and_random_search() function (65 Marks)
This function takes as input a source image, a target image, and a nearest-neighbor field f that assigns to each patch in the source the best-matching patch in the target. This field is initially quite poor, i.e., the target patch it assigns to each source patch is generally not the most similar patch in the target. The goal of the function is to return a new nearest-neighbor field that improves these patch-to-patch correspondences. The function accepts a number of additional parameters that control the algorithm's behavior. Details about them can be found in the starter code and in the paper itself.

As explained in the paper, the algorithm involves two interleaved procedures, one called random search (50 marks) and the other called propagation (15 marks). You must implement both, within the same function. You are welcome to use helper functions in your implementation, but this is not necessary (the reference implementation does not). The starter code provides two flags that allow you to disable propagation or random search in this function. As you develop your implementation, you can use these flags for debugging purposes, to isolate problems related to one or the other procedure.

Part 1.2. The reconstruct_source_from_target() function (15 Marks)
This function re-creates the source image by copying pixels from the target image, as prescribed by the supplied nearest-neighbor field. If this field is of high quality, then the copied pixels will be almost identical to those of the source; if not, the reconstructed source image will contain artifacts. Thus, comparing the reconstructed source to the original source gives you an idea of how good a nearest-neighbor field is. Details of the function's input and output parameters are in the starter code.

Part 1.3. Efficiency considerations (10 Marks)
You should pay attention to the efficiency of the code you write.
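To make the structure of the two functions concrete, here is a minimal NumPy sketch under assumed signatures; the starter code's actual parameter names and nearest-neighbor-field representation may differ. The field f below stores one (x, y) offset per patch, following the terminology of Eq. (1) in the paper, and patch_dist is a hypothetical SSD helper, not a starter-code function. Note that the double pixel loop in the first function is exactly the kind of cost this part asks you to minimize:

```python
import numpy as np

def patch_dist(src, tgt, x, y, ox, oy, w):
    # SSD between the w-by-w source patch at (x, y) and the target patch
    # at (x + ox, y + oy); infinite if the target patch is out of bounds.
    tx, ty = x + ox, y + oy
    if not (0 <= tx <= tgt.shape[1] - w and 0 <= ty <= tgt.shape[0] - w):
        return np.inf
    a = src[y:y + w, x:x + w].astype(float)
    b = tgt[ty:ty + w, tx:tx + w].astype(float)
    return ((a - b) ** 2).sum()

def propagation_and_random_search(src, tgt, f, D, w=7, alpha=0.5, rng=None):
    # One forward scan-order pass of the basic iteration (Section 3.2).
    if rng is None:
        rng = np.random.default_rng(0)
    rows, cols = f.shape[:2]
    w_max = max(tgt.shape[:2])          # initial search radius
    for y in range(rows):
        for x in range(cols):
            # Propagation: try the offsets of the left and upper neighbours.
            for nx, ny in ((x - 1, y), (x, y - 1)):
                if 0 <= nx < cols and 0 <= ny < rows:
                    ox, oy = f[ny, nx]
                    d = patch_dist(src, tgt, x, y, ox, oy, w)
                    if d < D[y, x]:
                        f[y, x], D[y, x] = (ox, oy), d
            # Random search: sample around the current best offset within a
            # window that shrinks by a factor alpha each step (Eq. 1).
            radius = w_max
            while radius >= 1:
                ox = int(f[y, x][0] + radius * rng.uniform(-1, 1))
                oy = int(f[y, x][1] + radius * rng.uniform(-1, 1))
                d = patch_dist(src, tgt, x, y, ox, oy, w)
                if d < D[y, x]:
                    f[y, x], D[y, x] = (ox, oy), d
                radius *= alpha
    return f, D

def reconstruct_source_from_target(tgt, f):
    # Loop-free reconstruction: every source position copies the target
    # pixel its offset points to (clipped to stay inside the target).
    rows, cols = f.shape[:2]
    ys, xs = np.mgrid[0:rows, 0:cols]
    ty = np.clip(ys + f[..., 1], 0, tgt.shape[0] - 1)
    tx = np.clip(xs + f[..., 0], 0, tgt.shape[1] - 1)
    return tgt[ty, tx]
```

A full implementation also alternates scan direction on alternate iterations (examining right/lower neighbours on reverse passes), which the sketch omits.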
Explicit loops cannot be completely avoided in propagation_and_random_search(), but their use should be kept to an absolute minimum. This is necessary to keep the running time of your code at a reasonable level: the input images you are supplied are quite large, and it will take a very long time to process them if you use too many loops. No explicit loops are needed in reconstruct_source_from_target(). Solutions that are no more than 50% slower than the reference implementation will receive full marks for efficiency. Less efficient implementations will have some of those points deducted depending on how much they deviate from this baseline.

Part 2. Report and Experimental evaluation (10 Marks)
Your task here is to put PatchMatch to the test by conducting your own experiments. Try it on a variety of pairs of photos; on two adjacent frames of video; on "stereo" image pairs taken by capturing a photo of a static scene and then adjusting your viewpoint slightly (e.g., a few centimeters) to capture a second photo from a different point of view. Basically, run it on enough image pairs to understand when it works well and when it doesn't. At the very least, you must show the results of running the algorithm on the supplied source/target image pairs, using command-line arguments like those specified in Section 3. Your report should highlight your implementation's results (nearest neighbor field, reconstructed source, etc.) and discuss how well your algorithm performs, and the conditions in which it doesn't work well. The more solid evidence (i.e., results) you can include in your report PDF to back up your arguments and explanations, the more likely you are to get full marks on your report. Place your report in file patchmatch/report/report.pdf. You may use any word processing tool to create it (Word, LaTeX, Powerpoint, HTML, etc.) but the report you turn in must be in PDF format.
What to turn in: You will be submitting the completed CHECKLIST form, your code, your written report and images. Use the following sequence of commands to pack your code and results:

cd patchmatch
tar cvfz assign4.tar.gz code results report/report.pdf CHECKLIST.txt

Upload the gzipped tarfile to MarkUs.

Important: After uploading the tarfile to MarkUs, download it from MarkUs, unpack it and verify that your code and all other assignment components are there! Students in previous instances of the course have accidentally submitted the starter tarfile instead of their own code and results, leading to major issues with marking and wasting a lot of time for all involved. Because of this, there will be no accommodation for such errors this term: if you submit the starter code instead of your own code and/or submit no results, you will receive a zero on those parts of the assignment; if you discover this error a day later and want to re-submit, the standard late submission policy will apply. As stated on the course website, the course's late policy includes a brief grace period to allow you to quickly resubmit your tarfile a few minutes after the submission deadline without any penalty if you discover omissions in your submitted tarfile.
Assignment Information
Module Name: Project Management
Module Code: 6068CEM
Assignment Title: Individual Coursework
Assignment Due: 05 December 2024
Assignment Credit: 10
Word Count (or equivalent): 1 500 words
Assignment Type: Coursework
Percentage Grade (Applied Core Assessment): You will be provided with an overall grade between 0% and 100%.

Assignment Task
You are required to undertake a comprehensive analysis of the project management processes, techniques, and tools undertaken in Coursework 1. Additionally, you must demonstrate effective teamwork skills throughout the process. Your analysis should encompass both theoretical concepts and practical applications. The assignment consists of two main components:

Part 1: Critical Examination of Project Management Processes, Techniques, and Tools
In this section, you are expected to critically examine a selection of project management processes, techniques, and tools. Your analysis should cover the entire project lifecycle – from initiation to closure. You must address the following points:
1. Critically analyse the project management processes (e.g., Waterfall, Agile, Scrum) your group undertook, with a focus on their strengths, weaknesses, and suitability for specific project types. Looking back, was this the best methodology to follow, or would one of the other methodologies have been more appropriate?
2. Examine at least three project management techniques (e.g., Work Breakdown Structure, Critical Path Method, Risk Assessment) and evaluate their effectiveness in ensuring the project's success.
3. Evaluate at least two project management tools (e.g., Gantt charts, Microsoft Planner, Project Management Software) and discuss their contribution to project planning, tracking, and communication.

Part 2: Teamwork Reflection and Planning
Working effectively as a team is essential in project management. In this section, you are required to reflect on your teamwork experience throughout Coursework 1.
Address the following points:
1. Describe the roles and responsibilities assigned to each team member and how these were determined. Show evidence of how you contributed to the project planning process and how you took the initiative in guiding discussions and decisions.
2. Reflect on the team's dynamics, communication strategies, and conflict resolution approaches.
3. Analyse how self-direction, initiative, and planning skills were applied by team members to ensure efficient project management.
4. Discuss any challenges faced during the teamwork process and how they were overcome.

Submission Guidelines:
• Your assignment should be submitted as a single PDF document.
• Part 1 and Part 2 of the assignment should be clearly labeled.
• The assignment should not exceed 1 500 words (excluding references).

This assessment is in the Amber category for use of AI (AI assisted). This means that you are allowed to use AI for the purpose of planning and management of the essay, summarising and consolidating notes and sources as part of background research, translating small sections of your written or recorded work into another language, presenting data in formats such as graphs, charts, tables, and other similar activities. If you do so, you should state, at the end of your work, which AI tool was used and how it was used in this assignment.

Marking and Feedback
How will my assignment be marked? Your assignment will be marked by the module team.
How will I receive my grades and feedback? Provisional marks will be released once internally moderated. Feedback will be provided by the module team alongside grades release. Your provisional marks and feedback should be available within 2 weeks (10 working days).
What will I be marked against? Details of the marking criteria for this task can be found at the bottom of this assignment brief.

Assessed Module Learning Outcomes
The Learning Outcomes for this module align to the marking criteria which can be found at the end of this brief.
Make sure you understand the marking criteria so that you can successfully complete the assessment task. The following module learning outcomes are assessed in this task:
1. Demonstrate an understanding of identification of change in how organisations work, and facilitate that change using projects through the project life cycle.
2. Critically examine different project management processes, techniques and tools for effective management of a project through its life cycle.
3. Apply relevant knowledge, skills and creativity for appropriate governance in project management.
4. Work effectively as part of a team, applying self-direction, initiative, team working and planning skills in the context of group project management.
CS 0447 Computer Organization and Assembly Language
Midterm Project - Connect 4

Introduction
In this project, you will implement a two-player game in MIPS assembly: Connect 4, a.k.a. Four-in-a-line. The game consists of a board representing the play area. Two players face each other and drop tokens, one at a time, until one of them manages to place four in a line!

Start early
The deadline will approach fast! Life happens, sickness happens, so if you start early, you can minimize the impact. Do a little bit every day! 1 hour every day! 30 minutes every day! SOMETHING!

Game mechanics
The game works like this:
1. Initially, the players have a blank board:
 0 1 2 3 4 5 6
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
2. Player 1 takes the first turn:
 0 1 2 3 4 5 6
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
3. When a valid number is input, a token is placed in that column, at the first (lowest) free position:
 0 1 2 3 4 5 6
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|*|_|_|_|_|_|
4. Next, it's player 2's turn:
Player 2, it's your turn. Select a column to play. Must be between 0 and 6
5. The game ends when one of the players manages to place 4 tokens in a horizontal, vertical, or diagonal line:
 0 1 2 3 4 5 6
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|*|_|_|_|_|_|_|
|_|*|_|_|_|_|_|
|_|_|*|_|_|_|_|
|_|_|_|*|_|_|_|
Congratulations player 1. You won!
 0 1 2 3 4 5 6
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|*|_|_|_|
|_|_|*|_|_|_|_|
|_|*|_|_|_|_|_|
|*|_|_|_|_|_|_|
Congratulations player 1. You won!
 0 1 2 3 4 5 6
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|_|+|+|+|+|_|_|
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
Congratulations player 2. You won!
 0 1 2 3 4 5 6
|_|_|_|_|_|_|_|
|_|+|_|_|_|_|_|
|_|+|_|_|_|_|_|
|_|+|_|_|_|_|_|
|_|+|_|_|_|_|_|
|_|_|_|_|_|_|_|
Congratulations player 2. You won!
Your assignment

Plan
Plan your implementation, including the data structures you are planning to use, user inputs that may be invalid and that you need to account for, etc.
1. Think of which functions you will need to implement, and what they will do.
1) Start from the main function and split your program into multiple steps.
2) This plan is not going to be enforced, but it should be thought through.
2. Think of possible invalid user inputs, and how they will impact the program negatively.
1) Board bounds.
2) Filling a column to the top.

Implement
Implement the MIPS assembly code that executes the game described above. Your program will manage all interactions with the user and the board:
1. It begins by displaying a welcome message and an explanation of what the user should do. How is the game played?
2. Print the empty board.
3. Then, the game begins, and your program will:
1) Ask player 1 to play:
- Ask and validate user input (MARS will crash if the user gives no input or a letter; this is fine!)
- Don't allow the user to select a non-existing column.
- Don't allow the user to select a full column.
- "Drop" the token into the board at the requested column.
- Check for a winning condition.
2) Ask player 2 to play:
- Ask and validate user input (MARS will crash if the user gives no input or a letter; this is fine!)
- Don't allow the user to select a non-existing column!
- Don't allow the user to select a full column!
- "Drop" the token into the board at the requested column.
- Check for a winning condition.
3) Repeat until one of the players wins or the board is full.
4. In the end, print a message letting the winning player know the game has ended.

The welcome message
Bear in mind that you can do your own thing, as long as it fits the project! So use the welcome message to explain to the user exactly how they should play the game. Explain the rules, and how a player can win.

User input
Your program needs to ask the user in which column he/she wants to drop a token.
If the user inputs an invalid value, you inform the user of that and ask again. You must validate the user input! The exact way you implement this is up to you. You must ask the user to input something to select the column.

Representing the board
Feel free to implement all data structures that you need. However, it is suggested that you use a matrix. You can implement your board as a matrix of words to keep the status of the game. Here is one suggestion:

board: .word 0, 0, 0, 0, 0, 0, 0
       .word 0, 0, 0, 0, 0, 0, 0
       .word 0, 0, 0, 0, 0, 0, 0
       .word 0, 0, 0, 0, 0, 0, 0
       .word 0, 0, 0, 0, 0, 0, 0
       .word 0, 0, 0, 0, 0, 0, 0

Note: The board refers only to the contents of the board, not the frame around the tiles! The frame is always the same, so it doesn't need to be stored anywhere! If you include the frame, it'll make your life harder!

For the status of each tile, it is suggested to create a matrix of 0s (empty), 1s (player 1 tokens), and 2s (player 2 tokens). When you want to print each tile, you simply need to check the status matrix to know which symbol to print:

if (board[i][j] == 0) { print('_') }
else if (board[i][j] == 1) { print('*') }
else { print('+') }

Check the example below. The board - this is what you draw:
 0 1 2 3 4 5 6
|_|_|_|_|_|_|_|
|_|_|_|_|_|_|_|
|*|_|_|_|_|_|_|
|*|*|_|_|_|_|_|
|+|*|*|_|_|_|_|
|+|+|+|*|+|_|_|
The matrix representing the board - contains 1s for player 1 and 2s for player 2:
0, 0, 0, 0, 0, 0, 0
0, 0, 0, 0, 0, 0, 0
1, 0, 0, 0, 0, 0, 0
1, 1, 0, 0, 0, 0, 0
2, 1, 1, 0, 0, 0, 0
2, 2, 2, 1, 2, 0, 0
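Before writing any assembly, it can help to prototype the two core board routines in a high-level language and then translate them mechanically into MIPS. Here is a minimal Python sketch; the names drop and wins are illustrative, not required by the project:

```python
ROWS, COLS = 6, 7          # 6-row, 7-column board stored as 0/1/2 values
EMPTY, P1, P2 = 0, 1, 2

def drop(board, col, player):
    # Place player's token in col at the lowest free row.
    # Returns the row used, or -1 if the column is full (invalid move).
    for row in range(ROWS - 1, -1, -1):      # bottom row has index ROWS-1
        if board[row][col] == EMPTY:
            board[row][col] = player
            return row
    return -1

def wins(board, player):
    # True if player has 4 tokens in a horizontal, vertical,
    # or diagonal line anywhere on the board.
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                if all(0 <= r + i * dr < ROWS and 0 <= c + i * dc < COLS
                       and board[r + i * dr][c + i * dc] == player
                       for i in range(4)):
                    return True
    return False
```

In MIPS, the access board[r][c] becomes a load or store at address board + 4*(r*COLS + c), since each tile is one word.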
Production Analysis Papers—20 points each
Instructions—Read Carefully!
Write an analysis that responds to the prompts below with your original analysis and insight about the production. We want to know what you thought about what you experienced and saw on the stage, not what you liked or didn't like. Your analysis must express a point of view about the production and must be at least 2 full pages but no more than 3 pages long.

Assume your reader viewed the production. Provide context for your discussion and analysis without retelling the entire plot of the play; a brief synopsis of the story might help you set up your arguments. Pay attention to the production's artistic credits. We recommend that you take notes while watching the show so that you can properly reference artists' names in your paper. Failure to credit artists properly will result in a loss of points. The library link may also contain a note from the director or dramaturg, information that you might find helpful on this assignment.

I. Prompts
• A Design Element. Examine the use of ONE design element in the production. (A design element includes scenery, costumes, props, lighting, sound, or media design.) Properly credit the artist whose work you are critiquing. How did these specific design choices create meaning and contribute to the world of the play and the storytelling? Use specific moments from the production to support your aesthetic interpretation and analysis. How did these choices reinforce the setting of the play, or the status, class, gender, or relationships of a character?
• Choice of a Staging Moment or a Performance or another Point of Interest. You are free to choose another aspect of the production to analyze; for example, a specific director's choice, an actor's performance, a piece of choreography, or something else that captured your interest. How did these choices create meaning? What does this aspect of the production suggest about the production as a whole?
• Issues of Diversity. Examine the politics of the play by choosing an issue of diversity (such as race, ethnicity, gender, sexuality, class, religion, disability) and evaluating its depiction. How does the play's depiction reinforce or challenge conventional (traditional) understandings of the issue under examination? What does this particular issue under examination suggest about the play as a whole?

Do not let the above prompts limit your response; rather, use them to inspire, expand, and deepen your thinking about the play and its production.

II. Format
Follow MLA style guidelines: see the MLA Sample Paper. Your paper must contain your name, the course number, your recitation instructor's name, and the date, formatted per the example provided. MLA guidelines can be found at http://owl.english.purdue.edu/owl/resource/747/01/.
• Give your paper a creative title.
• Italicize play titles. They do not belong in "quotation marks."
• When talking about a production that you have seen, use the past tense.
• You should name each artist whose work you are discussing.
• This is a scholarly paper. Use a formal voice. Avoid slang. Writing should be polished—your grade for presentation includes grammar, syntax, and spelling.
• Proofread. Your writing should be free of typos, misspellings, and other mistakes.
• If you use any sources, be sure to include a citation. This applies to the dramaturg's program note. Here is the general format for citing a show's program:
Program Notes. Name of Play by Playwright. Producing Company. Location. Date you saw the show.

SUBMISSION DIRECTIONS
• Upload your production analysis paper and include the image file of your signed ticket stub or program at the end of your paper (in Word or PDF format only) to the correct Carmen Production Analysis assignment. We reserve the right to refuse late or emailed papers.
• Failure to turn in your paper in a readable format will result in a loss of points.
NOTE: This is an individual, not a collaborative, assignment. The essay you turn in should be your own work. In this course, we use the Turnitin originality check on all written work. Avoid plagiarism!

III. Other Tips
Be honest! You do not have to express any particular opinion just to try to please your instructor. But remember, you must support your opinions using specific examples and thoughtful analysis about the production. It is not enough just to have an opinion, positive or negative. You must tell your reader why. Talk about what really interested or excited you about the production!

Grading Rubric:
5 pts: A Design Element
5 pts: Point of Interest
5 pts: Issues of Diversity
3 pts: Presentation (organization, grammar, etc.)
2 pts: Proof of Attendance and proper credit to production crew and actors

IV. Grading Rubric (20 points)
Rating scale: 5 (Paper goes above and beyond the assignment); Skilled, 3 (Paper fully meets the parameters of the assignment); Fairly Competent, 1 (Paper does not address the assignment); No Marks.

Design Element
• The writer chooses an exceedingly compelling aspect of the production to examine. He/She demonstrates keen critical insight about the significance of this aspect of the production. The argument clearly and compellingly articulates how this point of interest informs the play as a whole.
• The writer chooses an interesting aspect of the production to examine. He/She demonstrates critical insight about the significance of this aspect of the production. The argument clearly articulates how this point of interest informs the play as a whole.
• The writer chooses an important aspect of the production to examine. He/She demonstrates insight about the significance of this aspect of the production. The argument articulates how this point of interest informs the play as a whole.
• This writer's choice betrays a lack of interest in the production. He/She fails to provide insight about the significance of this aspect of the production.
Little effort is made to connect this aspect of the production back to the play as a whole. This writer chooses an insignificant aspect of the production to examine. His/Her ideas are unfocused, underdeveloped, and/or unconvincing. The argument is repetitive and provides no critical insight. This paper fails to fulfill the requirement.

Issues of Diversity
ITS63304 Object-Oriented Programming
Group Assignment (30%)
September 2024

Module Learning Outcome (MLO)
MLO 2: Demonstrate capability to interact positively within a peer group, consider other viewpoints, and foster stable and harmonious relationships in solving computational problems related to an object-oriented programming language.

Part 1: GROUP PROJECT

Project Theme
The 2030 Agenda for Sustainable Development, adopted by all United Nations members in 2015, created the 17 Sustainable Development Goals (SDGs). They were created with the aim of "peace and prosperity for people and the planet", while tackling climate change and working to preserve oceans and forests. The SDGs highlight the connections between the environmental, social and economic aspects of sustainable development. More information about the SDGs can be found here: https://sdgs.un.org/goals.

SDG 13: Sustainable Development Goal 13 (SDG 13 or Goal 13) is about climate action and is one of the 17 SDGs established by the United Nations in 2015. The official mission statement of this goal is to "Take urgent action to combat climate change and its impacts". There are five main targets of SDG 13 in total, all of which cover a wide range of issues surrounding climate action. For more information on SDG 13, go to the following link: https://sdgs.un.org/goals/goal13#targets_and_indicators
• Target 13.1: Strengthen resilience and adaptive capacity to climate-related hazards and natural disasters in all countries.
• Target 13.2: Integrate climate change measures into national policies, strategies and planning.
• Target 13.3: Improve education, awareness-raising and human and institutional capacity on climate change mitigation, adaptation, impact reduction and early warning.
• Target 13.a: Implement the commitment undertaken by developed-country parties to the United Nations Framework Convention on Climate Change to a goal of mobilizing jointly $100 billion annually by 2020 from all sources to address the needs of developing countries in the context of meaningful mitigation actions and transparency on implementation, and fully operationalize the Green Climate Fund through its capitalization as soon as possible.
• Target 13.b: Promote mechanisms for raising capacity for effective climate change-related planning and management in least developed countries and small island developing States, including focusing on women, youth and local and marginalized communities.

Project Details
Project Title: must be related to SDG 13, which is about Climate Action.
Project Description: Various entities, including governments, startups, and organizations worldwide, are actively engaged in developing applications related to achieving Sustainable Development Goal 13 (SDG 13). To explore examples and draw inspiration, you can visit the provided link. It showcases different applications aligned with various targets of SDG 13. https://www.valuer.ai/blog/identifying-new-business-models-and-technologies-within-sdg-13

In this group project, the group members will work on creating a computer program or application to support one of the targets outlined in Sustainable Development Goal 13 (as mentioned above). You have the freedom to choose whether to develop a console application or a more user-friendly graphical user interface (GUI). To ensure the success of your project, you are required to fulfil the following objectives:
• Develop at least FIVE (5) key features that contribute to achieving your chosen SDG 13 target.
• Design the program to cater to two types of users: Government (Admin) and Public (Normal User).
• Empower Government users with capabilities such as editing, deleting, and updating information, while restricting Public users to viewing and sharing information only.
• Use at least THREE (3) classes, each containing three data fields and methods, to create a scalable and maintainable program.
• Enable easy information retrieval through keyboard-based search functionality.
• Implement at least ONE (1) switch statement and TWO (2) conditional statements in your code.
• Incorporate at least ONE (1) for loop and ONE (1) do-while loop statement for iterative processes.
• Include an Array or ArrayList as necessary to fulfil program requirements.
• Utilize at least ONE (1) access modifier to differentiate between Public and Admin users for security and access control.

This group project offers ample room for creativity, so don't hesitate to think innovatively and ambitiously. The sky's not the limit, so get creative and think big! Good luck!

Project Deliverables
1. Program/Application in Java language
a) Project Folder (.zip)
b) Executable JAR file
2. Source code in MS Word or PDF file, with the above criteria highlighted in the document in yellow.
3. Documentation/Report (to be submitted in PDF format)
a) Cover page
b) Marking rubric
c) Role and responsibility of each group member
d) Application description and rationale, including key features
e) User Interface (UI): Describe how to use the application, along with screenshots.
f) Lessons learned
g) References (IEEE referencing style)

Timeline
Submission via MyTimes: Week 12 [December 13, 2024, 11:59 PM (midnight)]

PART 2: GROUP PRESENTATION

Description
Following the completion of your group project (Part 1), you are required to deliver a 10-minute presentation detailing your work and a 5-minute demonstration of your application. Furthermore, emphasize the features that align with SDG 13 and share significant insights gained from this assignment.

Deliverables
1.
Presentation Slides
a) 10-minute slide presentation
b) 5-minute demonstration of the developed program/application
c) It is mandatory for each group member to present

Timeline
Presentation: Week 13 [Practical class]

Marking Rubric - Part 1: Group Project (20%)
Each criterion is scored as Excellent (8-10), Good (6-7), Average (4-5), or Poor (0-3):
• Description and rationale: Excellent - a detailed description and outstanding support for the SDG 13 aim; Good - sufficient explanation and backing for the SDG 13 target; Average - average description and support for the SDG 13 target; Poor - poor description and support for the SDG 13 target.
• User Interface: Excellent - extremely attractive and user-friendly; Good - moderate in terms of both aesthetics and ease of use; Average - average visual appeal and user-friendliness; Poor - not appealing or user-friendly.
• Source code: Excellent - extremely rational, organized, and satisfying every criterion; Good - logically sound, well-structured, and generally satisfying; Average - average logical organization that meets the given criteria; Poor - not logical, poor organization, and meets few criteria.
• Report: Excellent - extensive and thorough coverage; Good - detailed and well-written; Average - little detail and average material; Poor - not detailed and not complete.
• Lessons learned: Excellent - the acquired knowledge is extensive and exhaustive in every respect; Good - the lesson is good and covers most of what you need to know; Average - the lesson learned is adequate and covers some ground; Poor - the lesson learned is poor and incomplete.
• Overall: Excellent - comprehensive and complete in all aspects; Good - good and comprehensive; Average - average and covers some ground; Poor - poor and incomplete.
• References: Excellent - 10 or more recent references; Good - 6-7 recent references; Average - 4-5 recent references; Poor - fewer than 4 recent references.
TOTAL: /70
NOTE: Total marks will be adjusted to a maximum of 20% allocated for this assignment.
CO2 Dissolution in Water
ENGF0003 Project 24-25

Guidelines:
• Type your project in Word or LaTeX. Follow UCL Accessibility Guidelines to format your document. Include a table of contents, page numbers, and use built-in styles (Heading 1, Heading 2) to structure your document.
• All figures and tables must be numbered and contain informative captions. All the main equations throughout your work must be numbered and typed appropriately.
• Submit a single PDF document. Do not write down your name, student number, or any information that might help identify you in any part of the project. Do not copy and paste the coursework brief into your submission – rewrite information where necessary for the sake of your argument. This project counts towards 30% of your final ENGF0003 grade.

Introduction
In your ENGF0003 coursework you have taken a data-driven approach to studying ocean acidification via summarising, describing, visualising and generalising data into mathematical models. In this project, you will work with two theoretical models of the dissolution of CO2 in water, implement them, and discover how theory complements real-world data in engineering mathematics.

1.1 Phase Equilibrium
During your ENGF0003 journey, you have learned about stationary points, which are those where the derivative of a function is zero and the function does not change with time. The mathematical model of CO2 equilibrium in water is similar, in that we assume that variables such as temperature and pressure do not change with time, or change so slowly that we can say that their time derivative is sufficiently close to zero.

To create a mathematical model of the solubility of carbon dioxide in seawater, we start from a law of physics known as Henry's Law. Henry's Law states that the amount of gas dissolved in a liquid at constant temperature increases as the pressure of the gas above the surface of the liquid is raised.
In this project, two main simplifications will be made to model surface ocean water, the real-world system we wish to model:
i. We will assume that surface seawater behaves like pure water. This assumption is made because there are well-documented empirical relations for CO2 dissolution in pure water.
ii. Although the atmosphere is composed of H2O vapour, N2, O2, Ar, CO2, Ne, He, CH4, Kr, H2, NO, Xe, O3, I2, CO and NH3, we will focus on modelling a system that is formed only of H2O and CO2.

1.1.1 Henry's Law
Figure 1 shows a closed container with a gas and a liquid phase, where the gas phase is represented in orange and the liquid phase is represented in blue. Suppose that this system represents a mixture of water and CO2, both at temperature T [K] and total pressure P [Pa].
• In the gas phase, there is a mixture of water vapour and CO2 gas.
• In the liquid phase, there is a mixture of liquid water and CO2 particles dissolved into the water.
Figure 1. Closed container where a mixture is in phase equilibrium.
Henry's law states that the following relations are valid in equilibrium (Carey, 1988; Carroll, Slupsky and Mather, 1991):
x1 Pv,1 = y1 φ1 P,  (1.1)
x2 H21 = y2 φ2 P.  (1.2)
In equations 1.1 and 1.2, xi is the molar fraction of the component with index i in the liquid phase, and yi is the molar fraction of the same component in the vapour phase. Water is the solvent of the system, represented by index i = 1, and carbon dioxide is the solute, represented by index i = 2.
• We will model the gas phase in terms of the molar fraction of CO2 in the gas, y2, and the molar fraction of water vapour, y1.
• Likewise, we model the molar fraction of water in the liquid phase as x1 and the molar fraction of CO2 dissolved in water as x2.
This results in a system of equations (Eqs. 2.1 and 2.2). Finally, Pv,1 is the vapour pressure of water, described empirically by Equation 3, and H21 is the temperature-dependent Henry's constant, described empirically by Equation 4.
The coefficients φ1 and φ2 are the fugacity ratios in the vapour and liquid phases, denoted φ1 = φl1/φv1 and φ2 = φl2/φv2. The fugacity ratios allow us to approximate non-ideal gases more accurately while using idealised equations. Appendix A summarises the approach of Peng and Robinson (1976) in obtaining these quantities.

1.1.2 Vapour Pressure of Water
The vapour pressure of water is represented as Pv,1. This quantity can be computed with the empirical approximation given by the International Association for the Properties of Water and Steam, IAPWS (Wagner and Pruß, 2002), where x = 1 − Tr, Tr = T/Tc is a non-dimensional temperature variable, and values for αi in Eq. 3 can be found in Table 1. In Eq. 3, Tc = 647.096 [K] is the critical temperature of water, and Pc = 22.064 [MPa] is the critical pressure of water.
Table 1. Coefficients αi for Equation 3.

1.1.3 Henry's Constant
H21 is the Henry constant for the system H2O + CO2. An empirical temperature-dependent expression for H21(T) is given by Carroll, Slupsky and Mather (1991), where the coefficients hi can be found in Table 2.
Table 2. Coefficients hi for Equation 4.

1.2 Reaction Kinetics
When considering engineering mathematical models, we need to pay close attention to time scales. This means that some parts of our problem might be better represented as stationary phenomena, as in 1.1, but other parts happen faster, and it is important to understand how they develop in time. We will now explore a time-dependent model of the solubility of CO2 in water. In particular, we will focus on mathematical models of the cascade of chemical reactions triggered when CO2 is dissolved in water. Carbon dioxide does not stay inert when it dissolves in water. It undergoes a chain of reversible chemical reactions producing carbonic acid (H2CO3), bicarbonate ion (HCO3⁻), carbonate ion (CO3²⁻) and hydrogen ion (H⁺).
This chain is expressed through Reactions 1–4. The kinetics of these reactions can be modelled by a system of nonlinear ordinary differential equations (Eqs. 5.1–5.6). In these equations, t is time in seconds and a square bracket around the name of a chemical species indicates its concentration. ki represents the rate constant for the ith reaction. Since these reactions are reversible, ki denotes a forward rate and k−i a reverse rate. The rate constants ki and k−i are given in inverse seconds [s⁻¹]. The set of equations 5.1–5.6 can be interpreted as follows:
• Eq. 5.1 models R1 and accounts for the time it takes for carbon dioxide gas, represented as CO2(g), to dissolve into water and become aqueous CO2(aq).
• Eq. 5.2 models both R1 and R2. It accounts for:
o Production of CO2(aq) by the dissolution of CO2(g) at rate k1,
o Loss of CO2(aq) by the reverse reaction back to CO2(g) at rate k−1,
o Loss/production of CO2(aq) by forward/reverse reactions to/from H2CO3.
A similar logic can be used for Eqs. 5.3–5.6. The concentrations in Eqs. 5.1–5.6 are given in molarity [M], which is equivalent to one mol of the chemical species per litre of water. The best case study for understanding the dynamics of CO2 dissolution in water is a system where all species are in equilibrium at t = 0. The forward and reverse rate constants for these reactions are given in Table 3.
Table 3. Forward and reverse rate constants.
If at t = 0 we inject gaseous carbon dioxide at concentration [CO2(g)]0 = 0.065 [M], this will trigger a cascade of reactions that will cause all other species to react and then reach equilibrium within 100 seconds.
The equilibrium conditions in [M] for all species, which should be used as initial conditions in solving this system of differential equations at 25˚C and 1 atm pressure, are:
[CO2(aq)]0 = 5.41 × 10⁻⁴
[H2CO3]0 = 1.64 × 10⁻⁶
[HCO3⁻]0 = 3.28 × 10⁻⁴
[CO3²⁻]0 = 1.97 × 10⁻⁸
[H⁺]0 = 1 × 10⁻⁶
Implementation tips:
• Use the self-paced course Solving Ordinary Differential Equations with MATLAB to learn how to write solutions to systems of non-linear ODEs in MATLAB.
• Since the scales of the rate parameters in Table 3 vary widely, from 10⁻² to 10¹⁰, the best MATLAB solvers for this system are stiff solvers such as ode15s or ode23s.
• You are also recommended to set an absolute tolerance 'AbsTol' of 10⁻¹² and a relative tolerance 'RelTol' of 10⁻⁶ for this problem, because the initial conditions are as low as 10⁻⁸ and might get smaller.

Task 1 [40 marks]
Your task is to study, describe, and operationalise the mathematical model in section 1.1 of this document. [3 PAGE MAXIMUM]
A. Mathematical Task: Express Eqs. 1.1, 1.2, 2.1, and 2.2 as a linear system in terms of matrix-vector multiplication and matrix inversion. Find the conditions under which the model represented by Eqs. 1.1, 1.2, 2.1, and 2.2 cannot be solved. Discuss the implications of this condition in a real-life system.
B. Communication Task: Create a schematic showing how all equations in Appendix A and pages 4 through 8 of this brief are connected. Your summary should communicate clearly how they can be solved in a logical order. Use this schematic to describe how you will structure MATLAB code to solve this problem.
C. Modelling Task: Propose an extended version of the model in Equations 1.1 through 2.2 that also includes gases other than CO2. Describe the specific quantities that you would need before implementing this model computationally.

Task 2 [30 marks]
Your task is to implement and validate the mathematical model in 1.1 in MATLAB. [2 PAGE MAXIMUM]
A.
Coding Task: Use MATLAB to calculate and plot H21, Pv,1, φ1 and φ2 for T ∈ [10, 80] ˚C and P = 101.325 kPa. Use the subplot function to produce a 2x2 grid of figures.
B. Validation Task: Use MATLAB to calculate the equilibrium concentration of CO2 in water, x2(T, P), for T ∈ [10, 80] ˚C and distinct pressures such that P ∈ {50, 101.325, 200} kPa. Contrast and compare your results with those of Table 3 in Carroll, Slupsky and Mather (1991).
C. Modelling Task: Attached to this brief is a comprehensive dataset of the Great Barrier Reef. Use data on pressure, temperature and air concentration of CO2 to model the equilibrium concentration of CO2 in water over time.

Task 3 [30 marks]
Your task is to study, describe, and operationalise the mathematical model in section 1.2 of this document. [2 PAGE MAXIMUM]
A. Coding Task: Use MATLAB to solve the system of ordinary differential equations shown in Eqs. 5.1–5.6 using the reaction rates given in Table 3 and the initial conditions outlined on page 11. Plot your time-dependent solutions for all chemical species in a single figure.
B. Analysis Task: Contrast and compare the speed at which each of the reactions on page 9 takes place. Use the constants in Table 3 and your numerical solution of the system 5.1–5.6 to support your argument. Estimate the time it takes for the full system to reach steady-state equilibrium.
C. Summary Task: Contrast and compare the Phase Equilibrium and Reaction Kinetics models in this project. Explain what phenomena they represent and how these are connected. Discuss their fundamental assumptions and how these assumptions shape and limit what the models can do.
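The stiff-solver advice in the implementation tips carries over to other environments. As a minimal sketch of the workflow (assuming SciPy in place of MATLAB, and a single hypothetical reversible reaction A ⇌ B rather than the full carbonate system of Eqs. 5.1–5.6), the BDF method with tight tolerances plays the role of ode15s with AbsTol/RelTol:

```python
# Minimal stiff-ODE workflow sketch in Python (SciPy's BDF method is roughly
# analogous to MATLAB's ode15s). The reaction is a single ILLUSTRATIVE reversible
# step A <=> B with widely separated rates -- NOT the carbonate system of
# Eqs. 5.1-5.6, whose actual rate constants are those of Table 3 in the brief.
from scipy.integrate import solve_ivp

kf, kr = 1.0e4, 1.0e-2  # illustrative forward/reverse rate constants [1/s]

def rhs(t, y):
    a, b = y
    r = kf * a - kr * b  # net forward reaction rate
    return [-r, r]

# Inject A at 0.065 M (mirroring the brief's CO2(g) injection) and integrate
# with the recommended tight tolerances.
sol = solve_ivp(rhs, (0.0, 1.0), [0.065, 0.0],
                method="BDF", rtol=1e-6, atol=1e-12)

a_end, b_end = sol.y[:, -1]
# Mass is conserved (a + b stays at 0.065), and b/a approaches kf/kr at equilibrium.
```

The same structure scales to the six-species system: stack all concentrations into y, return the six right-hand sides of Eqs. 5.1–5.6, and supply the Table 3 rate constants.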
Advanced Economic Theory I
ECON 629 Homework 7, Fall 2024
Due: Monday, December 2, 2024

1. Exercise 11.3 Consider a pure exchange economy with two consumers and two commodities. Suppose that free disposal is allowed in this economy, and the consumption set for each consumer is R²₊. Consumer 1 has a utility function given by:
u1(x1, y1) = 200 − (10 − x1)² − (10 − y1)².
Consumer 2 has lexicographic preferences, with x2 primary and y2 secondary. The two consumers have the same endowments, given by: wi = (10, 10), i = 1, 2.
(a) Draw an Edgeworth Box diagram for this economy.
(b) Find all Pareto optimal allocations. Give the reason if there are none.
(c) Find competitive equilibria in this economy, if any. If there are none, explain why.

2. Exercise 11.5 (Harmful Goods) Consider a pure exchange economy with two consumers. The utility functions for consumers 1 and 2 are given by:
ui(xi(1), xi(2)) = xi(1)(4 − xi(2)),
where good 2 is a "harmful product," and its price should be negative. The consumption set is [0, 5] × [0, 3] ⊆ R²₊. The endowments are w1 = (1, 3) and w2 = (3, 1), respectively.
(a) Prove that an allocation x is Pareto efficient if and only if x1(1) + x1(2) = 4.
(b) Solve for the competitive equilibrium.
(c) Draw the contract curve and the offer curves in the Edgeworth Box.

3. Exercise 11.9 Consider a pure exchange economy with n consumers and two commodities: the consumption set for each consumer is R²₊, and each consumer i's utility function is ui(xi) = max{xi(1), xi(2)}.
(a) If there are only two consumers 1 and 2, with endowments w1 = (1, 1) and w2 = (1, 1), respectively, what are the Walrasian equilibria?
(b) In the previous question, what is the set of Pareto efficient allocations? What is the set of weakly Pareto efficient allocations?
(c) Now, if there are three consumers, with endowments w1 = (1, 1), w2 = (1, 1), and w3 = (1, 1), respectively, what is the set of Pareto efficient allocations?
4.
Exercise 11.13 (The First Welfare Theorem with Convex Preferences) Consider a pure exchange economy. Suppose that ≽i is convex and that (x, p) is a Walrasian equilibrium. Show that the Walrasian equilibrium allocation x is Pareto optimal.

5. Exercise 11.21 (Indivisible Goods) Consider an economy with two commodities x and y and an arbitrary number of agents. Assume that preferences are strictly increasing in each good. Consider two cases:
(1) x is perfectly divisible, but y comes only in integer units.
(2) Both x and y come only in integer units.
Answer the following questions:
(a) Could we have a competitive equilibrium allocation that is not Pareto optimal? (Answer separately for case (1) and case (2).)
(b) If your answer is "yes," provide an example with a competitive equilibrium allocation that is not Pareto optimal. (Justify both of the following facts: (i) that the allocation in your example is a competitive equilibrium allocation; and (ii) that it is not Pareto optimal.)
(c) If your answer is "no," provide a proof to justify it.

6. Exercise 11.28 Consider a production economy that can produce food (product 1) and electricity (product 2) with labor (L) and capital (K). The production functions of the two products are:
y1 = √(L1 K1), y2 = min{L2, K2}.
The economy has a representative economic agent whose endowments are 1 unit of labor and 1 unit of capital. The utility function is u(x1, x2) = √(x1 x2).
(a) Find the competitive equilibrium.
(b) Find the Pareto efficient allocation of the economy.
ECON 134A CASE STUDY 2 - Upland Restaurant Case
Valuing Mutually Exclusive Capital Projects

Instructions
This case study will ask you to evaluate mutually exclusive investment opportunities by putting yourself in the shoes of two recent graduates turned restaurant owners and investors. After reading the background scenario carefully, you will have to answer a series of preparatory questions and write an executive summary. The questions are meant to guide you in preparing the executive summary. You are free (and encouraged) to work on additional analyses to make your executive summary stronger. You will have to upload a single PDF document containing 1) the one-page executive summary, 2) answers to the preparatory questions, and 3) all the tables and appendices backing up your analysis, including any additional work you think is helpful. Note that points will be awarded for presentation. This includes the ease with which one can understand your work, whether your final document is nicely formatted, whether tables and exhibits are clear and well-documented, etc. You may want to think of your report as something you would feel comfortable handing in to your boss if you were a financial analyst. You may work in groups of up to four students, in which case you only need to submit one report. Please make sure to include the names and student IDs of all students in the file. Working in groups is highly recommended.

Background
After graduating from UCI 2 years ago, you and three friends decided to start Upland Restaurant. After searching for several months for a location in Irvine, you decided to go a different route and buy 5 acres of land including an old restaurant and a small building, formerly used for offices, at the edge of town. After renovating the old restaurant, you were able to open and grow sales over the past 2 years. However, lacking the initial capital, you never did anything with the other smaller building.
Now that you have saved up some cash, you and your friends feel like you can generate some extra income from the existing building. To that end, you and your team have paid $20,000 to a consulting firm for a forecast of the future revenues and costs associated with the different options you are considering. The exhibits given below are the outcome of the consulting firm's research. Your first option is to enter a leasing agreement with a former Anteater who runs an event planning company called Diamond Events. After describing the location and space to her, she is interested in renting it out to host a variety of events. To make this possible, you will have to renovate the space first, which will take time and money. Additionally, if Diamond Events were to lease the space, you and your team expect there to be an increase in repairs, maintenance, and utilities, as well as a slight decrease in restaurant sales from an overall decrease in ambience from the additional event goers (loud partyers, congested parking lot, etc.). Diamond Events is willing to sign a 4-year lease with an annual rent of $84,000 in the first year, growing at 5% thereafter. The team's additional assumptions are given below in Exhibit 1, where the renovation cost is a one-time capital expenditure and the increase in repairs, maintenance, and utilities is an annual cost. Note: All operating income is taxable, and operating expenses reduce taxable income, while capital expenditures such as renovation costs and equipment costs are not tax deductible.

Exhibit 1: Leasing to Diamond Events Assumptions
Project life: 4 years
Renovation cost: 90,000 USD
Tax rate: 21%
Cost of capital: 13.00%
Rental growth rate: 5%
Increase in repairs, maintenance, and utilities: 15,000 USD
Decrease in restaurant sales: 8.0%

Below in Exhibit 2 are the original projections of net restaurant sales for the next 6 years, were you not to undertake any new project with the small building.
Exhibit 2: Baseline Projection of Restaurant Sales
Year:                 1         2         3         4         5         6
Net restaurant sales: $225,000  $240,000  $260,000  $285,000  $300,000  $310,000

The other option the team is considering is starting a small craft brewery in the space. While the renovations would be much less expensive, in order to start the brewery your team would need to buy and install the required equipment. Additionally, there would be other increases in costs to consider. A major benefit, however, is that the brewery would serve as a complement to your existing restaurant business. Your team feels that offering your own unique craft beers will lead to more food sales than would otherwise occur without them. You project that your craft beverage sales will start at $85,000 in year 1 and grow at 7.5% annually after that. Additional assumptions are found below in Exhibit 3, where the renovation and equipment costs are one-time capital expenditures and the increase in repairs, maintenance, and utilities is an annual cost. Note: The equipment is assumed not to depreciate over time.

Exhibit 3: Building Craft Brewery Assumptions
Project life: 6 years
Renovation cost: 25,000 USD
Equipment cost: 150,000 USD
Tax rate: 21%
Cost of capital: 13.00%
Sales growth rate: 7.5%
Brewing ingredient costs: 40% of sales
Other operating expenses: 12% of sales
Increase in repairs, maintenance, and utilities: 10,000 USD
Increase in restaurant sales: 15.0%

Lastly, your team needs to consider what you can do with the craft brewery after the project life is over. After brainstorming, you feel that there are 2 possible outcomes after the 6 years are up. The first outcome, outcome A, is that the project does not go as planned, in which case you will have no other option than simply ceasing operations. The second outcome, outcome B, is that the project goes well: you develop a good menu of craft beverages and a steady customer base. In this case, you believe that you will have two options after Year 6.
The first option, option B.1, is for an outside investor to purchase the craft brewery portion of your business. The second option, option B.2, is to simply continue operations, which you will value as a perpetuity. The necessary assumptions are given below in Exhibit 4.

Exhibit 4: Terminal Options
Outcome A - Cease Operations: Project ends after the 6th year. No future cash flows.
Outcome B, Option 1 - Sell to Investor: Sell craft brewery operations to an outside investor for an estimated $600,000 at the end of the 6th year. Capital gains tax rate is 15%.
Outcome B, Option 2 - Continue Operations: Continue operations indefinitely after the 6th year. Net operating profit after taxes is expected to grow 1.5% annually.

For outcome B, your team is unsure about what the best option is and is hoping you can help them determine which one would add the most value to the business. Additionally, from your time in business school, you are aware that valuation techniques are very sensitive to the assumptions that are made. While you and your team worked very hard on projecting sales, growth rates, etc., you understand that these are just expectations and that actual values can be higher or lower, impacting the attractiveness of the options. Therefore, it will be important to conduct sensitivity analyses on some of the key parameters.

Preparatory Questions
1) Lease option
a. What are the relevant costs and benefits of leasing the additional space to Diamond Events?
b. Are any costs or benefits irrelevant?
c. What is the NPV of leasing the additional space to Diamond Events?
d. What is the IRR?
e. Do the NPV and IRR decision-making rules agree?
f. Sensitivity analysis
i. Construct a cost of capital sensitivity table for all valuation types with costs of capital ranging from 11% to 15% in increments of 0.5%. That is, fill in the following chart:
LEASE OPTION COST OF CAPITAL SENSITIVITY
Cost of Capital: 11.0% | 11.5% | 12.0% | 12.5% | 13.0% | 13.5% | 14.0% | 14.5% | 15.0%
NPV:
ii.
Construct 3x3 NPV and IRR sensitivity analyses reflecting the following information:
LEASE OPTION NPV SENSITIVITY
Increase in repairs, maintenance, and utilities: 7,500 | 15,000 | 22,500
Decrease in restaurant sales: 4% | 8% | 12%
2) Build option - Outcome A: Cease operations
a. What are the relevant costs and benefits of starting the brewery?
b. Are any costs or benefits irrelevant?
c. What is the NPV of starting the brewery?
d. What is the IRR?
e. Do the NPV and IRR decision-making rules agree?
f. Sensitivity analysis
i. Construct a cost of capital sensitivity table for all valuation types with costs of capital ranging from 11% to 15% in increments of 0.5%. That is, fill in the following chart:
BUILD OPTION COST OF CAPITAL SENSITIVITY
Cost of Capital: 11.0% | 11.5% | 12.0% | 12.5% | 13.0% | 13.5% | 14.0% | 14.5% | 15.0%
NPV:
ii. Construct 3x3 NPV and IRR sensitivity analyses reflecting the following information:
BUILD OPTION (A) NPV SENSITIVITY
Brewing ingredient costs: 30% | 40% | 50%
Increase in restaurant sales: 0% | 15% | 30%
3) Assuming the worst outcome for the craft brewery project (outcome A), which option should Upland choose: do nothing, lease to Diamond Events, or open the craft brewery? Why?
4) Assuming a good outcome for the craft brewery (outcome B), which of the two options (B.1 or B.2) offers the most value?

Executive Summary
Prepare a short (less than 1 page) executive summary that lays out the major assumptions you used and the decision you have arrived at: should you go ahead with the leasing option, the craft brewery option, or leave the small building idle?
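The NPV and IRR mechanics behind the preparatory questions can be sketched in a few lines. The cash flows below are hypothetical placeholders, not the case's Diamond Events or brewery figures (those must be built from the exhibits); the sketch only illustrates the computations, including a growing-perpetuity terminal value of the kind Outcome B.2 calls for:

```python
# Hedged sketch: all cash-flow numbers here are made up, NOT the case's projections.

def npv(rate, cashflows):
    """cashflows[0] is the time-0 outlay (negative); later entries are year-end flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisection for the discount rate where NPV crosses zero (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical 4-year project: a 90,000 outlay followed by after-tax inflows.
cfs = [-90_000, 30_000, 31_500, 33_075, 34_729]

npv_13 = npv(0.13, cfs)   # NPV at a 13% cost of capital
project_irr = irr(cfs)    # rate at which NPV = 0

# Cost-of-capital sensitivity row (11% to 15% in 0.5% steps), as in question 1.f.i:
sensitivity = {round(0.11 + 0.005 * i, 3): round(npv(0.11 + 0.005 * i, cfs))
               for i in range(9)}

# Growing-perpetuity terminal value, Outcome B.2 style (hypothetical year-6 NOPAT):
terminal_value = 50_000 * (1 + 0.015) / (0.13 - 0.015)
```

NPV falls as the discount rate rises for this cash-flow pattern, which is why the sensitivity row is informative, and the NPV and IRR rules agree whenever there is a single sign change in the cash flows.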
Business Research Methods RESE1170 Module Handbook 2024

1. Welcome message from your Module Leader
Welcome to Business Research Methods (RESE-1170) at the University of Greenwich Faculty of Business. This module will provide you with a foundation in the philosophy and practice of business & management research methods. The skills you learn in this module will prepare you for your Dissertation (BUSI-0011) or Consultancy project next year, and beyond into employment, where business and management decisions should be based on rigorous evidence. This module will achieve this aim by first seeking to develop your understanding of the philosophy of research methods. This foundation will be used to inform your understanding of the research-production process: formulating researchable questions, choosing appropriate data collection methods, sample selection, conducting textual and statistical analysis, and presenting the results professionally. This is a rigorous, conceptually-informed module which is designed to provide you with knowledge of the theory and practice of business research methods. This knowledge will be developed through three learning activities. (1) Core reading. Each week you will be expected to complete a small amount of reading prior to participation in the workshops. This is often a single journal paper or a book chapter. However, additional readings will be available each week should you wish to further expand your knowledge in specific aspects of research methods theory and practice. (2) Lectures. Each week there will be a lecture on a specific topic related to business research methods. The lectures will draw on course readings, and illustrate academic concepts with real-life examples - often drawing on the research of academics within the department. (3) Workshops. This is a weekly 2-hour session which incorporates interactive activities, class discussion, and debate. These sessions require preparation prior to the workshop.
Please do ensure that you do this to get the maximum out of these sessions. The module has two assessments. These are: (1) Qualitative data analysis report (1,500 words, 50% of the module). (2) Quantitative data analysis report (1,500 words, 50% of the module). This handbook provides essential information about this module including the aims and learning outcomes, the schedule of teaching and learning activities, assessment tasks, resource recommendations and, if applicable, any additional resources that you will need. Please read it at the start of term so you are aware of key details and important dates.

3. Enquiry-Based Learning and Research-Led Teaching
Enquiry-Based Learning (EBL)
Defined as 'an approach based on self-directed enquiry or investigation in which the student is actively engaged in the process of enquiry facilitated by a teacher. EBL uses real life scenarios (for example, from case studies, company visits, and project work) and students investigate topics of relevance that foster the skills of experimental design, data collection, critical analysis and problem-solving'. This module requires students to use the skills developed in this module to design research questions, and to collect and analyse qualitative and quantitative data to make evidence-based recommendations.
Research-Led Teaching (RLT)
An element of Enquiry-Based Learning links to RLT, which involves faculty introducing students to their own research where it is relevant to the curriculum being taught, as well as drawing on their own knowledge of research developments in the field and introducing them to the work of other researchers. RLT sees students as active participants in the research process, not just as an audience. This is achieved by discussing such developments in lectures and classes, and setting reading lists including recent research publications at the frontier of the field.
The definition of a diverse assessment regime at the programme level (incorporating an expectation of familiarity with, and use of, such publications in assignments) and the inclusion of projects at every level of the programme is also fundamental to achieving these objectives. This module is driven by research design and analysis, and the research experience and skills of the teaching team will be utilised to demonstrate and illustrate best research practice to students.

4. Module details and learning outcomes
Host faculty: Business
Host school: BOS
Number of credits: 15
Term(s) of delivery: Term 2
Site(s) of delivery: Greenwich
Aims: This module aims to provide students with a solid understanding of the philosophy and practice of business and management research methods. The skills students learn in this course will prepare them for their dissertation, and beyond into employment. This module will achieve this aim by first seeking to develop students' appreciation of the philosophy of research methods. This foundation will be used to inform students' understanding of the research-production process: formulating researchable questions, choosing appropriate data collection methods, sample selection, conducting statistical and textual analysis.
Learning Outcomes*: On successful completion of this module, students will be able to: 1. Distinguish between the two principal research methodological paradigms (i.e.
qualitative and quantitative research methods), and understand their underpinning philosophical assumptions, strengths, and weaknesses. 2. Describe the strengths and limitations of qualitative and quantitative research methods. 3. Design research instruments (i.e. survey and interview schedule). 4. Conduct basic quantitative and qualitative analyses. 5. Generate evidence-based conclusions, decisions and recommendations. * A learning outcome is a subject-specific statement that defines the learning to be achieved through completing this module.
Glossary:
• A pre-requisite module is one that must have been completed successfully before taking this module.
• A co-requisite module is one that must be taken alongside this module.
• A learning outcome is a subject-specific statement that defines the learning to be achieved through completing this module.

5. Employability
Upon successful completion of this module, students will gain several employable skills. The clearest of these is cognitive skills: this module will expose you to different sorts of problems and require you to make reasoned and well-justified judgements on how to approach them, pay careful attention to detail, and draw evidence-based conclusions. Along with cognitive abilities, this module develops students' technical ability in research design within business and management. This includes research design which may be used by both companies and consultancy firms to examine business challenges, design research to collect and analyse data, and use this analysis to create evidence-based recommendations. You can find out more about the Greenwich Employability Passport at: Greenwich Employability Passport for students. Information about the Career Centre is available at: Employability and Careers | University of Greenwich. You can also use LinkedIn Learning to gain access to thousands of expert-led courses to support your ongoing personal development.
More information can be found at: LinkedIn learning | IT and library services

8. Assessments
Assessment 1: Qualitative data analysis
This assignment is worth 50% of the overall module grade, and has a limit of 1,500 words. Your task is to write a short research design and data analysis for a qualitative project. This assessment mirrors the type of information that is presented in the 'methods' section of academic journal papers, government reports, management & business consultancy reports, and other documents which are based on research. The qualitative research that you will write about is on the topic: "students' experiences of negotiating term-time work and study". You will write the research design (mirroring a methodology chapter), and present findings and conclusions in answering a research question of your choice. The assessment mirrors the learning outcomes of this module. In this assessment, you will: 1. Describe the qualitative methodological paradigm, its underpinning philosophical assumptions, and its strengths and limitations. 2. Describe the methods of data collection and analysis. 3. Design research instruments (i.e. interview schedule). 4. Conduct basic qualitative analyses. 5. Generate evidence-based conclusions, decisions and recommendations. In order to achieve this, consider following the template provided in Moodle. At the start of the module this may seem like a lot of work, but it is really achievable so long as you come to the tutorials. Each week we will have a 2-hour workshop together, and in this workshop we will be going through each step of the research process together. For example, in one week we will use the workshop to design a series of interview questions (called a 'schedule'). In another week you will use this schedule to interview a classmate. In another week we will analyse that interview, and so on.
In other words, so long as you come to the workshop, you will be working towards your assessment with your tutor, in class time. Your work should be supported by at least 5 academic references, which can include the core textbook.
Formative task 1: Qualitative data analysis
Between weeks 4-6 you will design an interview schedule, use that schedule to interview a classmate, and transcribe and code the interview. It is important that you complete these tasks (see Moodle for each of these tasks in their respective weeks). In week 6 you will be invited to a feedback workshop with your tutor. You should come prepared with your interview material (the schedule and a coded transcript) as well as a one-page outline of the key points of your assessment.
Marking criteria (% of assessment):
- Methodological rigour (35%): The description of the methodology, method of data collection, and method of data analysis is clearly outlined and justified. Strengths and weaknesses of the method are considered.
- Data analysis (35%): The submission shows evidence of a thematic / statistical analysis of the dataset. Clear and professionally presented evidence is given to support key findings.
- Academic literature (15%): The work draws on references to the academic literature to support claims, and to outline the strengths and limits of the methods. The submission should include references to at least 5 academic articles or methods-based texts. The work is thoroughly and properly referenced, using the Harvard Style.
- Academic expression (15%): The submission is well structured and presents the work in a logical and coherent manner. The standard of English expression should be strong, with correct grammar, spelling, and punctuation.
Assessment 2: Quantitative Data Analysis
This assignment is worth 50% of the overall module grade, and has a limit of 1,500 words. Your task is to write a short research design and data analysis for a quantitative project.
This assessment mirrors the type of information presented in the 'methods' section of academic journal papers, government reports, management and business consultancy reports, and other documents based on research. It also mirrors the qualitative assessment, except this time there are statistics! There is a template on Moodle to help you structure this work.

The quantitative research that you will write about is on the topic of working from home at a fictional company, Lomond Insurance. Details of the case, and the Excel dataset, are provided on Moodle under the "assessment" tab.

The assessment is designed to meet the learning outcomes. Your assessment should include the following four components:
1. Describe the quantitative methodological paradigm, its underpinning philosophical assumptions, strengths, and limitations.
2. Describe the research sample.
3. Conduct a statistical analysis.
4. Generate evidence-based conclusions and recommendations.

The quantitative analysis will comprise the following statistical tests: descriptive statistics, Student's t-test, and Pearson's correlation coefficient.

To meet these assessment components, you will be guided, week by week in the workshops, through each of the analysis stages.
• Workshops 1 & 2 outline the philosophy of research and the principles of quantitative research. By participating in these workshops you will be able to complete component 1 of the assessment.
• Workshop 3 outlines research design, focusing on research samples. By participating in this workshop, you will be able to complete component 2 of the assessment.
• Workshops 8-11 are held in the IT labs. These workshops will guide you, week by week, through the statistical analysis that you will need to complete the assessment. Within the workshops you will practise the principles of the statistical tests on a smaller dataset.
You will then have the opportunity to work on the assessment, in class, under the supervision of your tutor. This will help you complete component 3 of the assessment.
• Workshop 12 outlines how to present statistical evidence and draw evidence-based conclusions. This workshop will allow you to complete component 4.

Some helpful pointers:
• This assessment is a mini research project. It is going to be difficult to complete in a single day, so do not leave it until the last day before starting work.
• After each workshop you will be given clear instructions on what you can do to boost your grade. You may be given specific tasks to complete or chapters to read. Please ensure that you do this to maximise your grade.
• You should include at least 2 academic references to support your assessment submission, which can include the core textbook. Do not reference random websites; use quality references.
• There is a template on Moodle which provides an outline of how your assessment should be structured. Please consider using this template to help structure your writing so that you include all four components required for this submission.

Formative assessment

Between weeks 8-11 the workshops will be held in an IT lab where you will have the opportunity to work on your assessment under the supervision of your tutor. Please ensure that you come to the workshops to make sure that you are making progress on your assessment. Take on board any guidance and advice provided by your tutor. In week 12 you will be invited to a feedback workshop with your tutor.

Marking criteria (% of assessment)
• Methodological rigour (35%): The description of the methodology, method of data collection, and method of data analysis are clearly outlined and justified. Strengths and weaknesses of the method are considered.
• Data analysis (35%): The submission shows evidence of a thematic / statistical analysis of the dataset. Clear and professionally presented evidence is given to support key findings.
• Academic literature (15%): The work draws on references to the academic literature to support claims and to outline the strengths and limits of the methods. The submission should include references to at least 5 academic articles or methods-based texts. The work is thoroughly and properly referenced, using the Harvard style.
• Academic expression (15%): The submission is well structured and presents the work in a logical and coherent manner. The standard of English expression should be strong, with correct grammar, spelling, and punctuation.