Department of Electrical and Electronic Engineering
EIE2105 Digital and Computer Systems
Tutorial 5: Sequential Logic II

Q1. Derive the input equations, the output equation, the state table, and the state diagram of the following circuit.

Q2. You are requested to design a sequential logic circuit with D flip-flops for a system whose state diagram is given below.
(a) Determine the following parameters of the system to be realized: the number of states, the number of system inputs, and the number of system outputs.
(b) Derive the corresponding state table for the system.
(c) How many flip-flops are required to construct this system?
(d) Derive the input equations and the output equations of the system.
(e) Design the circuit and draw a circuit diagram to show your design. You may only use D flip-flops, AND, OR, and NOT gates in your design.
(f) How many unused states are there in the system?
(g) Theoretically, the system output values and the next-state values for unused states are don't-cares. Is it really true that we can assign any binary value (0 or 1) to the output values and the next-state values of these unused states? Why?
(h) State 10 is an unused state of the system. Suppose I assign its next-state values as follows. Is it acceptable?

next state: DA = 1, DB = 0
(Value assignment for an unused state)
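As a supplementary illustration (not part of the tutorial questions, and not the answer to Q1): for D flip-flops the next-state value of each flip-flop equals its D input, so a state table can be generated mechanically by evaluating the input and output equations for every combination of present state and input. The equations below are hypothetical placeholders, not those of the circuit in Q1.

```python
# Illustrative simulation of a 2-flip-flop sequential circuit. The input and
# output equations here are HYPOTHETICAL examples, not the answer to Q1.
# For D flip-flops, next state = D input.
def step(A, B, x):
    DA = (A and x) or (B and x)   # hypothetical input equation for flip-flop A
    DB = (not A) and x            # hypothetical input equation for flip-flop B
    y = A and (not x)             # hypothetical (Mealy) output equation
    return int(DA), int(DB), int(y)

# Build the state table: one row per (present state, input) combination.
for A in (0, 1):
    for B in (0, 1):
        for x in (0, 1):
            nA, nB, y = step(A, B, x)
            print(f"present={A}{B} x={x} -> next={nA}{nB} y={y}")
```

Enumerating all (present state, input) pairs this way is exactly how the state table in Q1 and Q2(b) is organized.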
Assignment 1

Q1. Write a MATLAB function called match that implements the histogram matching algorithm for 8-bit images that was described in class. The function header should have the form:

function im2 = match(im,h)

where im is a uint8 intensity image, h is the 256-bin histogram to be matched, and im2 is the output image. You can use any built-in MATLAB functions in your code except for histeq. Histogram matching can be used to perform histogram equalization by appropriate specification of the desired histogram. Equalize the uint8 intensity image pout.tif using the function match and include a printout of the output in your report. Also include a plot of the vector h that you used as input.
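For orientation only, here is a minimal sketch of the classical histogram matching algorithm in Python (the assignment itself requires a MATLAB implementation): build the input image's CDF and the target histogram's CDF, then map each gray level to the level whose target CDF value is closest.

```python
import numpy as np

def match_sketch(im, h):
    """Minimal sketch of histogram matching for 8-bit images (Python, for
    illustration only; the assignment asks for MATLAB)."""
    im = np.asarray(im, dtype=np.uint8)
    # Empirical CDF of the input image over the 256 gray levels.
    counts = np.bincount(im.ravel(), minlength=256)
    cdf_im = np.cumsum(counts) / counts.sum()
    # CDF of the desired 256-bin histogram h.
    cdf_h = np.cumsum(np.asarray(h, dtype=float))
    cdf_h = cdf_h / cdf_h[-1]
    # For each input level r, choose the output level g whose target CDF value
    # is closest to the input CDF value (a 256-entry lookup table).
    lut = np.argmin(np.abs(cdf_im[:, None] - cdf_h[None, :]), axis=1)
    return lut[im].astype(np.uint8)

# Equalization is the special case of matching a flat (uniform) histogram:
# im2 = match_sketch(im, np.ones(256))
```

As the last comment notes, passing a uniform h is exactly how matching performs equalization, which is what the assignment asks you to do with pout.tif.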
EBUS612 (Postgraduates) E-Business Enterprise System with SAP 2024-2025 (Semester 2)
Practical Assignment with SAP (50% of module mark)
Submission deadline: 12:00pm 15/05/2025
IMPORTANT NOTICE: This is an individual assignment and your final report needs to be submitted via Turnitin. In your submission, YOU NEED TO STATE YOUR SAP SYSTEM ID FIRST!

1. Results from your three case studies (50%)
a. Imagine you are managing a new material called ABC-brake set. It can only be produced in house. This item will also be further used for direct sales. Please identify which views need to be selected when you create its material master data in SAP, specify which organizational level(s) should be associated with each view, and explain your reasons respectively. (2 pages) (10%)
b. Drawing on your PP case study, critically reflect on how SAP drives material requirements and production automation from demand management through various integration points. Clearly articulate the key benefits SAP offers, supporting your analysis with evidence from your case study. Additionally, connect your discussion to broader, credible resources to explore how these functions can support real-world businesses across different scenarios. (5 pages) (15%)
c. Discuss the key differences between CO (Controlling) and FI (Financial Accounting) in SAP. How are these two modules integrated within the system? Use appropriate evidence from your system to support your analysis. Additionally, if you intend to group multiple company codes under the same controlling area, what conditions must be met? (1.5 pages) (5%)
d. After completing your SD case study, refer to the discussions from the MM case study to identify all steps that impact the company's general ledger. Support your analysis with relevant accounting documents and critically examine which accounts should be credited and debited, providing clear justifications for your reasoning. (4 pages) (20%)

2.
Extension after your three case studies (50%)

You have already allocated the cafeteria costs to the cost centers. Now, you need to assess the R&D expenses to the Assembly and Maintenance cost centers. The R&D costs are incurred to support the total output of these cost centers. To proceed, collect the accrued R&D costs of 120,000 USD in a suitable new cost center (e.g., Res1###) within the N4000 hierarchy area, using cost element 6340000. Then, allocate these costs to the receiving cost centers by creating appropriate statistical key figures. Additionally, apart from the existing activity type (A###) output from the Assembly cost center, an additional activity type (B###) is introduced, contributing a total of 1,000 hours. The corresponding allocation cost element is 8000000, and the increased output of activity B### results in an additional 30,000 USD in salary expenses for employees in the Assembly cost center.

a. Determine the new total activity price (unit price) for all activities (M###, A###, and B###). Provide a screenshot of the costing report for your Assembly cost center and briefly discuss the line items originating from the new cost assessment. (2 pages) (10%)
b. Manually calculate the overall cost assessment process with appropriate explanations and flowcharts. (3 pages) (15%)
c. Suppose your Maintenance cost center (MAIN###) now generates an additional 700 hours of repair output (R###; the associated allocation cost element is 803###4), where 50 hours are used by Res1### and 100 hours by ASSY###. Provide an updated costing report for the Assembly cost center and discuss the key changes. (2 pages) (10%)

After completing the Product Costing case study, you decide to enhance product quality by adding two additional Chains and Brake kits to your final product DXTR4###. Additionally, some production activities have been relocated to a new work center (WK1###), specifically for the first and fifth production operations.
The first operation includes 5 minutes of Setup and 15 minutes of Labor. The fifth operation includes an additional 3 minutes of Setup and 5 minutes of Labor.

d. Demonstrate a screenshot of your new product cost estimate with the itemized view and develop a discussion of this new report (e.g., how is the report structured? What are the main changes, and what are the root causes of those changes?). Use information (hint: activity price in CO) from your system to support your analysis. (4 pages) (15%)

To create the new work center, the following tasks are needed:
1. Create a Work Center:
o Use the SAP search bar to find "Create Work Center."
o Enter the following details: Plant: DL00; Work Center: WK1### (replace ### with your SAP ID).
o In the Copy From section, enter: Plant: DL00; Work Center: ASSY1000.
o On the next screen, select the following data to be copied: Basic Data, Texts, Classification, Subsystem Grouping, Default Values, Capacities, Scheduling, and Costing. Click Copy at the bottom right.
o In the Technology tab, set: Machine Type: 0003; Sort String: 0003; CAPP Planner Group: 000.
o Click the Costing tab, ignore the pop-up message, and press ENTER.
o Change the cost center to your Assembly cost center (ASSY1###) and assign activity type B### to Setup and Labor in the Activities Overview. Click Save to complete the creation of the work center.
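For the activity-price question in the extension (part a), the core arithmetic is the standard CO formula: unit price = cost allocated to the activity type divided by the planned activity quantity. A minimal sketch using only the figures stated in the assignment text (your actual prices will differ, since the real report also includes overhead shares assessed from your own SAP system):

```python
# Generic CO activity-price arithmetic: unit price = allocated cost / activity quantity.
# Figures are from the assignment text; real activity prices also include assessed
# overheads (cafeteria, R&D, etc.) from your SAP system, so this is illustrative only.
def activity_price(allocated_cost_usd, activity_hours):
    return allocated_cost_usd / activity_hours

# Activity B###: 30,000 USD of additional salary spread over 1,000 hours.
print(activity_price(30_000, 1_000))  # 30.0 USD per hour before overhead assessment
```

The manual calculation in part b follows the same pattern for each activity type, after first distributing the assessed cost pools to the cost centers.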
Department of Economics
ECN 410: Economics of Migration, Spring 2025

Course overview
Migration is and has been an important phenomenon in the U.S. and throughout the world. This course will provide students with an overview of the different types of migration, important past and current episodes of migration, and policies dealing with migration. The course will also apply the tools of economics to understand several aspects of migration, including its determinants, its effects on source and destination regions, and the intended and unintended consequences of migration policy. We will begin by discussing international migration. We will discuss why people migrate to other countries, where they come from, where they go, and why they go there. We will also study how migration impacts the places migrants leave and the places where they arrive. Understanding the research documenting these consequences will help us as we later evaluate past and current immigration policies in the U.S. We have 14 weeks to cover a great deal of material. It is important that you stay on top of the readings, problem sets, and quizzes. I also encourage students to take advantage of my office hours to review material.

Course objectives
● Explain basic facts about important past and current episodes of migration
● Understand methods and results of research on different aspects of migration
● Evaluate policies dealing with migration
● Develop analytical writing skills through the format of policy briefs

Background and prerequisites
This course is primarily aimed at students who have completed the intro (ECN 101 and 102) and intermediate micro and macro sequences (ECN 301 and ECN 302). This background will be helpful when we discuss models of migrant selection and the labor market impacts of migration. Familiarity with regression analysis will also be useful as we delve into the empirical strategies researchers employ to identify causal effects.
I want to stress that students will not be expected to fully understand the methodological details. Instead, we will focus on understanding the basic intuition behind these strategies and how they may (or may not) provide reliable estimates of the causal effects of some policy, phenomenon, or intervention.

Readings
The required textbook for this course is: Cynthia Bansak, Nicole Simpson, and Madeline Zavodny (2021). The Economics of Immigration, 2nd Edition. This textbook is available online through Syracuse University Libraries. There are also required articles (see tentative dates for those readings in the course outline below). All these articles have links or can be accessed through the Syracuse University Libraries website. Generally, you can access academic articles when connected to SU's Wi-Fi network. Please let me know if you have any issues accessing these articles. The schedule for the readings can be found in the course calendar below. It is strongly recommended that you do the readings each week before class to have some familiarity with the concepts to be discussed. Quizzes can cover topics covered in lecture and in assigned readings.

Evaluation
● Problem sets (10%)
○ Four problem sets will be assigned and will include a mixture of quantitative problems, short-answer questions, and reading responses. The due dates for these four problem sets are available in the "ECN 410 Important Dates and Deadlines" document on Blackboard.
○ The problem sets will be posted on Blackboard in the "Problem Sets" folder. You will upload completed problem sets to our course Blackboard.
○ Homework is graded based on completeness. I encourage you to work with other students on your problem sets, but all explanations must be in your own words and fully show how you reached your answer. If you turn in a problem set that is too similar to another student's work, you will both receive zeros.
○ Grades are assigned on a scale of 0 to 10.
Homework submitted after the deadline will be considered late. Late submissions will incur a deduction of two points for every 24-hour period following the deadline, with the minimum achievable score set at 0.
● Quizzes (15%)
○ We will have 7 quizzes. The tentative dates for these quizzes are available in the "ECN 410 Important Dates and Deadlines" document on Blackboard.
○ Quizzes will take place during the first 15 minutes of class and will cover material from the previous lectures. These quizzes are not meant to be difficult or to cause you any significant stress; instead, they should help you keep up with recent material.
○ Each quiz will be worth 10 points. I will drop each student's lowest quiz score at the end of the semester.
● Migration Policy Project (30%)
○ At the end of the second week of class, you and a small group of students will choose a bilateral migration channel (I can also assign you a channel). With this group, you will be responsible for writing one short policy brief and one longer, more polished policy brief about your migration channel.
○ The first policy brief will address who migrates, why they migrate, and what they do in the destination country. It will also examine the effects of migration on the migrant-origin country and/or the destination country. The first policy brief is due Wednesday, March 26th.
○ Each group will record a 15-minute presentation via Zoom that educates the class about your assigned migration channel, drawing on the findings from your first policy brief. Additionally, you will "pitch" a policy recommendation for the migrant-origin country, the migrant-destination country, or both, addressing the key issues you have identified. You are to upload your group's recording to Blackboard by April 9th. We will provide further instructions on Blackboard. After the presentation, you will receive feedback from your peers, which can be incorporated into your final policy brief.
○ Your final brief will polish the analysis from your first brief and will also incorporate your policy recommendation. I will provide further details of the assignment and a grading rubric on Blackboard. The final brief is due Monday, April 21st.
● Midterm (20%)
○ The midterm is currently scheduled for Thursday, March 6th during class.
● Final (20%)
○ The final is currently scheduled for Friday, May 2nd from 8:00 to 10:00am. The final exam date and time are set by the registrar.
● Participation (5%)
○ Students who ask questions and participate actively in the in-class activities throughout the term will receive higher grades in this category.
○ Attendance will be taken throughout the semester and will be incorporated into your participation score. That being said, I do not want anyone to come to class if they are not feeling well. Please reach out to the Barnes Center and Student Outreach and Support to have your absence excused.
● Letter breakdowns: A=94-100, A-=90-93, B+=87-89, B=84-86, B-=80-83, C+=77-79, C=74-76, C-=70-73, D=60-69, F=
BCSC/CSC/DSCC 229, DSCC 449: Computer Models of Human Perception and Cognition
Homework Assignment #2

Instructions: Answer all questions below. Include all requested calculations and graphs. Also include the Python code that you wrote to answer the questions. When writing text or equations, please write NEATLY!

(0) (Part A) At the top of the document you turn in, place your name and the date. (Part B) Next, please take the honor pledge. That is, write (by hand using a pen): "I affirm that I have not given or received any unauthorized help on this assignment, and that this work is my own." Then sign your name.

(1) [WARNING: This problem is mathematically challenging. Don't be surprised if you struggle with it. Indeed, it may be smart to first work on other homework problems, and then return to this problem if time permits.] (Adapted from Problem 3.4 from an early draft of the textbook by Ma, Körding, and Goldreich) Many Bayesian inference problems involve a product of two or more Gaussians (also known as normal distributions). A convenient property of Gaussians is that their product is also Gaussian. In this problem, we will lead you through an example to derive this property yourself. Consider an observer who infers a stimulus s from a measurement x (this is the scenario we considered in the lecture titled "Building a Bayesian Model"). Suppose the stimulus distribution p(s) (this is the prior distribution) is a Gaussian with mean µ and standard deviation σs, and suppose the measurement distribution p(x|s) (this is the likelihood function) is a Gaussian distribution with mean s and standard deviation σ.
(a) Write down the equations for p(s) and p(x|s).
(b) Use Bayes' rule to write down the equation for the posterior distribution p(s|x). Substitute p(x|s) and p(s), but do not simplify. The numerator is a product of two Gaussians. The denominator p(x) is a normalization factor that ensures that the integral of the posterior distribution equals 1.
For now, we will ignore the denominator and focus on the numerator.
(c) Apply the rule e^A e^B = e^(A+B) to simplify the numerator.
(d) Expand the two quadratic terms in the exponent.
(e) Rewrite the exponent in the form as^2 + bs + c, where a, b, and c are constants.
(f) Rewrite the expression you obtained in (e) in a simpler form, e^(c1(s+c2)^2 + c3), with c1, c2, and c3 constants. [Hint: any quadratic function of the form as^2 + bs + c can be written as a(s + b/(2a))^2 + (c - b^2/(4a)). This re-writing is known as "completing the square".]
(g) Now rewrite your expression in the form of a Gaussian in s, e^(-(s - µcombined)^2 / (2σcombined^2)) (up to a constant factor). Express µcombined and σcombined in terms of x, σ, µ, and σs. [Hint: Recall we're considering the "Normal-Normal" model. In this model, the posterior distribution is a normal distribution with mean µcombined and variance σcombined^2. Recall the formulas for µcombined and σcombined^2.]
(h) Recall that p(s|x) is a distribution and that its integral should therefore be equal to 1. However, the expression that you obtained in (g) is not properly normalized because we ignored p(x). Modify the expression such that it is properly normalized, but without explicitly calculating p(x). [Hint: Note that e^(c3) is a constant that does not depend on s.]

(2) (Adapted from Problem 3.2 from an early draft of the textbook by Ma, Körding, and Goldreich) In this problem, we numerically calculate a posterior distribution. Suppose the stimulus distribution p(s) is Gaussian with mean 20 and standard deviation 4. The measurement distribution p(x|s) is Gaussian with standard deviation σ = 5. A Bayesian observer infers s from an observed measurement x = 30. We are now going to calculate the posterior probability density using numerical methods (coded in Python).

Figure 1: Example of a graph for Question 2 showing a likelihood function, a prior distribution (values sum to 1/stepsize), and a posterior distribution (normalized and then scaled so its values sum to 1/stepsize).

(a) Define a vector of hypothesized stimulus values s: (0, 0.2, 0.4, . . .
, 40). (b) Compute the likelihood function p(x|s) and the prior p(s) on this vector of s values. [Hint: Without extra work, the values of the prior distribution will not sum to one (instead, they should sum to 1/stepsize where stepsize = 0.2). That is because we are approximating a continuous distribution by a discrete distribution. In addition, the values of the likelihood function will not sum to one. (But keep in mind that the likelihood function is not a distribution, and thus its values do not need to sum to one.)] (c) Multiply the likelihood and the prior elementwise. In Python, elementwise multiplication of two vectors can be achieved using the “*” command. (d) Divide this product by its sum over all s (normalization step). This step yields the posterior distribution p(s|x). After completing this step, the posterior probabilities will sum to one. (e) Convert this posterior probability mass function into a probability density function by dividing by the step size you used in your vector of s-values (e.g., 0.2). (Before this conversion, the values of the posterior will sum to 1. After this conversion, the values of the posterior will sum to 1/stepsize.) (f) Plot the likelihood, prior (values sum to 1/stepsize), and posterior (normalized and then scaled so its values sum to 1/stepsize) in the same plot. [Hint: See Figure 1.] (g) Is the posterior wider or narrower than the likelihood and prior? Do you expect this based on the equations we discussed? (h) Change the standard deviation of the measurement distribution to a very large value (e.g., σ = 10) and redraw your plot. What happens to the posterior? Can you explain this? (i) Change the standard deviation of the measurement distribution to a very small value (e.g., σ = 1) and redraw your plot. What happens to the posterior? Can you explain this? 
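The steps (a)-(e) above can be sketched as follows (a sketch, not the required submission):

```python
import numpy as np

# Sketch of steps (a)-(e) for Question 2: prior N(20, 4), measurement noise
# sigma = 5, observed x = 30, grid of hypothesized s values 0, 0.2, ..., 40.
def gaussian(v, mu, sd):
    return np.exp(-((v - mu) ** 2) / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi))

stepsize = 0.2
s = np.arange(0, 40 + stepsize, stepsize)   # (a) hypothesized stimulus values
prior = gaussian(s, 20, 4)                  # (b) p(s); values sum to ~1/stepsize
likelihood = gaussian(30, s, 5)             # (b) p(x=30 | s) as a function of s
posterior = likelihood * prior              # (c) elementwise product
posterior = posterior / posterior.sum()     # (d) normalize: mass function sums to 1
posterior = posterior / stepsize            # (e) density: values sum to 1/stepsize

# Grid MAP estimate; the Normal-Normal formula gives
# (30/25 + 20/16) / (1/25 + 1/16), roughly 23.9.
print(s[np.argmax(posterior)])
```

Plotting `prior`, `likelihood`, and `posterior` against `s` (e.g., with matplotlib) then reproduces the kind of graph shown in Figure 1.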
(3) (Adapted from an early draft of the textbook by Ma, Körding, and Goldreich) Repeat Question (2), but instead of starting with a value of the measurement x, start with a value of the stimulus s = 10. Based on this value of s, draw a random value of x from the measurement distribution p(x|s) (with σ = 5). (Although you know the true value of s, pretend you don't know this value. Your goal is to infer the posterior distribution of s given this value of x.) Repeat this process nine times (each time you will sample x and then infer the posterior p(s|x)). Make a figure with nine graphs, each graph showing the likelihood function, prior distribution (values sum to 1/stepsize), and posterior distribution (normalized and then scaled so its values sum to 1/stepsize) for an individual repetition. You should observe that, from repetition to repetition, the likelihood function and posterior probability density function "jump around". Observe how the posterior shifts under the influence of the "jumping" likelihood function and the stationary prior. Explain.

(4) (Adapted from an early draft of the textbook by Ma, Körding, and Goldreich) Continuing from Questions (2) and (3), generate a distribution of maximum-a-posteriori (MAP) and maximum-likelihood (ML) estimates by: (a) drawing a single s from the stimulus distribution p(s); (b) drawing a single x from the measurement distribution p(x|s) (with σ = 5), and calculating the posterior distribution p(s|x) (normalized and scaled so that its values sum to 1/stepsize). (c) For each of 1000 repetitions of (a) and (b), show a scatter plot of the MAP estimate (y-axis) against the true stimulus s (x-axis). On a separate graph, plot the MLE (i.e., the measurement x) against the true stimulus s. [Hint: See Figure 2.]

Figure 2: (Left) Scatter plot of the MAP estimate (y-axis) against the true stimulus s (x-axis). (Right) Scatter plot of the MLE estimate (y-axis) against the true stimulus s (x-axis).
Each plot contains 1000 dots, one dot for each repetition. (d) Repeat (a), (b), and (c) twice. In the first repetition, make the measurement standard deviation large (σ = 10). In the second repetition, make the measurement standard deviation small (σ = 1). When this standard deviation is small, the MAP and MLE plots should look similar. Why? When this standard deviation is large, the MAP plot looks flat, whereas the MLE plot looks very scattered. Why?

(5) (Adapted from Problem 5.7 from an early draft of the textbook by Ma, Körding, and Goldreich) In Chapters 3, 4, and 5, we were able to derive analytical expressions for the posterior distribution and the response distribution. For more complex psychophysical tasks, however, analytical solutions often do not exist, but we can still use numerical methods. To gain familiarity with such methods, we will work through the cue combination model of Chapter 5 using numerical methods. We assume that the experimenter introduces a cue conflict between the auditory and the visual stimulus: sA = 5 and sV = 10. The standard deviations of the auditory and of the visual noise are σA = 2 and σV = 1, respectively. We assume a flat (uniform) prior over s.
(a) Randomly draw an auditory measurement xA and a visual measurement xV from their respective measurement distributions. (It's okay if a measurement has a negative value.)
(b) Plot the corresponding elementary likelihood functions, p(xA|s) and p(xV|s), in one figure. (Recall that likelihood functions do not need to be normalized.) For the range of possible s values, generate an array of values that start at -10, end at 30, and have a stepsize of 0.2.
(c) Calculate the combined likelihood function, p(xA, xV|s), by numerically multiplying the elementary likelihood functions in Python. Plot this function. (Again, this function does not need to be normalized.)
(d) Calculate the posterior distribution p(s|xA, xV) by normalizing and then scaling the combined likelihood function.
(The posterior distribution p(s|xA, xV) should be normalized and then scaled so that its values sum to 1/stepsize.) Plot this distribution in the same figure as the likelihood functions.
(e) Use Python to find the MAP estimate of s (i.e., the value of s at which the posterior distribution p(s|xA, xV) is maximal).
(f) Compare with the MAP estimate of s computed with an analytic equation using the measurements drawn in (a). For convenience, here is the analytic equation: sMAP = (xA/σA^2 + xV/σV^2) / (1/σA^2 + 1/σV^2).
(g) In the above, we simulated a single trial and computed the observer's MAP estimate of s, given the noisy measurements xA and xV on that trial. If an analytical solution does not exist for the distribution of MAP estimates, we can repeat the above procedure many times to approximate this distribution. Here, we practice this method even though an analytical solution is available in this case. Draw 100 pairs (xA, xV) and numerically compute the observer's MAP estimate of s for each pair as in (e).
(h) Compute the mean of the MAP estimates obtained in (g) and compare with the mean estimate predicted by an analytic equation that uses the true values for sA and sV: (sA/σA^2 + sV/σV^2) / (1/σA^2 + 1/σV^2).
(i) Make a histogram representation of p(sMAP|xA, xV) based on the 100 MAP estimates obtained in (g) (in Python, use the "numpy.histogram" function). [Hint: See Figure 3.]

Figure 3: (Left) Histogram representation of p(sMAP|xA, xV) based on 100 MAP estimates.

(j) Relative auditory bias is defined as the mean MAP estimate minus the true auditory stimulus, divided by the true visual stimulus minus the true auditory stimulus. Compute the relative auditory bias for your estimates.
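Parts (a)-(f) can be sketched numerically as follows (a sketch, not the required submission; the seeded random generator is only for reproducibility):

```python
import numpy as np

# Sketch of parts (a)-(f) for Question 5: sA = 5, sV = 10, sigmaA = 2,
# sigmaV = 1, flat prior over s, grid from -10 to 30 with stepsize 0.2.
rng = np.random.default_rng(0)
xA = rng.normal(5, 2)                              # (a) auditory measurement
xV = rng.normal(10, 1)                             # (a) visual measurement

stepsize = 0.2
s = np.arange(-10, 30 + stepsize, stepsize)
likeA = np.exp(-((xA - s) ** 2) / (2 * 2 ** 2))    # (b) elementary likelihoods
likeV = np.exp(-((xV - s) ** 2) / (2 * 1 ** 2))    #     (normalization irrelevant)
combined = likeA * likeV                           # (c) combined likelihood
posterior = combined / combined.sum() / stepsize   # (d) flat prior: normalize, scale

map_grid = s[np.argmax(posterior)]                 # (e) MAP estimate on the grid
# (f) analytic MAP: precision-weighted average of the two measurements.
map_analytic = (xA / 2 ** 2 + xV / 1 ** 2) / (1 / 2 ** 2 + 1 / 1 ** 2)
print(map_grid, map_analytic)
```

The grid MAP agrees with the analytic MAP to within the grid resolution (0.2); repeating the draw in a loop gives the 100-pair simulation required in parts (g)-(i).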
Numerical Computing, CSCI-UA 421, Spring 2025

● Course Instruction Mode
This course will be taught in person. Lectures will be given using the classroom blackboard and computer demos. Participation in the class is required, and you will be expected to respond to questions via electronic polls. The topic of each class will be posted on this web page, along with references to the relevant parts of the textbooks as well as class notes. Don't hesitate to ask questions during lectures, either by raising your hand or directly speaking out if I don't notice you.
● Small Group Meetings
At the start of the semester I would like to meet all students in small groups in my office. The purpose of these meetings is for me to get to know you and also for you to meet some other students in the class. Please sign up here. If none of the times work for you, send me email.
● Office Hours
I will hold regular office hours on Wednesdays at 3-4 p.m. in CIWW 429, starting Jan 29. If this time does not work for you, send me email to set up an appointment, or just try dropping by my office any time.
● Tutor
Ellen Persson, a Ph.D. student in mathematics, will be a tutor for this class five hours per week. Starting Jan 27, she will hold office hours on Mondays and Wednesdays at 1-2pm and Tuesdays and Thursdays at 10-11am, all in CIWW 412, and online by appointment if necessary (send email to [email protected]).
● Course Summary
Introduction to numerical computation: the need for floating-point arithmetic, the IEEE floating-point standard, correctly rounded arithmetic, exceptions. Conditioning and stability. Direct methods for numerical linear algebra (Gauss elimination (LU factorization) and Cholesky factorization for systems of linear equations; normal equations and QR factorization for least squares problems). Eigenvalues and singular values. Iterative methods (Newton's method for solving a single nonlinear equation or a system of nonlinear equations).
Discretization methods (approximating a derivative, solving a differential equation with boundary conditions). Polynomial interpolation and cubic splines. Convex optimization: the gradient method and Newton's method. Importance of numerical computing in a wide variety of scientific applications. How can you tell how accurate your answers are? We will use the computer a lot in class and you should become quite proficient with MATLAB by the end of the course. If you like math as well as programming, you should enjoy this class!
● Prerequisites
Computer Systems Organization (CSCI-UA 201), either Calculus I (MATH-UA 121) or both of Mathematics for Economics I and II (MATH-UA 211 and 212), and Linear Algebra (MATH-UA 140), or permission of instructor. Knowledge of MATLAB in advance is not expected. The Linear Algebra prerequisite is particularly important; if you are not sure if you have enough background, send me email to discuss this. If you have already taken the math department's Numerical Analysis course, you will find there is a lot of overlap with this course; if you are not sure whether you should take this course anyway, please send me email to discuss this.
● Requirements
o Attend class, responding to polls (10% of the final grade) (if you are unable to attend a class, because of sickness or for another reason, please let me know by email)
o Read the assigned chapters from the two textbooks, and other assigned notes
o Do the homework (30% of the final grade)
o Write the midterm exam and final exam (each 30% of the final grade)
● Required Textbooks
o Numerical Computing with IEEE Floating Point Arithmetic, by the instructor. The first edition of this book was published by SIAM in 2001, and my web page for the first edition is here. Although the basic principles of IEEE floating point arithmetic have not changed much since 2001, the technology implementing them has.
I have just finished writing the second edition of this book, which will be published by SIAM later this year. You can access the final draft of the second edition here, but please do not post it anywhere. You will be expected to read Chapters 4, 5, 6, 7, 11, 12 and 13, and I will ask questions about these chapters in the homeworks and midterm exam, but I recommend that you read the whole book (it is only 140 pages, and you may find Chapter 15, which is about the new floating point formats for AI, quite interesting). If you find typos or have other comments please send them to me by email.
o A First Course on Numerical Methods, by Uri Ascher and Chen Greif. This book is currently being revised by the authors. The revised chapters will be posted here as we get to the relevant topics.
Chapter 1 (good to read, but not required as this is mostly covered in my book)
Chapter 2 (not required now, we'll cover this later)
Chapter 3 (linear algebra background, read except as noted)
Chapter 4 (direct methods for linear systems, important to read this)
Chapter 5 (please read Section 5.1 only)
Chapter 6 (please read, except as noted in pdf)
o Required Software
MATLAB: available for free to NYU students. There are many books on MATLAB; I recommend MATLAB Guide, by D.J. Higham and N.J. Higham, SIAM, 2000. Chapters can be freely downloaded via the NYU library subscription using this link.
o Class Forum
The class forum will use the Ed Discussion tool in Brightspace. Feel free to ask questions about the class and homework here, either to everyone, or just to me and the graders, and feel free to answer questions posed by other students. However, don't post solutions to the homework! Helpful hints are ok. Please participate!
o Lecture Schedule and Notes (future dates are tentative)
1. Tue Jan 21: Introduction and overview, IEEE floating point representation (my book, Ch 1-4), notes
2. Thu Jan 23: Rounding, absolute and relative rounding errors (my book, Ch 5), notes
3.
Tue Jan 28: Two loop programs (see Ch 10), correctly rounded floating point operations, exceptions (my book, Ch 6-7), notes, mfiles firstLoopProgram.m, secondLoopProgram.m, parRes.m, roundModes.m
4. Thu Jan 30: Floating point microprocessor and programming language support for the standard, cancellation, approximating a derivative by a difference quotient (my book, Ch 8, 9, 11), notes, mfile differenceQuotientErrors.m
5. Tue Feb 4: Conditioning of problems, intro to stability of algorithms (my book, Ch 12-13), notes, mfiles condNumExpt.m, compoundInterest.m
6. Thu Feb 6: More on stability of algorithms (my book, Ch 13), notes, mfile approxExp.m
7. Tue Feb 11: Linear algebra review: linear independence of vectors, conditions for a square matrix to be nonsingular, matrix rank. Vector and matrix norms, computing the matrix ∞-norm (AG, Ch 3). Skip sections on positive definite matrices, orthogonal matrices, eigenvalues, singular values, SVD and differential equations for now (we will return to these later), notes
8. Thu Feb 13: Polynomial interpolation via Vandermonde matrices (AG, sec 3.5). MATLAB's \ (backslash). Solving linear systems of equations (AG, sec 4.1-4.2): back substitution, Gaussian elimination without pivoting, equivalence to LU factorization (decomposition), notes, mfiles plotInterpPoly.m, gauss_el.m, backsolve.m
NO CLASS on Tue Feb 18
9. Thu Feb 20: More details on the LU factorization (AG, sec 4.2), Gauss elimination with partial pivoting (GEPP) and the PA=LU factorization (AG, sec 4.3), notes
10. Tue Feb 25: Continuation of GEPP and its equivalence to PA=LU, demo that on random matrices GEPP is stable, rare worst-case instability of GEPP (AG, sec 4.3), banded matrices (AG, sec 5.1), MATLAB's sparse matrices, notes, mfiles geppUnstableExample.m, geppUnstableExampleDemo.m
11. Thu Feb 27: Cholesky factorization of positive definite matrices. Use my Cholesky notes instead of AG sec 4.4. notes, mfile chol3.m
12.
Tue Mar 4: Least squares: the normal equations and solution by Cholesky factorization (AG, sec 6.1, p. 173-183), orthogonal vectors and matrices (AG, p. 87-88), notes
13. Thu Mar 6: Least squares: solution via QR factorization using Householder reflections (AG, sec 6.2 and 6.3, but skip the subsection on the Gram-Schmidt process, which is classical but not used much, and skip the part on the SVD and the pseudo-inverse for now), my notes on QR, notes
14. Tue Mar 11: Errors, residuals, condition numbers and stability for solving Ax=b (AG, sec 4.5, p. 132-134), notes, mfiles errorResidualConditioningStabilityDemo.m, getIllCondRandomSymPosDefMtx.m
15. Thu Mar 13: TBA, release practice midterm
16. Tue Mar 18: Review of practice midterm
17. Thu Mar 20: Midterm Exam. No notes, laptops, phones or other devices permitted.

o Exams
The midterm will cover all lecture topics before spring break. The focus will be on my book, chapters 4-7 and 11-13, and AG, chapters 4-6 (skipping sections as noted in the PDF notes), with emphasis on the topics covered in the homeworks, but also other topics covered in class. You will be asked to write MATLAB code in some questions. The midterm will be in class on Thu Mar 20, 11-12:15, CIWW 101. No notes, laptops, phones or other devices permitted.
The final exam will focus mostly on the topics discussed after spring break, with an emphasis on the topics covered in the homeworks. You will be asked to write MATLAB code in some questions. The final exam will be Wed May 7 ("reading day"), 12:00-1:50 pm, CIWW 101. No notes, laptops, phones or other devices permitted.

o Homework
There will be 8 homework assignments, 5 before spring break and 3 after. If you have questions about the homework, please post them on Ed Discussion, where the tutor or I, or maybe other students if they wish, can answer them.
It is important that you do the homework mostly by yourself (not jointly with another student), but when you get stuck, I encourage you to consult other students, the class tutor or me, the web, or even AI tools to get help when necessary. However, when you get help from ANY of these sources, or any other source, or give help to other students, it's important to acknowledge that in writing in your homework submission, explicitly explaining how much help you got and how much of the work you did yourself. Submitting work not done by you as if it were your own is called plagiarism and is not acceptable. For more information, see the CS department's policy on integrity. Penalty for not acknowledging your sources: zero for the homework. Finally, remember that if you don't mostly do the homework yourself, you will not learn the skills you need to pass the midterm and final exams.
Homework assignments will be posted here, but you should submit your homework on Gradescope. Homework is due at 11:59 pm on the given date. Late homework will be penalized 10% if just one day late, and 20% if between two and seven days late. Homework will not be accepted more than one week late, except in special circumstances. If you have questions about the grading of the homework, post a private question to me and the grader on Ed Discussion.
Homework 1, posted Jan 23, due Jan 30
Homework 2, posted Jan 30, due Feb 11
Homework 3, posted Feb 11, due Feb 20
Homework 4, posted Feb 20, due Mar 4
Homework 5, posted Mar 4, due Mar 13

o Don't Hesitate to Ask for Help
If you have questions, ask them in class or on the class forum, come to my office hour, or send me email to set up an appointment. Don't wait until it's too late!
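The tradeoff explored in differenceQuotientErrors.m (lecture 4), where truncation error shrinks with h while rounding error from cancellation grows, can be sketched as follows. The course uses MATLAB; this Python analogue is only illustrative:

```python
import math

def diff_quotient(f, x, h):
    """Forward-difference approximation to f'(x)."""
    return (f(x + h) - f(x)) / h

# Error of the approximation to d/dx exp(x) at x = 1 for decreasing h:
# the error first shrinks (truncation error ~ h) and then grows again
# (rounding error ~ machine epsilon / h, due to cancellation).
exact = math.exp(1.0)
for h in [1e-1, 1e-4, 1e-8, 1e-12]:
    err = abs(diff_quotient(math.exp, 1.0, h) - exact)
    print(f"h = {h:.0e}   error = {err:.2e}")
```

The best achievable error occurs near h on the order of the square root of machine epsilon, which is the point of the experiment.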
CSCI 421 Numerical Computing, Spring 2025
Homework Assignment 4
1. (Taken from AG Ex 4.9, reworded for clarity.) Let A be the matrix given in the exercise. Carry out all the steps of GEPP (Gauss Elimination with Partial Pivoting) on this matrix, writing out the matrices M(i), P(i), i = 1, 2, 3, giving the upper triangular matrix U. Also determine the matrices M̃(i), i = 1, 2, 3, which should, like the M(i), be unit lower triangular (see AG, p. 121). Then compute M̃ = M̃(3)M̃(2)M̃(1) and L = M̃^(-1), which should also be unit lower triangular. Finally, check that, as desired, PA = LU. You can use a mix of hand calculations and matlab calculations, as you find convenient. And you can double-check that you got the right final answer with the matlab command [L,U,P]=lu(A).
For the remainder of the homework, see next page.

A Truss Problem
A typical task in structural engineering is to design a bridge to be strong enough to withstand a certain load. Consider the following plane truss, which is a set of metal bars connected by frictionless pin joints. ("Plane" refers to the fact that the truss is two-dimensional, not three-dimensional as it would be in reality.) The symbol at the left end of the truss indicates that it is fixed at that end, while the symbol at the right end indicates that the truss is free to move horizontally, but not vertically. The three arrows pointing down represent loads on the truss. The problem is to solve a linear system of equations for the internal forces in the bars. A positive internal force indicates that the bar is being extended (pulled apart a little) by the load, while a negative internal force indicates that the bar is being compressed. It is assumed that, as long as the internal forces are not too big, bars will not be stretched or compressed more than a tiny amount: thus the structure does not collapse, but remains in equilibrium. By computing the internal forces, an engineer has more information as to whether the truss is indeed strong enough to withstand the load.
There are two linear equations for each internal joint in the truss, representing forces in the horizontal and vertical directions, which must balance at the joints. Let us denote the internal forces by x1, x2, . . . , x13, corresponding to the numbers on the bars in the illustration. The balancing of forces at joint C in the horizontal direction gives the equation x4 = x8, while the balancing of forces at joint C in the vertical direction gives simply x7 = 0. The balancing of forces at joint B in the horizontal direction gives x2 = x6, while the vertical direction at joint B gives x3 = 10. The "10" comes from the 10 ton vertical load at joint B. The balancing of forces at joint A is a little more complicated, since it involves two bars oriented at an angle of 45 degrees as well as a horizontal and a vertical bar. Let α = cos(π/4) = sin(π/4) = √2/2. Then the balancing of horizontal forces at joint A gives the equation αx1 = x4 + αx5, and the balancing of vertical forces at joint A gives αx1 + x3 + αx5 = 0. There are also horizontal and vertical force equations at joints D, E and F, which can be derived using the same ideas. These amount to 12 equations altogether. The 13th equation comes from the right end point G: since this end point is free to move horizontally, but not vertically, there is just one force equation, balancing the forces horizontally: x13 + αx12 = 0. Thus, we have a total of 13 linear equations defining the 13 internal force variables.
2. Derive and solve the 13 linear equations in 13 variables. Write the equations using matrix notation, as Ax = b, and enter the matrix A and right-hand side vector b in a matlab function. You should start by allocating the space for A like this: A=zeros(13,13). Order the variables corresponding to bars 1, 2, 3, . . . and the equations corresponding to nodes A, B, C, . . . . Then solve the system of linear equations using the matlab backslash operator: x = A\b.
You can put this in the function too, and return x, the vector of internal forces, as an output parameter of the function. Print the solution vector x (in any format). Mark the bars on a copy of the truss picture given above, indicating which internal force is an extension force (xj > 0) and which is a compression force (xj < 0). Leave the bar unmarked if the internal force is zero. It's fine to do this by hand, although you can write a program to do this if you want.
3. Generalization: Write a new function that sets up and solves the equations for a variable-sized truss, with k sections exactly like the section ABCDEF instead of one, so that two neighboring sections share a vertical bar, with the EF bar from one section identical to the AB bar for the next section, where k is an input parameter to the function. Note that this means 4 new nodes and 8 new bars for each increase of k by 1. This will require some careful thought. Start by sketching the larger truss on paper and carefully writing down the relevant equations systematically. Use orderings for the variables (i.e., the bars) and the equations (i.e., the nodes) that are consistent with the case k = 1. Include plenty of comments explaining the code. Don't forget to start by allocating the space for A by A=zeros(n,n), where n depends on k. Make sure you recover the previous solution when k = 1, and then go on to test larger values of k, with a load vector that increases like this from left to right: 10, 20, 30, 40, 50, . . . . Print the computed internal force vector x for k = 10. There is no need to plot the truss.
4. Sparsity: The bandwidth of A (see AG, section 5.1) is defined as p + q − 1, where p and q are the smallest nonnegative integers such that a(i,j) = 0 if i ≥ j + p or j ≥ i + q. You can visualize this using spy to display the nonzero entries in the matrix A, say, for k = 10. What is the bandwidth of this matrix A?
Would it be significantly different if you chose different orderings, say numbering all the top horizontal bars first, then the vertical and diagonal bars, and then the bottom horizontal bars? You can answer this either by a clear explanation, or by changing the order and displaying the results with spy. Also give the bandwidth for A^(-1) (the inverse of A, computed by inv(A)) and for the L and U factors obtained from [L,U] = lu(A). Remember that when you request only two output arguments from lu, the first, L, is not actually lower triangular if pivoting (row interchanges) takes place; it is permuted lower triangular (or "psychologically" lower triangular). Did pivoting occur? How sparse are A^(-1), L and U, compared with the sparsity of A? Answer this carefully, comparing the four different spy plots. You don't need to use matlab's sparse for this question, although you can if you want (the spy figures should be the same).
5. Timing Comparisons: Execution of matlab code can be timed using either timeit or tic...toc (see here for details). Experiment with how long it takes to solve the system of equations for k that is large enough that timing comparisons are meaningful. Compare the following:
• x=A\b
• getting x by first computing A^(-1) and multiplying it onto b
• getting x by first computing the L and U factors with lu and then solving the two triangular systems Ly = b and Ux = y using \. (This is what is actually going on when you type x=A\b, so the timing should be about the same.)
• the same 3 again, but with A set up as a sparse matrix instead of the default "full" or dense matrix type; type help sparse for information as to how this works. In particular, to initialize the matrix to the sparse zero matrix, you need to write A=sparse(n,n) (if you use A=sparse(1,1) it should work, but it will be slower because the data structure will change every time you assign a new matrix entry; do not use A=sparse(1), which will generate a sparse matrix with a(1,1) = 1).
And if you don't initialize A at all, it will be full, not sparse, by default. This means editing your function that sets up A accordingly; add an input parameter that determines whether A is to be set up as a full (dense) or sparse matrix. How do the execution times compare? Does this relate to the sparsity displayed in the spy plots? Finally, compare the timings for larger values of k. Do this just for solving with x=A\b, but for two cases: A is full (dense), and A is sparse. Avoid making k too large, as then the full case may run out of memory. Plot both running times on one plot as a function of k using semilogy, legend, title, xlabel, ylabel. Then make a second plot of the running times for the sparse case only, which should allow you to make k much larger – but don't make it so large that you have to wait more than a few minutes for the program to run.
Submit: listings of the matlab functions, the marked truss picture requested in #2, the output x for the original truss and the generalized truss with k = 10, the plots generated by spy for k = 10, the derivation of the equations for both the original and the generalized truss, the results of the timing comparisons, including the plots of running times as a function of k, and answers to all the questions above.
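The bandwidth definition in question 4 can be checked numerically. Here is a small Python/NumPy sketch of the definition (an analogue only, since the assignment itself expects MATLAB and spy; the tridiagonal test matrix is illustrative, not the truss matrix):

```python
import numpy as np

def bandwidth(A):
    """p + q - 1, where p and q are the smallest nonnegative integers with
    A[i, j] == 0 whenever i >= j + p or j >= i + q (AG, section 5.1)."""
    i, j = np.nonzero(A)
    if i.size == 0:
        return 0                      # all-zero matrix: no band to speak of
    p = int((i - j).max()) + 1        # smallest p with nonzeros below row i = j + p
    q = int((j - i).max()) + 1        # smallest q with nonzeros right of col j = i + q
    return p + q - 1

# A 6x6 tridiagonal matrix has p = q = 2, so bandwidth 3.
T = np.diag(2.0 * np.ones(6)) + np.diag(-np.ones(5), 1) + np.diag(-np.ones(5), -1)
print(bandwidth(T))   # → 3
```

Applying the same computation to the truss matrix A for different bar/node orderings shows directly how the ordering changes the band structure that spy displays.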
Assignment Briefing (Level 5)
Module Name: Business Decision Modelling
Module Code: BB5112
Assignment Title: Assignment 2
Type of Submission: Online through Canvas
Weighting of the assignment in the overall module grade: 70%
Word Count/Time allocation (for presentations): No limit
Issue Date: 3rd March 2025
Submission Date: 17th April 2025
Date of Feedback to Students: 17th May 2025
Where feedback can be found: CANVAS
Employability skills (Professional, Creative, Thoughtful, Resilient, Proactive): Literacy ✔, Communication ✔, Critical Thinking ✔, Relationship building ✔, Adaptability ✔, Numeracy ✔, Storytelling ✔, Critical Writing ✔, Networking ✔, Commercial Awareness ✔, Creativity ✔, Soft skills ✔, Presentation ✔, Problem Solving ✔, Teamwork ✔, Digital Skills ✔, Project Management ✔
How these skills are being developed in this assessment: Through the development of forecasting models to examine both stationary and trended time series. The steps taken will be developed in workshops and will result in a mechanism to generate appropriate forecasts associated with supplied datasets, accompanied by a comprehensive report. Workshops will involve peer discussion in the development of the forecasting models, but the submission must be an individual piece of work.

Data Driven Decision Making/Business Decision Modelling, TB2
Assignment 2: Individual Report – Forecasting
Consider the two time series datasets below in Tables 1 and 2, where n = 24 in both series. These datasets are also available on the Assignment Two page on Canvas as dataset1.xlsx and dataset2.xlsx. You are required to do the following:
TASK 1 (15 marks)
1. Conduct a diagnostic analysis on both datasets. From these diagnostics, identify which time series you think is stationary and which you think exhibits trend and seasonality. You should justify your conclusions with both visual and numerical evidence.
TASK 2 (35 marks)
2. Using the dataset you feel is stationary, carry out the following:
a.
Use the moving average (MA) approach to smooth the data using a moving average of period k = 2, 4, 6 and produce a forecast for period n+1, i.e. period 25. Determine which scheme appears to perform best;
b. Use Solver to produce a weighted moving average for k = 2, 4, 6 with weights optimised on both MAPE and RMSE to produce a forecast for period n+1. Which scheme appears to perform the best?
c. Compare your results obtained in a. and b. above;
d. Conduct a similar exercise using exponential smoothing, initially with alpha = 0.2 and 0.8. Then use Solver to optimise the value of alpha based on both MAPE and RMSE to produce a forecast for period n+1.
TASK 3 (35 marks)
3. Using the dataset you feel exhibits trend and seasonality, use an additive decomposition model to:
a. Extract a seasonal index for each quarter;
b. Deseasonalise the data for each quarter;
c. Produce a deseasonalised and seasonalised forecast for each period;
d. Produce a deseasonalised and seasonalised forecast for periods n+1, n+2, n+3 and n+4;
e. Plot the actual, deseasonalised and seasonalised data and forecasts on a single graph and comment on the results.
TASK 4 (15 marks)
4. Consolidate your results in 1-3 above into a short report, which should include a critical evaluation of the methods you have used and consideration of the potential impact on business strategy of effective use of the forecasting process.
Instructions
a) Upload two spreadsheets with the solutions for each dataset;
b) Upload a Word document containing your report;
c) The piece of work is individual;
d) The submission date for this assignment is by 23.59 on 17th April 2025.
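The smoothing and error computations in Task 2 can be sketched in a few lines. This Python sketch is illustrative only (the assignment expects spreadsheet models with Solver), and the sample series is made up:

```python
import math

def moving_average_forecast(y, k):
    """MA forecast for period n+1: the mean of the last k observations."""
    return sum(y[-k:]) / k

def exponential_smoothing_forecast(y, alpha):
    """Simple exponential smoothing; the final smoothed value is the n+1 forecast."""
    s = y[0]                          # initialise with the first observation
    for obs in y[1:]:
        s = alpha * obs + (1 - alpha) * s
    return s

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    """Root mean squared error."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

series = [52, 48, 50, 51, 49, 50]     # made-up stationary-looking data
print(moving_average_forecast(series, 2))              # → 49.5
print(exponential_smoothing_forecast(series, 0.2))
```

Optimising the weights or alpha on MAPE or RMSE, as the task requires, amounts to minimising one of the two error functions above over the in-sample one-step-ahead forecasts.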
Physical Geography: The Dynamic Earth
EU/SC/GEOG1402 3.0, Faculty of Environmental and Urban Change, Winter 2025
LAB EXERCISE # 2: Plate Tectonics and Geohazards

INTRODUCTION
Depending on your location, the ever-present movement of the Earth's lithospheric plates may vary in its effect on your activities. In some areas, such as the heart of the large continental plates, seismic activity and the associated potential geohazards may be very far from the mind. A person could go their entire life without feeling a single tremor of the ground shaking. In other locations, the risk of disturbance may be ever-present. In this assignment, we will consider the amazing variability in the conditions associated with the movement between different lithospheric plates on our planet and the associated risk of geohazard in some locations where plates interact.
As we discussed in lectures, the surface of the Earth is covered by several different plates of varying sizes. These plates include the crust and lithospheric mantle, moving along the top of the asthenosphere. There are seven (7) major and seven (7) minor plates, and smaller fragments of others (Figure 1). The plates interact with each other in different ways. In some locations, they come together (convergent boundary); in others, they move apart (divergent boundary); and in others, they slide in parallel (transform boundary).
Figure 1: A map showing the Earth's tectonic plates and the approximate rates and directions of plate motions. (https://opentextbc.ca/physicalgeology2ed/chapter/10-4-plate-plate-motions-and-plate-boundary-processes/#fig10.4.1)
The locations where different plates interact represent the most common locations for significant geohazards on Earth in the form of earthquakes and volcanoes. This is not to say that these disturbances cannot occur in areas more central to plates.
Earthquakes commonly happen along fault lines in the interior of our continental lithospheric plates, and we discussed the nature of shield volcanoes occurring over hot spots. However, the geologically active boundaries of the moving lithospheric plates are the location of most earthquakes (Figure 2) and volcanoes, including those most catastrophic to human endeavours.
Figure 2: Location of earthquakes from 2000-2008 and their depth. Image: USGS.

Part A: The Ocean Cross-Section and Features
Figure 3 below is a contour map of the ocean floor between North America and Europe.
Figure 3: Contour map of the North Atlantic Ocean. This map is available as a standalone pdf on eClass.
1. Drawing the ocean cross-section.
a. Draw a cross-section of the ocean along the track of the line in Figure 3, from the X on the north coast of Newfoundland to the X on the west coast of England. Remember to label the two axes carefully. (12 marks)
b. Calculate the vertical exaggeration of the cross-section you have drawn using the following equation (2 marks):
2. Label the following features or processes on your cross-section (5 marks total):
a. Mid-oceanic ridge
b. Continental shelves
c. Deep ocean basins / abyssal plains
d. Indicate where spreading (divergence) is occurring
e. Indicate the pattern of convection of the mantle beneath your cross-section
3. Name the tectonic plates on the west and east of the cross-section. (1 mark)
4. What plates border the Australian plate? Describe the movement of the Australian plate relative to those around it. Use Figure 1 to complete this question. (3 marks)
5. Considering the Pacific plate, indicate which neighbouring plate(s) represent convergent, divergent, and transform boundaries.
Use Figure 1 to complete this question. (3 marks)

Part B: Catastrophic Earthquake Probability
The subduction zone in the west of North America, where the oceanic crust of the small Juan de Fuca Plate and the continental crust of the North American plate are converging (called the Cascadia Subduction Zone or CSZ), is an area of intense geological activity. This includes the potential for significant geohazards, such as earthquakes and volcanoes. Nelson et al. (2006) used proxy records around the Pacific to infer the occurrence of large-scale tsunamis, which would be associated with catastrophic earthquakes along the CSZ. Catastrophic, in this case, refers to earthquakes with a magnitude greater than 9.0. Globally, these occur approximately every ten years. The last catastrophic earthquake in the CSZ occurred in 1700 CE. Using the inferred timing of these events (Table 1), you will calculate the probability and recurrence interval of catastrophic earthquakes in the CSZ. This part of the assignment is modified from Introduction to Hazard and Risk by T. Juster (2012).
Table 1: The inferred dates of the last 9 mega earthquakes to impact the CSZ. From Nelson et al. 2006.
1700 CE, 1110 CE, 660 CE, 490 BCE, 890 BCE, 1390 BCE, 1790 BCE, 2390 BCE, 2890 BCE
1. For each of the nine mega earthquakes, calculate the years since they occurred. Don't forget to account for the fact that some dates are "CE" and others "BCE". Calculate the probability of a mega earthquake occurring over the period covered by these inferred data using the formula, and report that in a sentence (3 marks):
2. Convert this value to a recurrence interval and report it in a sentence (1 mark):
3. Comment on the advantages of using recurrence interval instead of probability for this type of hazard assessment.
(3 marks)

Part C: Hotspot Volcanoes and Plate Movement
Before the availability of highly accurate instrumentation or satellite measurements, much of what we knew about plate motion and velocity was derived from studying volcanic island chains associated with hotspots. As a plate moves over a hotspot like a gargantuan "conveyor belt", new volcanic islands are born above the hotspot. The continued movement results in volcanoes becoming extinct, and subsequent weathering and erosion reduce their size over time. Eventually, these former volcanoes/islands may slip below the waves and exist as seamounts. The Hawaiian islands are the best-known example of this phenomenon. The history of hotspot volcanoes over this hotspot stretches into the north Pacific Ocean as the Hawaiian-Emperor Seamount Chain (Figure 4). This part of the assignment is modified from Hotspot Theory and Plate Velocities by J. Russell (2008).
Figure 4: Bathymetric map of the north Pacific, with the Hawaiian-Emperor Seamount Chain visible.
Researchers have used radiometric dating with Potassium-Argon (K-Ar) techniques to date the approximate age of volcanoes (currently on islands in the Hawaiian Archipelago or existing as seamounts). A selection of these are listed in Table 2.
1. Plot an X-Y scatterplot of the data with distance on the X (horizontal) axis and age on the Y (vertical) axis. This plot can be made in a spreadsheet program (e.g. Microsoft Excel) or by hand. Include this plot in your report, with a descriptive caption. (4 marks)
2. Draw a straight trendline that best fits the relationship between these two variables. Determine the slope of the line and state it in a sentence, in the units of the original chart (Ma/km). If determining the slope by hand, you can calculate the change in y-values between any two points and the change in x-values between the same two points, and calculate the slope as the change in y divided by the change in x. (2 marks)
3. Convert the slope of the line into units of years/cm.
Note: a Ma is 1,000,000 years, and a km is 100,000 cm. Report your answer in a sentence. (1 mark)
4. Calculate the velocity of the Pacific Plate movement in cm/year and report it in a sentence. (2 marks)
Table 2: Selected ages of Hawaiian Islands (in millions of years old; Ma) and their distance to Kilauea (km).
References: Nelson, A.R., Kelsey, H.M., and Witter, R.C., 2006, Great earthquakes of variable magnitude at the Cascadia subduction zone, Quaternary Research 65(3), 354-365.
What Else Do I Need to Know?
• This assignment is weighted 15%.
• You will work on this assignment individually; no group submissions will be accepted for any reason.
• Typed answers to the assignment questions are preferred. Hand-drawn figures should be scanned or photographed and inserted into your Word document. Ensure all aspects of the image are included and legible.
• Your assignment will be saved as an MS Word or PDF document and uploaded to eClass.
• All policies related to late assignments can be viewed in the course syllabus.
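The arithmetic in Parts B and C can be sketched as follows (Python, for illustration only; the lab's own probability formula did not survive in this copy, so the events-per-year-of-record convention below is an assumption, and the slope value is hypothetical):

```python
# Part B: the nine inferred mega-earthquake dates from Table 1 (BCE as negative years)
dates = [1700, 1110, 660, -490, -890, -1390, -1790, -2390, -2890]
record_span = max(dates) - min(dates)           # years covered by the record
n_events = len(dates)
annual_probability = n_events / record_span     # assumed convention: events per year of record
recurrence_interval = 1 / annual_probability    # years per event, the reciprocal of the probability

# Part C: converting a hypothetical trendline slope of 0.01 Ma/km into yr/cm,
# using the conversions stated in the note (1 Ma = 1,000,000 yr; 1 km = 100,000 cm)
slope_ma_per_km = 0.01
yr_per_cm = slope_ma_per_km * 1_000_000 / 100_000
plate_velocity_cm_per_yr = 1 / yr_per_cm        # invert years/cm to get cm/year

print(record_span, recurrence_interval, plate_velocity_cm_per_yr)
```

The recurrence interval is simply the reciprocal of the annual probability, which is why question 2 describes it as a conversion.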
M2 Coding Challenge: Recursion
Computer Science 120 - Assignment Instructions
For each of your chosen programs:
• Use pseudocode to plan out the program.
• Write the code for the program and use whitespace to make it easier to read.
• Run the program and check for errors.
• List any corrections you made to your original plan at the end of the program using pseudocode.
• Cite where and how you used generative AI to assist you as a comment at the end of your program.
• Save the program using an appropriate name.
Here are some options for how you may use generative AI to assist you with this assignment:
• After you have done your own work, compare it to an AI-generated solution by asking the AI to break down the same problem using computational thinking. Evaluate the work of the AI for its approach, style, efficiency, and effectiveness. What did it do differently, and what advantages/disadvantages are there to solving the problem in this way?
• If you are getting errors you cannot troubleshoot on your own, ask an AI to identify these issues for you.
Always cite where and how you used generative AI as a comment at the end of your program. Only use AI-generated code that you fully understand and can confidently modify for your own needs.
Complete BOTH of the following programs:
Program 1: Tribonacci. Tribonacci numbers are a sequence of numbers where each number is the sum of the three preceding ones. Write a program to recursively calculate a given Tribonacci number. Example output:
Program 2: Power Function. Write a program to recursively calculate the power of a given base raised to an exponent. Example output:
Upload your two Python code files in ManageBac
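For orientation, here are minimal recursive sketches of the two programs. The base cases are assumptions (Tribonacci conventions vary; this one uses T0 = 0, T1 = 0, T2 = 1, and the power function assumes a nonnegative integer exponent):

```python
def tribonacci(n):
    """nth Tribonacci number, assuming the convention T0 = 0, T1 = 0, T2 = 1."""
    if n < 2:
        return 0                      # base cases T0 and T1
    if n == 2:
        return 1                      # base case T2
    return tribonacci(n - 1) + tribonacci(n - 2) + tribonacci(n - 3)

def power(base, exponent):
    """base raised to a nonnegative integer exponent, computed recursively."""
    if exponent == 0:
        return 1                      # base case: anything to the power 0 is 1
    return base * power(base, exponent - 1)

print([tribonacci(n) for n in range(8)])   # → [0, 0, 1, 1, 2, 4, 7, 13]
print(power(2, 10))                        # → 1024
```

Note that the naive Tribonacci recursion recomputes subproblems and is exponential in n; memoisation would be a natural improvement to discuss in the post-plan reflection.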
Macro Theory II: Bonus Assignment (not mandatory): Case Studies on Economic Growth Miracles and Disasters

1 Objective
The goal of this assignment is to analyze the factors that contribute to exceptional economic growth and the reasons behind economic stagnation or collapse, with a focus on developments from the 20th century onward in specific regions such as East Asia and Latin America. Students will study two cases: one of a growth miracle, where a country achieved rapid and sustained economic growth, and one of a growth disaster, where economic policies led to stagnation or decline.
Maximum possible points: 15. These points will be allocated between your midterm and final exam at my discretion. Individual submissions are not permitted. You may work in groups of up to six members, separate from your existing groups for mandatory assignments.

2 Case Study 1: South Korea – A Growth Miracle
2.1 Background
South Korea was one of the poorest countries in the world after the Korean War (1950-1953), with a per capita GDP lower than many African nations at the time. External factors such as global trade, financial crises, and geopolitical influences also played a role in shaping its economic trajectory. However, over the last six decades, South Korea has transformed into one of the world's leading economies.
2.2 Key Factors in Growth
• Government-Led Industrial Policy: The government promoted key industries such as electronics, shipbuilding, and automobiles.
• Export-Oriented Strategy: Focus on high-quality goods for international markets rather than domestic consumption.
• Investment in Education: Heavy investment in education and workforce skills increased productivity.
• Chaebol System: Large conglomerates like Samsung, Hyundai, and LG were supported by government policies and access to credit.
• Macroeconomic Stability: Stable inflation and fiscal policies provided a foundation for sustained growth.
• Technological Advancement: Transition from labor-intensive manufacturing to high-tech industries, making South Korea a leader in innovation.
2.3 Results
• GDP per capita increased from $158 in 1960 to over $30,000 in 2020.
• South Korea became the 10th largest economy in the world.
• High levels of infrastructure development and urbanization.
• Strong focus on research and development (R&D), making it one of the most innovative economies.
2.4 Discussion Questions
1. What lessons can other developing countries learn from South Korea's growth experience?
2. What risks did South Korea face in its rapid growth, and how did it manage them?
3. How important was government intervention in shaping South Korea's economic success?

3 Case Study 2: Argentina – A Growth Disaster
3.1 Background
At the beginning of the 20th century, Argentina was one of the richest countries in the world, with a per capita GDP comparable to that of countries like France and Germany. However, it failed to sustain its economic dominance and has experienced repeated economic crises.
3.2 Key Factors in Economic Decline
• Import Substitution Industrialization (ISI): Protectionist policies discouraged trade and led to inefficiencies in domestic industries.
• Political Instability: Frequent coups and changes in government policies created uncertainty.
• Excessive Government Spending and Debt: Heavy borrowing led to unsustainable public debt.
• Hyperinflation: Poor monetary policies resulted in inflation rates exceeding 3,000% in the late 1980s.
• Overreliance on Commodities: Dependence on agricultural exports made Argentina vulnerable to global commodity price fluctuations.
• Failure to Reform: Unlike South Korea, Argentina did not transition to a knowledge-based economy and struggled with corruption.
3.3 Results
• Frequent debt defaults and IMF bailouts.
• Economic stagnation and income inequality.
• Brain drain: many skilled professionals emigrated due to instability.
• Inflation remains a chronic problem, with recent rates exceeding 100% in 2023.
3.4 Discussion Questions
1. What key turning points or policy missteps led Argentina, despite its early economic advantage, to fail in sustaining long-term growth?
2. What role did government policies play in Argentina's economic downfall?
3. How could Argentina break free from its cycle of crises and achieve sustained growth?

4 Assignment Instructions
• Write a comparative analysis of these two case studies, focusing on policy decisions, economic structure, and institutional factors.
• Use at least three academic or credible sources to support your arguments.
• Format: 5-7 pages, double-spaced, Times New Roman, 12pt font.
• Due Date: [March 9, 2025].
5 Assessment Criteria
• Depth of analysis (30%)
• Use of data and evidence (25%)
• Clarity and coherence of argument (20%)
• Engagement with discussion questions (15%)
• Proper citation of sources (10%)
6 Conclusion
This assignment will help students understand how economic policies, institutions, and historical contexts shape long-term economic outcomes. By comparing South Korea's remarkable rise to Argentina's struggles, students will gain insight into the key determinants of economic success and failure.
CIS 481: Parallel & Distributed Software Systems
Homework #1, Problem Set 1
Due: 11:59 PM, Tuesday, February 11, 2025
Note: There are 5 problems worth 10 points maximum. All homework must be edited using a word processor and submitted via myCourses. Please include a cover page with your full names (group info), course title, problem set number, and your submission date. You will automatically lose 1 point if the cover page is missing.
Problem 1-1. Parallel Computations (1 + 0.5 points)
Andrews, Exercise 1.13 (a) and (c) only, page 37.
Note: Develop the pseudocode like the worker processes demonstrated on page 16 of your textbook.
Problem 1-2. Atomic Actions and Histories (2 + 0.5 points)
Andrews, Exercise 2.10 (a) and (b), page 84. Andrews, Exercise 2.12 (a) only, page 84.
Note: (1) For Exercise 2.10 (b), the assignment y = y – x is implemented by 4 atomic actions. (2) For both Exercises 2.10 and 2.12, you should give at least one scenario for each set of final values of x and y.
Problem 1-3. At-Most-Once Property (2 points)
Andrews, Exercise 2.14 (a) and (b), page 85.
Correction: Question (a) should ask "Do the assignments in the co-statement meet the requirements of the At-Most-Once Property (2.2)? Explain."
Problem 1-4. Await Statement (1.5 + 1 points)
Andrews, Exercise 2.13 (a), (b), and (c), page 85. Andrews, Exercise 2.18, page 86.
Note: A scheduling policy is weakly fair if every await statement that is eligible to be executed next will be executed eventually, assuming that its condition becomes true and then remains true.
Problem 1-5. Programming Assignment (1.5 points)
Implement your parallel algorithm for Andrews, Exercise 1.13 (a) in Java. Use threads to simulate multiple worker processes for parallel computation. Sample code for using multiple threads in Java can be downloaded from: http://www.cis.umassd.edu/~hxu/courses/cis481/s25/ProblemSet/PS1/SimpleTask.java.txt
General Remarks:
1.
Always provide your reasoning/explanations, and not only your final answers. 2. You may discuss assigned problems with your classmates, but you must individually write your own solutions/code for all assignments. 3. Assignments are to be submitted via myCourses by the due date.
Mathematics 5 Analytic Number Theory Spring 2025 Assignment 3 Please hand in by 12 noon on Friday, 14 March 1. Let χ be a Dirichlet character mod q and consider its theta function Note that χ(−1)2 = χ(1) = 1 and so χ(−1) ∈ {−1, 1}. We say χ is even if χ(−1) = 1. We say χ is odd if χ(−1) = −1. (a) Show that if χ is odd, then ϑ(t; χ) ≡ 0 and if χ is even, then (b) If χ is any Dirichlet character mod q, show that Hint: You may find the elementary inequaltiy ex − 1 ≥ x for all x > 0 useful. 2. Let χ be a an even Dirichlet character mod q. Recall the L function defined by χ is given by Show that for every n ≥ 1, Sum over n ≥ 1 to conclude that for Re z > 1, Hint: To justify interchanging the sum and integral, you can use the following analysis result : if where then For the remaining part of the assignment, we will assume that χ is an even Dirichlet character mod q whose theta function satisfies the following functional equation: where 3. Split the integral in (†) so that for Re z > 1, Use the functional equation (1) so show that for Re z > 1, Show that the sum of the integrals on the right defines an analytic function on the whole complex plane C. This gives the analytic continuation of L(z; χ). 4. Suppose that χ is an even Dirichlet character mod q where q is a prime. In this case, the sum cχ,1 in the red box satisfies (you may assume this). Apply the previous question to the even character χ to show that This is the functional equation for L(z; χ) when χ is an even character.
MIE 1622H: Assignment 2 – Risk-Based and Robust Portfolio Selection Strategies February 10, 2025 Due: Friday, March 7, 2025, not later than 11:59p.m. Use Python for all MIE1622H assignments. You should hand in: • Your report (pdf file and docx file). Maximum page limit is 8 pages. • Compress all of your Python code (i.e., ipynb notebook with plots and necessary outputs intact) into a zip file and submit it via Quercus portal no later than 11:59p.m. on March 7. Where to hand in: Online via Quercus portal, both your code and report. Introduction The purpose of this assignment is to compare computational investment strategies based on select- ing portfolio with equal risk contributions, using robust mean-variance optimization and applying passive investment strategy via benchmark tracking optimization. In Assignment 1, you have al- ready implemented and compared computational investment strategies based on minimizing port- folio variance, maximizing portfolio expected return, maximizing Sharpe ratio as well as “equally weighted” and “buy and hold” portfolio strategies. You are proposed to test nine strategies (use your implementation of strategies 1-5 from Assign- ment 1): 1. “Buy and hold” strategy; 2. “Equally weighted” (also known as “1/n”) portfolio strategy; 3. “Minimum variance” portfolio strategy; 4. “Maximum expected return” portfolio strategy; 5. “Maximum Sharpe ratio” portfolio strategy; 6. “Equal risk contributions” portfolio strategy; 7. “Leveraged max Sharpe Ratio” portfolio strategy; 8. “Robust mean-variance optimization” portfolio strategy. 9. “Benchmark tracking optimization” portfolio strategy. If a portfolio strategy relies on some targets, e.g., target return estimation error, you need to select those targets consistently for each holding period. It is up to you to decide how to select those targets. Feel free to use Python prototypes provided in class. Questions 1. 
(50 %) Implement investment strategies in Python: You need to test four portfolio re-balancing strategies: 1. “Equal risk contributions” portfolio strategy: compute a portfolio that has equal risk contributions to standard deviation for each period and re-balance accordingly. You can use IPOPT example from the lecture notes, but you need to compute the gradi- ent of the objective function yourself (to validate your gradient computations you may use finite differences method). The strategy should be implemented in the function strat_equal_risk_contr. 2. “Leveraged max Sharpe Ratio” portfolio strategy: take long 200% position in max Sharpe ratio portfolio and short risk-free asset for each period andre-balance accordingly, implement it in the function strat_lever_max_Sharpe. You do not have to include risk- free asset as an additional asset when calculating optimal positions. Just make sure that you subtract amount borrowed (including interest at the risk-free rate) when calculating portfolio value in a similar way as you do for transaction cost at each time period. 3. “Robust mean-variance optimization” portfolio strategy: compute a robust mean-variance portfolio for each period and re-balance accordingly. You can use CPLEX example from the lecture notes, but you need to select target risk estimation error and target return according to your preferences as an investor (provide justification of your choices). The strategy should be implemented in the function strat_robust_optim. 4. “Benchmark tracking” portfolio strategy: for each period, select a portfolio that aims to replicate or closely track the provided benchmark portfolio (S&P30). The S&P30 benchmark (index) portfolio is composed of the same 30 stocks used in this assignment, with each stock’s weight determined by its market capitalization relative to the total market cap of the 30 stocks. Asset weights in the benchmark portfolio wb are provided in the template code. 
Your optimization formulation should minimize tracking error (squared), which is defined as the difference between your portfolio’s variance of re- turns and the benchmark’s variance of returns, subject to standard portfolio constraints plus a cardinality constraint. The cardinality constraint should allow selection of at most ten stocks for your benchmark-tracking strategy. Implement this in the function strat_tracking_index. Design and implement a rounding procedure, so that you always trade (buy or sell) an integer number of shares. Design and implement a validation procedure in your code to test that each of your strategies is feasible (you have enough budget tore-balance portfolio, you correctly compute transaction costs, funds in your cash account are non-negative). There is a file portf optim2.ipynb on the course web-page. You are required to complete the code in the file and run your script for 3 CSV datasets explained below. Your Python code should use only CPLEX and IPOPT optimization solvers. For this assignment you need to use Google Colab and install the trial version of CPLEX (available on Colab) with !pip install cplex. To install IPOPT solver on Google Colab, please install necessary libraries as shown in ipopt_example .ipynb file from Lecture 1, and after that install cyipopt module with !pip install cyipopt. 2. (15 %) Analyze your results: • Produce the following output for the 12 periods (years 2020 and 2021): Period 1: start date 01/02/2020, end date 02/28/2020 Strategy "Buy and Hold", value begin = $ 1000016 .96, value end = $ 887595 .87, cash account = $0 .00 Strategy "Equally Weighted Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Minimum Variance Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Maximum Expected Return Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Maximum Sharpe Ratio Portfolio", value begin = . . . , value end = . 
. . , cash account = . . . Strategy "Equal Risk Contributions Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Leveraged Max Sharpe Ratio Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Robust Optimization Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Benchmark Tracking Portfolio", value begin = . . . , value end = . . . , cash account = . . . ... Period 12: start date 11/01/2021, end date 12/31/2021 Strategy "Buy and Hold", value begin = $ 964589 .81, value end = $ 942602 .39, cash account = $0 .00 Strategy "Equally Weighted Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Minimum Variance Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Maximum Expected Return Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Maximum Sharpe Ratio Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Equal Risk Contributions Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Leveraged Max Sharpe Ratio Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Robust Optimization Portfolio", value begin = . . . , value end = . . . , cash account = . . . Strategy "Benchmark Tracking Portfolio", value begin = . . . , value end = . . . , cash account = . . . • Plot one chart in Python that illustrates the daily value of your portfolio (for each of the nine trading strategies) over the years 2020 and 2021 using daily prices provided. Include the chart in your report. • Plot one chart in Python that illustrates maximum drawdown of your portfolio (for each of the nine trading strategies) for each of the 12 periods (years 2020 and 2021) using daily prices provided. Include the chart in your report. 
• Plot one chart in Python to show dynamic changes in portfolio allocations under strat- egy 8. In each chart, x-axis represents the rolling up time horizon, y-axis denotes portfolio weights between 0 and 1, and distinct lines display the position of selected assets over time periods. Does your robust portfolio selection strategy reduce trading as compared with strategies 3, 4 and 5 that you have implemented in Assignment 1? • Compare your “equal risk contributions”, “leveraged max Sharpe ratio”, “robust mean- variance optimization” and “benchmark tracking” trading strategies between each other and to five strategies implemented in Assignment 1 and discuss their performance relative to each other. Which strategy would you select for managing your own portfolio and why? 3. (20 %) Test your trading strategies for years 2008 and 2009: • Download daily closing prices for the same thirty stocks for years 2008 and 2009 (file adjclose 2008 2009.csv). If necessary, transform data into a format that your code can read. • Test nine strategies that you have implemented for the 12 periods (years 2008 and 2009). Use the same initial portfolio that you are given in Assignment 1. Produce the same output for your strategies as in Question 2. • Plot one chart in Python that illustrates the daily value of your portfolio (for each of the nine trading strategies) over the years 2008 and 2009 using daily prices that you have downloaded. Include the chart in your report. • Plot one chart in Python that illustrates maximum drawdown of your portfolio (for each of the nine trading strategies) for each of the 12 periods (years 2008 and 2009) using daily prices provided. Include the chart in your report. • Plot four charts in Python for strategies 3, 4, 5 and 8 to show dynamic changes in portfolio allocations using the new data set. Does your robust portfolio selection strategy reduce trading as compared with the strategies 3, 4 and 5? 
• Compare and discuss relative performance of your nine trading strategies during 2020- 2021 and 2008-2009 time periods. Which strategy would you select for managing your own portfolio during 2008-2009 time period and why? 4. (15 %) Test your trading strategies for year 2022: • Download daily closing prices for the same thirty stocks for year 2022 (file adjclose 2022.csv). If necessary, transform data into a format that your code can read. • Test nine strategies that you have implemented for the 6 periods (year 2022). Use the same initial portfolio that you are given in Assignment 1. Produce the same output for your strategies as in Question 2. • Plot one chart in Python that illustrates the daily value of your portfolio (for each of the nine trading strategies) over the year 2022 using daily prices that you have downloaded. Include the chart in your report. • Plot one chart in Python that illustrates maximum drawdown of your portfolio (for each of the nine trading strategies) for each of the 6 periods (year 2022) using daily prices provided. Include the chart in your report. • Compare the maximum drawdown for 2022 to the 2008-2009 maximum drawdown? How do the two periods called “recessions” compare? Python Code to be Completed (available on Quercus) # Import libraries import pandas as pd import numpy as np import math from scipy import sparse import matplotlib.pyplot as plt %matplotlib inline # Install CPLEX and IPOPT, if necessary . 
import cplex import cyipopt as ipopt # Complete the following functions def strat_buy_and_hold(x_init, cash_init, mu, Q, cur_prices): x_optimal = x_init cash_optimal = cash_init return x_optimal, cash_optimal def strat_equally_weighted(x_init, cash_init, mu, Q, cur_prices): return x_optimal, cash_optimal def strat_min_variance(x_init, cash_init, mu, Q, cur_prices): return x_optimal, cash_optimal def strat_max_return(x_init, cash_init, mu, Q, cur_prices): return x_optimal, cash_optimal def strat_max_Sharpe(x_init, cash_init, mu, Q, cur_prices): return x_optimal, cash_optimal def strat_equal_risk_contr(x_init, cash_init, mu, Q, cur_prices): return x_optimal, cash_optimal def strat_lever_max_Sharpe(x_init, cash_init, mu, Q, cur_prices): return x_optimal, cash_optimal def strat_robust_optim(x_init, cash_init, mu, Q, cur_prices): return x_optimal, cash_optimal def strat_tracking_index(x_init, cash_init, mu, Q, cur_prices): return x_optimal, cash_optimal # Input file input_file_prices = ’adjclose_2020_2021 .csv’ # Read data into a dataframe. 
df = pd.read_csv(input_file_prices) # Convert dates into array [year month day] # Convert dates into array [year month day] def convert_date_to_array(datestr): temp = [int(x) for x in datestr.split(’/’)] return [temp[-1], temp[0], temp[1]] dates_array = np.array(list(df[’Date’] .apply(convert_date_to_array))) data_prices = df.iloc[:, 1:] .to_numpy() dates = np.array(df[’Date’]) # Find the number of trading days in Nov-Dec 2019 and # compute expected return and covariance matrix for period 1 day_ind_start0 = 0 day_ind_end0 = len(np .where(dates_array[:,0]==2019)[0]) # for 2020-2021 csv #day_ind_end0 = len(np .where(dates_array[:,0]==2007)[0]) # for 2008-2009 csv #day_ind_end0 = len(np .where(dates_array[:,0]==2021)[0]) # for 2022 csv cur_returns0 = data_prices[day_ind_start0+1:day_ind_end0,:] / data_prices[day_ind_start0:day_ind_end0-1,:] - 1 mu = np.mean(cur_returns0, axis = 0) Q = np.cov(cur_returns0.T) # Remove datapoints for year 2019 data_prices = data_prices[day_ind_end0:,:] dates_array = dates_array[day_ind_end0:,:] dates = dates[day_ind_end0:] # Initial positions in the portfolio init_positions = np .array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3447, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 19385, 0]) # Initial value of the portfolio init_value = np.dot(data_prices[0,:], init_positions) print(’ Initial portfolio value = $ {} ’ .format(round(init_value, 2))) # Initial portfolio weights w_init = (data_prices[0,:] * init_positions) / init_value # Number of periods, assets, trading days N_periods = 6*len(np .unique(dates_array[:,0])) # 6 periods per year N = len(df.columns)-1 N_days = len(dates) # Annual risk-free rate for years 2020-2021 is 1.5% r_rf = 0 .015 # Annual risk-free rate for years 2008-2009 is 4.5% r_rf2008_2009 = 0 .045 # Annual risk-free rate for year 2022 is 3.75% r_rf2022 = 0 .0375 # Weights of assets in the benchmark portfolio S&P30 for years 2020-2021 w_b = np .array([0 .14832533, 0 .15556291, 0.01990254, 0.05079846, 0.01302685, 0.01030985, 
0.02252249, 0.00227124, . . .]) # Weights of assets in the benchmark portfolio S&P30 for years 2008-2009 w_b2008_2009 = np.array([0.04515391, 0.09628167, 0.00962156, 0.04751553, 0.02738386, 0.00612178, 0.03599232, . . .]) # Weights of assets in the benchmark portfolio S&P30 for year 2022 w_b2022 = np .array([0 .1994311, 0 .18518391, 0.05464191, 0.06021769, 0.00822679, 0.01771958, 0.01735487, . . .]) # Number of strategies strategy_functions = [’strat_buy_and_hold’, ’strat_equally_weighted’, ’strat_min_variance’, ’strat_max_return’, ’strat_max_Sharpe’, ’strat_equal_risk_contr’, ’strat_lever_max_Sharpe’, ’strat_robust_optim’, ’strat_tracking_index’] strategy_names = [’Buy and Hold’, ’Equally Weighted Portfolio’, ’Mininum Variance Portfolio’, ’Maximum Expected Return Portfolio’, ’Maximum Sharpe Ratio Portfolio’, ’Equal Risk Contributions Portfolio’, ’Leveraged Max Sharpe Ratio Portfolio’, ’Robust Optimization Portfolio’, ’Benchmark Tracking Portfolio’] N_strat = 1 # comment this in your code #N_strat = len(strategy_functions) # uncomment this in your code fh_array = [strat_buy_and_hold, strat_equally_weighted, strat_min_variance, strat_max_return, strat_max_Sharpe, strat_equal_risk_contr, strat_lever_max_Sharpe, strat_robust_optim, strat_tracking_index] portf_value = [0] * N_strat x = np.zeros((N_strat, N_periods), dtype=np.ndarray) cash = np.zeros((N_strat, N_periods), dtype=np.ndarray) for period in range(1, N_periods+1): # Compute current year and month, first and last day of the period # Depending on what data/csv (i .e time period) uncomment code if dates_array[0, 0] == 20: cur_year = 20 + math.floor(period/7) else: cur_year = 2020 + math.floor(period/7) # example for 2008-2009 data #if dates_array[0, 0] == 8: # cur_year = 8 + math.floor(period/7) #else: # cur_year = 2008 + math.floor(period/7) cur_month = 2*((period-1)%6) + 1 day_ind_start = min([i for i, val in enumerate((dates_array[:,0] == cur_year) & (dates_array[:,1] == cur_month)) if val]) day_ind_end = 
max([i for i, val in enumerate((dates_array[:,0] == cur_year) & (dates_array[:,1] == cur_month+1)) if val]) print(’ Period {0}: start date {1}, end date {2}’ .format(period, dates[day_ind_start], dates[day_ind_end])) # Prices for the current day cur_prices = data_prices[day_ind_start,:] # Execute portfolio selection strategies for strategy in range(N_strat): # Get current portfolio positions if period == 1: curr_positions = init_positions curr_cash = 0 portf_value[strategy] = np.zeros((N_days, 1)) else: curr_positions = x[strategy, period-2] curr_cash = cash[strategy, period-2] # Compute strategy x[strategy, period-1], cash[strategy, period-1] = fh_array[strategy](curr_positions, curr_cash, mu, Q, cur_prices) # Verify that strategy is feasible (you have enough budget to re-balance portfolio) # Check that cash account is >= 0 # Check that we can buy new portfolio subject to transaction costs ###################### Insert your code here ############################ # Compute portfolio value p_values = np.dot(data_prices[day_ind_start:day_ind_end+1,:], x[strategy, period-1]) + cash[strategy, period-1] portf_value[strategy][day_ind_start:day_ind_end+1] = np.reshape(p_values, (p_values.size,1)) print(’ Strategy "{0}", value begin = $ {1: .2f}, value end = $ {2: .2f}, cash account = ${3: .2f}’ .format( strategy_names[strategy], portf_value[strategy][day_ind_start][0], portf_value[strategy][day_ind_end][0], cash[strategy, period-1])) # Compute expected returns and covariances for the next period cur_returns = data_prices[day_ind_start+1:day_ind_end+1,:] / data_prices[day_ind_start:day_ind_end,:] - 1 mu = np.mean(cur_returns, axis = 0) Q = np.cov(cur_returns.T) # Plot results ###################### Insert your code here ############################ Sample Python Function for Trading Strategy def strat_buy_and_hold(x_init, cash_init, mu, Q, cur_prices): x_optimal = x_init cash_optimal = cash_init return x_optimal, cash_optimal
Pharmacology and Pharmaceutical Marketing -- Final Exam March 4, 2025 (Each question is worth 10 points; be specific and brief) 1. Define the following pharmacological concepts and give a medical-marketing example of how each has been used in the pharmaceutical industry to drive product differentiation: efficacy, potency, specificity, selectivity, sensitivity. 2. What pharmacological concepts did the PDE-5 inhibitors (e.g., Levitra, Viagra, Cialis) use to differentiate themselves in a crowded, 3-way competitive market and how were they applied? 3. What are the key pharmacological concepts (eg, PD/PK/side effects) used to differentiate the long-acting basal insulin preparations (eg, Lantus vs. Levemir)? Describe how these concepts were applied. 4. Explain the physiological rationale for the development of COX-2 specific inhibitors (eg, what was the unmet need they were trying to address) and how VIOXX and Celebrex utilized this pathway to differentiate from each other at launch to gain market share. 5. Describe the pharmacological basis for the launch strategy used to differentiate teriflunomide (Aubagio) from other new agents in multiple sclerosis? Include MOA, PK, brain volume loss, tolerability and safety issues in your response. 6. What are the key pharmacological and clinical differences between the pneumococcal conjugate vaccines -- Prevenar 7, Synflorix and Prevenar 13? How were these differences used and what methods were employed to drive product differentiation worldwide in invasive pneumococcal disease? 7. What were the key differences between the HPV vaccines Cervarix and Gardasil and how were they used to differentiate themselves in the introduction of these vaccines? What were some barriers to HPV vaccine uptake? 8. What was the basis for Vesicare’s claim that it was a superior anticholinergic agent for overactive bladder (OAB) compared to the other agents in this crowded area? Do you feel it was a credible argument? 9. 
How did Nexium build its case that it wasn’t the same as Prilosec in the treatment of GERD & erosive esophagitis and the prevention of adenocarcinoma of the esophagus and was worth a premium (eg, non-generic) price? What did this example teach you about medical-marketing strategy? 10. Any suggestions to improve the course for next time? Also, please post a review of the course on www.ratemyprofessors.com so future students can better determine if this course is suitable for them.
ECE-242 Applied Feedback Controls Homework 7 Winter 2025 Background Initially, this is the point in the class where I would send you into the lab to learn how to control a real mechanical experiment (inverted pendulum, three disk set, caged fan, etc.). The problem here is that the hardware has gotten so good, that essentially these now all boil down to playing with Simulink and MATLAB. If I put you on bad hardware, then you spend most of your time debugging the bad hardware (something which I am sure you are all going to do later in your careers anyway). So, instead, I’ve got a simulink model for you to play with. This is a full, realistic control design exercise—it is open ended, the more you put in, the more you get out. Figure 1: Odyssey Spacecraft 0 Attitude Stabilization of a 1-D satellite Very often in satellite/space ship design, you wind up putting the big heavy things (engines, etc) on one end, and the sensitive sensors on the other end, and because you don’t have to support the weight (microgravity environment), you use a slender flexible truss to move them apart. We model the torsion stages as three inertias connected by two torsional springs. To make matters more difficult, the thrusters that can torque the satellite are connected to the aft stage, not to the front where the sensors are. You do get a star tracker on each stage which gives you very high quality rotational attitude information on the fore and aft stages (but not in the middle). 1 Collocated Control The rotational dynamics of the entire satellite system, aft thrusters to aft angle, can be represented by the following transfer function (this is the easy one): where: Now, having done our previous simple designs, let’s explore the design space a bit: a) Go back to your design D1 (s), from HW 6(a), and re-implement it discretely (with Tustin) using slower and slower sample rates. Remember, D1 (z) will change every time you change the sample rate. 
How slow can you go before performance breaks down? b) Simulate your designs using the simulink model DiscoveryAftDiscrete.mdl from the CANVAS website, and overplot the step responses against your original continuous time response. Use either stair or stem command to plot the digital response. c) Explain why the performance degrades as the sample time gets slower. Be specific, calculate the actual phase loss. d) Compute the ZOH equivalent of the transfer function, Gaft (s), and try to design D1 (z) directly in the z-domain. Compare the performance against the D(z)’s that you got above in part (a). You should be able to achieve good performance at much lower sample rates. 2 Non-Collocated Control When you did the control above, the output sensor and the actuator were in the same place. This is called collocated control. You should have observed that the high frequency poles tended to have zeros close by that caused the poles to move in favorable (stabilizing) directions when you cranked up the gain. This is typically the case for collocated control. What happens when the sensor and the actuator are not in the same place? This is non-collocated control, and it gets harder. The rotational dynamics of the entire satellite system, aft thrusters to fore angle, can be represented by the following transfer function (this is the hard one): where: Again, revisiting the controller design that we did in HW 6: a) Design a new compensator, D3 (s), that uses a lead and notch for the non-collocated system. The notch zeros should be “near” the poles of the first resonant mode of the uncompensated system (i.e.: near ωp1), but do not cancel them. Experiment (in MATLAB) with where to put them, remembering that the exact location of these poles are unknown. Try to be robust. Also, experiment with where to place the poles of the notch, and see how that affects the system. 
See if you can match the performance of the system in 6− 1(a), i.e.: closed-loop bandwidth of 2π and a phase margin of 55。 (you might not be able to make it). b) Use DiscoveryFore.mdl to simulate the system with your new controller. c) Once you are happy with your design for D3 (s), go ahead and digitize it using Tustin for a range of sample rates. Again, see how slow you can go and still get decent performance. Simulate using DiscoveryForeDiscrete.mdl, and show your results (clearly and well reasoned, please). d) Redesign a new compensator, D4 (s), using the linear phase offset to compensate for the sample delay, and see how much better you do than in part (c). e) Optional: Convert Gfore (s) into Gfore (z) using a ZOH transformation (use MATLAB), and redesign in the z-domain. See what kind of performance you can achieve in terms of bandwidth and phase margin, along with how slow you can sample. Note: use anything that MATLAB has to offer here, and use this “lab” as an opportunity to stretch yourself in terms of understanding the material. While this is still a fairly canned exercise, it is much closer to the real world than anything else you have done. See how robust your designs are, what happens if you are off on your pole locations (if you really want to be adventurous, do a root locus based on a scaling of the pole locations, and see what happens). Again, the more you put into this, the more you will get out of it. Don’t fail your other classes, but definitely use this opportunity to learn as much as you can. Review Take the time to play with this system, and see what you can get it to do. You are free to design more complex controllers, and get a feel for controlling a more difficult system.
Coursework EEE109 February 25, 2025 1 Question 1 (4 Marks) Consider the Zener Diode circuit shown in Figure 1. Consider the following parameters: VI = 20V, VZ = 10V, Ri = 220Ω and PZ (max) = 440mW.(It is recommended that the calculation process be made to one decimal place.) (a) If the load resistor is RL = 380Ω . Please calculate the load current IL , zener diode current IZ , and input current II (2 marks) (b) Determine the value of RL that will establish PZ (max) (2 marks) Figure 1: zener diode circuit 2 Question 2 (10 Marks) The input voltage source is a square wave, as shown in Figure 2. Figure 2: input voltage source (a) Plot the waveform of output voltage Uo in circuit shown in Figure 3. Assume the turn-on voltage of the diode is Uγ = 0.6V. Please mark the maximum and minimum values on your figure. (5 marks) (b) Plot the waveform of output voltage Uo in circuit shown in Figure 4. Assume the turn-on voltage of the diode is Uγ = 0.6V. Please mark the maximum and minimum values on your figure. (5 marks) Figure 3: circuit I Figure 4: circuit II 3 Question 3 (10 Marks) The two diode circuit is shown in Figure 5. Calculate the output voltage vo and the diode current ID1 and ID2 for the following voltage conditions: Vγ = 0.6V and rf = 0Ω Figure 5: two diodes circuit (a) v1 = 10V and v2 = 0V (5 marks) (b) v1 = v2 = 0V (5 marks) 4 Question 4 (10 Marks) A full wave rectifier circuit with battery charging is shown in Figure 6. Assume VB = 9V,Vγ = 0.7V and vs = 15sin[2π (60)t](V).Assume rf=0Ω Figure 6: full wave rectifier circuit with battery charging (a) Determine the resistance of R such that the peak battery charging current is 1.2A (2 marks) (b) Determine the average battery charging current (use the resistance value in (a)) (4 marks) (c) Determine the fraction of time that each diode is conducting (4 marks) 5 Question 5 (6 Marks) An NPN transistor with β = 80 is connected in a common-base configuration, as shown in Figure 7. 
The emitter is driven by a constant-current source with IE = 1.2mA. Determine the value of IB , IC , α and VC Figure 7: npn with common base configuration 6 Question 6 (10 Marks) Consider the circuit shown in Figure 8, VEB (on) = 0.7V. Use the Thevenin Equivalent Circuit to solve the following questions. Figure 8: BJT circuit (a) Please determine the value of RTH , VTH , IBQ , ICQ , and VECQ for β = 90 (5 marks) (b) Determine the percent change in ICQ and VECQ if for β is changed to for β = 150 (5 marks) 7 Question 7 (14 Marks) For the common-gate circuit in Figure Q7, theNMOS transistor parameters are: VTN = 1 V, kn = 3 mA/V2 , and λ = 0. Assume RS = 10 kΩ, RD = 5 kΩ and RL = 4 kΩ . Capacitors can be treated as short circuits in ac analysis. Figure Q7 (1) Determine the Q-point of the transistor, values of IDQ and VDSQ. (6 marks) (2) Draw the small-signal equivalent circuit. (4 marks) (3) Find the small-signal voltage gain Av. (4 marks) 8 Question 8 (16 Marks) Consider the circuit in Figure Q8, the transistor parameters are β = 100, VBE(on) = 0.7 V, VT = 0.026 V and VA = 100 V. The circuit parameters are R1 = 27kΩ, R2 = 15kΩ, Rc = 2.2kΩ, RE = 1.2kΩ, Rs = 10kΩ, RL = 2kΩ, and Vcc = 9V. Capacitors can be treated as short circuits in ac analysis. Figure Q8 (1) Find the Q-point of the transistor in dc analysis. (5 marks) (2) Draw the small signal equivalent circuit and determine the input resistance. (8 marks) (3) Find the small signal voltage gain Av. (3 marks) 9 Question 9 (20 Marks) You are required to design a MOSFET amplifier circuit for a telephone circuit with amidband frequency range of 300 Hz to 2 kHz. The desired magnitude of the midband voltage gain is 15. Assume: VTN = 1V, IDQ = 0.2 mA, λ = 0 , VDSQ = 5 V, Rsi = 0, R1 + R2 = 200 kΩ . Calculate the required values for the resistors R1, R2, RD and capacitors CC , CL , and the transistor parameter kn. Figure Q9
COMP1005 B Winter 2025 – “Introduction to Computer Science I” COMP1005 B Assignment #3 Functions, Strings, Files Overview In this assignment, you will demonstrate your understanding of (in addition to previous topics): • Reading and writing text files in Python • Performing string operations (like strip (), split ()) • Appending data to lists and checking if elements are in lists using in • Defining and using your own functions which accept parameters and return a value This assignment has 40 marks and 4 bonus marks available. Submission Requirements Submit a single zip file with the name comp1005__a3 .zip (comp1005 or comp1405 both acceptable) to Brightspace. This zip file should contain two Python files named assignment3 .py and a3_tester .py, both of which are initially provided to you on Brightpace in comp1005-a3-starter .zip, and any flowcharts completed for bonus marks. Standard Assignment Policies Late Submissions: Submissions can be accepted late without penalty until Sundays at 11:59PM, but later submissions will not be accepted. Support will not be provided during this late period. Technical issues near the deadline are not grounds for extension. You are expected to submit frequently throughout the time the submission is available and use the School of Computer Science computer labs if necessary. Incorrect submissions are not grounds for extension. It is your responsibility to download and review your submission prior to the submission cut-off period and verify that you have followed the submission requirements. Review the course outline for more information Academic Misconduct Submissions will be automatically compared to each other and then any suspiciously similar code will be manually reviewed. Any code that seems suspiciously similar will be forwarded to the Dean’s office for further investigation. You are not permitted to collaborate with other students by sharing rough work or code. 
You are not permitted to use chat or code-generative AI to assist with creating rough work, written explanations, or code. In no way should you pass off the work of others as your own or assist others in doing the same. Review the course plagiarism policy for further information.

Note: Use of official documentation or the course textbook and course notes is permitted. Simple citations, such as links or references to the course textbook or official Python documentation, are advised if any work is copied from these sources.

Invalid Submissions

Submissions containing incorrect packaging, naming, or file types will receive an approximately 10% penalty with no exceptions. To dispute a grade once grades are released, you have seven days to contact the TA who marked your work with information about what was marked incorrectly.

Assignment 3 Specification | Due Fri, Mar. 07 @ 17:00

Job Posting (Assignment Overview)

Note: This is not a real job posting but optional context for the assignment. Feel free to skip this if you would prefer to focus on the technical requirements.

Job Title: Paranormal Instrument Software Designer
Duties: Developing the software for ghost hunting equipment
Requirements: Experience with lists, strings, reading and writing files, and defining functions
Description: We're the Raven Ghost Hunting Society (RGHS) and we're looking for someone to help automate the analysis of our sensor readings. We go to abandoned houses and take sensor readings with various devices to try and detect ghosts! Our sensors output their data to files, and each sensor uses a slightly different file format. We need you to write a program that can read sensor data from our three different sensors, process that data, and output whether or not it's abnormal. If a room has abnormal readings in more than one sensor, we're DEFINITELY dealing with some kind of entity!
We previously hired an intern to write some testing code, but they were so terrified of the thought of slightly cold temperatures and wifi signal interference that they ran off before finishing anything, so you have somewhere to get started. Good luck!

Provided Code

You should begin by downloading and extracting the files in comp1005-a3-starter.zip. It contains:
• assignment3.py: This is the starting point for writing your code. A few of the functions have been set up to get you started; they use type hints and are documented.
• a3_tester.py: You do not need to read this code. It contains some testing functions which you (and the TAs) can use to test how your code is doing as you write it. As long as the test text files and the Python files are all in the same directory, you can run this Python program to see how things are doing.
• ravensnest.*.txt: There are three "ravensnest" files which can be used for some simple testing of your program to see how it works in context. This is what your assignment3.py should test with.
• test.*.txt: There are three "test" files which are used by the a3_tester.py program. They contain a bunch of tests, with locations specifically named after different situations where your code might fail.
• report_test.*.txt: There are three "report_test" files which are used by a3_tester.py to test correct generation of reports.

Functions to Write

A fully completed assignment3.py should include the following functions, all with the correct naming, returning the correct return values, and accepting the correct parameters:
1. main(), which returns nothing and handles the main control flow of your program. It should be quite simple and run your functions with the ravensnest location data.
2. read_motion(), read_emf(), and read_temperature(), which will all open files based on the provided location name and return a list of "abnormal" rooms. The start of these has been provided.
3.
get_unique__rooms(), which accepts three lists of strings containing the rooms that were considered "abnormal" by the previous three functions and returns a list of the rooms that appeared in at least one of those lists, without duplicates.
4. generate_report(), which accepts a location name as a string and three lists of strings (the rooms that were considered "abnormal" by each sensor function) and returns nothing. It identifies which "ghosts" must have appeared in each room in the location and writes this report to a file.

Bonuses Available

Reminder: The maximum mark for the overall assignment portion (all assignments combined) of your grade will not exceed 100%, but bonus marks on each assignment can help bring up lower marks on other assignments.

Flowcharts (2 marks)

To begin, you should try to make a plan for how you will analyze each problem. You can receive 2 bonus marks for including a flowchart for each of read_motion(), read_emf(), and read_temperature() that shows your plan for reading the data in each room and storing the appropriate list of rooms. The flowcharts do not need to be perfect; just sketch your control flow in a way the TA (and you!) can follow. Include these flowcharts as one or more PDF, PNG, or JPG files in your zip submission.

Read Ahead: Exception Handling (2 marks)

When working with files, it's very easy for things to go wrong: incorrect file names, not being allowed to open a file, or already having a file open, to name a few. Exceptions are signals that some situation cannot be easily handled. Chapter 13 of the textbook discusses exceptions using try..except. This tool lets us run code that could go wrong and, rather than crashing our program, continue gracefully. In your main() function, allow the user to specify which location they want to investigate using input() rather than hardcoding the filename.
Then, in each of your read functions (read_motion(), read_emf(), read_temperature()), use try and except when opening the file so that if an invalid location is given and the file cannot be opened, the function prints an appropriate error message and returns an empty list of rooms instead of crashing. For marks, the TA must be able to input the names of valid and invalid locations and, for invalid locations, get a message telling them that the file cannot be found; try/except must be used in those three functions.

Problems 1-3 Overview

Problems 1-3 are very similar in their structure. To avoid repetition, here are a few notes which apply to them all:
1. Each function will read data from a text file and return a list of rooms considered "abnormal".
2. Each function will accept a location name as a string and open a file based on that location. For example, for the location "house" passed into the parameter location_name:
   a. Motion data will be found in house.motion.txt
   b. EMF data will be found in house.emf.txt
   c. Temperature data will be found in house.temp.txt
3. A room should appear at most once in the returned list.
4. Each function will need to loop, reading data in line by line, until it reaches the end of the file.
5. Update your main() function to store the list returned by each of these when calling them with the argument "ravensnest" to get sample data. You can print it out to test if it worked.
6. Running a3_tester.py will run the test suite, with lots of different pieces of test data, to give you an idea of how your functions are going.

Use the documentation in the provided code to fill in any knowledge gaps left by the problem descriptions.

Helpful Tools and Tips: Review the final page for extra information that might be helpful for this assignment.

Problem 1: Motion Sensor

Submission: Update the function read_motion(location_name) in assignment3.py and include this and any bonus flowcharts in your zip file.
Context: The motion sensor occasionally turns on to see if anything is moving in the room and reports "detected" if it notices motion or "undetected" if nothing moved.

Abnormal: Abnormal readings are readings where motion is detected at least once.

File Format: Each line contains the name of a room followed by a colon, a space, and either detected or undetected.

Example File: house.motion.txt
01 bathroom: undetected
02 living room: undetected
03 living room: detected
04 bathroom: undetected
05 kitchen: detected
06 kitchen: undetected

Example Expected Result:
read_motion("house") # Returns: ["living room", "kitchen"]

Process: Loop until reaching an empty line. In each loop, use split() to help determine whether the room should be appended to the rooms list that you are returning. Make sure not to append the room if the rooms list already contains it.

Problem 2: EMF Sensor (Electromagnetic Fields)

Submission: Update the function read_emf(location_name) in assignment3.py and include this and any bonus flowcharts in your zip file.

Context: An EMF sensor detects electromagnetic field fluctuations in an area, such as those produced by cellphones, radios, or power lines. We will measure this on a scale of 0 to 5.

Abnormal: Any room that has an average EMF reading greater than or equal to 3 will be considered "abnormal" … for some reason.

File Format: A line contains either a room name or an EMF reading, as a whole integer, for the most recently written room.

Example File: house.emf.txt
01 bathroom
02 1
03 2
04 1
05 living room
06 5
07 4
08 2

Example Expected Result:
read_emf("house") # Returns: ["living room"]

Process: Design your own approach for this, but here are a few useful tips:
• Strings have the .isdigit() method, which returns True if the string consists only of digits.
  E.g., "4".isdigit() == True and "cat".isdigit() == False.
• Average: One method to take an average reading for each room is to start with a count and a total of 0, then add each reading to the total and increase the count by 1 each time you get a reading. When finished with that room, get the average by dividing the total by the count.
• Rolling Average: Alternatively, you can maintain a "rolling" average that updates with each new value x, with avg_new = avg_old + (x − avg_old)/n, where n is the total number of values so far.
• You will likely need some variables to track what the current room is, what the current count and average are, and possibly the current total.
• Remember: Depending on your method, make sure that you check the average of the final room before returning from your function! It's surprisingly easy to miss.

Problem 3: Temperature Sensor

Submission: Update the function read_temperature(location_name) in assignment3.py and include this and any bonus flowcharts in your zip file.

Context: Temperature is tracked in celsius with real numbers. Anything below 0 is considered freezing, where water freezes.

Abnormal: According to this club, any room that consistently reads temperatures under 0.0 for 5 readings in a row is considered to be an abnormal room. Who isn't afraid of the cold, I guess?

File Format: This data is stored as "Comma Separated Values" (CSV), a very common format. On each line, you will find the room name, a timestamp, and a temperature, separated by commas. The readings for a room will always be one after another.

Restriction: You are not permitted to use additional modules, such as the csv module, to assist with parsing this data.
Example File: house.temp.txt
01 bathroom,1700001,25.3
02 bathroom,1700002,25.1
03 bathroom,1700003,24.3
04 bathroom,1700004,26.1
05 bathroom,1700005,-10.4
06 bathroom,1700006,25.3
07 bathroom,1700007,24.8
08 garage,1701200,30.1
09 garage,1701201,20.5
10 garage,1701202,-5.1
11 garage,1701203,-2.4
12 garage,1701204,-10.1
13 garage,1701205,-5.0
14 garage,1701206,-20.1
15 garage,1701207,-9.1
16 garage,1701208,10.4
17 garage,1701209,26.5
18 kitchen,1700030,25.0
19 kitchen,1700031,24.5
20 kitchen,1700032,-10.4
21 kitchen,1700033,30.4
22 kitchen,1700034,28.7

Example Expected Result:
read_temperature("house") # Returns: ["garage"]

Process: Use split() to get the name of the room and the temperature on each line until you reach the end of the file. You will need to keep track of when you switch from one room to the next, similar to your EMF implementation. You will also need a variable to keep track of how many negative readings in a row you have read, and remember to reset your counter when you start looking at a new room!

Problem 4: Generate a Report

Submission: You will need to create two new functions in assignment3.py, get_unique__rooms() and generate_report(), as described below. Include documentation for these. You do not need to include type hints (e.g., motion: list[str] or -> str), but you can if you wish.

Context: According to this ghost hunting club, you can identify what type of "ghost" you are dealing with by looking at combinations of evidence; for example, they say that if it is cold and there is some electrical interference, it must be a "phantom" … I don't know. Either way, a job is a job.

Helper Function: First, we will make ourselves a helper function. A helper function separates out a small piece of functionality to improve the readability of our code, even if it is used by just a single function.
Define a function called get_unique__rooms(motion, emf, temperature) that accepts three lists of strings and returns a list of strings. It will look at each list of rooms and return a new list of all rooms that appeared in any of the input lists. This will be helpful when we are generating our report afterwards. The function can start by making an empty list, then loop through each list of rooms passed as an argument, appending each room to the new combined list if it hasn't already been added.

Generate Report: Define a function called generate_report(location_name, motion, emf, temperature) that does not return anything. motion, emf, and temperature are lists of abnormal rooms, the ones that we get by calling read_motion(), read_emf(), and read_temperature(). It will compare the rooms in each list, identify any "ghosts" based on the criteria below, and write these results to a file. The file must be named {location_name}.report.txt, replacing {location_name} with the location name that was passed as an argument.

Evidence Criteria:
• Motion + EMF + Temperature = Poltergeist
• Motion + EMF = Oni
• Motion + Temperature = Banshee
• EMF + Temperature = Phantom

Each ghost found gets its own line in the report, as in the example below (extra text is not needed).

Example Result: house.report.txt
01 == Raven Ghost Hunting Society Haunting Report ==
02 Location: house
03 Found Poltergeist in closet
04 Found Oni in bathroom

Process: Use your get_unique__rooms() function to get a list of all of the rooms that could possibly show up in the motion, EMF, or temperature data. Then loop through those rooms and check whether each room can be found in the different evidence lists. If it meets the criteria, add the message to the report, and write that report to a file.

NOTE: Your program must handle multiple runs, producing a fresh report each time. That is to say, you must either clear the report before appending to it, or write all of the data to the report at once.
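The helper and report logic described above could take roughly the following shape. This is a rough sketch under the spec's function names, not the required implementation; the loop structure and the write-everything-at-once strategy are just one way to meet the criteria.

```python
def get_unique__rooms(motion, emf, temperature):
    # Combine the three lists, keeping each room only once.
    rooms = []
    for sensor_list in (motion, emf, temperature):
        for room in sensor_list:
            if room not in rooms:
                rooms.append(room)
    return rooms

def generate_report(location_name, motion, emf, temperature):
    # Check each room against the evidence criteria, one report line per finding.
    lines = []
    for room in get_unique__rooms(motion, emf, temperature):
        m, e, t = room in motion, room in emf, room in temperature
        if m and e and t:
            lines.append("Found Poltergeist in " + room)
        elif m and e:
            lines.append("Found Oni in " + room)
        elif m and t:
            lines.append("Found Banshee in " + room)
        elif e and t:
            lines.append("Found Phantom in " + room)
    # Writing the whole report at once in "w" mode means repeated runs
    # always produce a fresh file, as the NOTE above requires.
    with open(location_name + ".report.txt", "w") as f:
        f.write("== Raven Ghost Hunting Society Haunting Report ==\n")
        f.write("Location: " + location_name + "\n")
        for line in lines:
            f.write(line + "\n")
```

Note how writing in "w" mode satisfies the multiple-runs requirement without any explicit clearing step.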
Marking Scheme

This marking scheme is subject to change if unexpected cases arise that need to be addressed to meet the outcomes discussed in the specification, or if major requirements are bypassed. Execution marks can only be awarded for code that we can observe running. We may execute a different version of the tester than the one you were provided, containing additional test cases; the provided tester is guidance only.

Problem 1 (7 Marks):
• Execution: Tested by running code and observing output
  ‣ 1: Has results, and no duplicates in result
  ‣ 3: Result contains only, and all of, the correct outputs
• Approach: Tested by reviewing code
  ‣ 1: Correctly opens/closes the file
  ‣ 2: Correctly loops through every line in the file

Problem 2 (10 Marks):
• Execution: Tested by running code and observing output
  ‣ 1: Has results, and no duplicates in result
  ‣ 6: Result contains only, and all of, the correct outputs
• Approach: Tested by reviewing code
  ‣ 1: Correctly opens/closes the file
  ‣ 1: Correctly loops through every line in the file
  ‣ 1: Distinguishes room names from readings

Problem 3 (11 Marks):
• Execution: Tested by running code and observing output
  ‣ 1: Has results, and no duplicates in result
  ‣ 6: Result contains only, and all of, the correct outputs
• Approach: Tested by reviewing code
  ‣ 1: Correctly opens/closes the file
  ‣ 1: Correctly loops through every line in the file
  ‣ 2: Function is well documented

Problem 4 (12 Marks):
• Execution: Tested by running code and observing output
  ‣ 2: get_unique__rooms correctly identifies rooms from all lists without duplicates
  ‣ 2: Correctly writes to a new file with the correct filename
  ‣ 3: Correctly identifies each kind of issue without any false positives in the report
• Approach: Tested by reviewing code
  ‣ 1: Correctly opens/closes the file
  ‣ 2: Does not duplicate logic intended for the get_unique__rooms() function
  ‣ 2: Function is well documented

Deductions: In addition to standard packaging deductions, you may receive deductions for, at
minimum, using incorrect function names, including print statements in functions that should not print, accepting incorrect parameters, returning incorrect values, or other similar deviations from the specification.

Helpful Tips and Information

Reminders
• Element in a list: You can check if a value is in a list using the in keyword, e.g., "a" in ["a", "b", "c"] is True.
• End of a file: When a file no longer has any lines to read, readline() will return "".
• Lines from a file: readline() will often include spaces and newline characters, so using .strip() can be very helpful.
• String functions: You will likely need .strip(), and .split() will surely be helpful as well.

A Different File I/O Syntax

We did not discuss this in class, but there is a helpful alternative syntax for opening files that you are permitted to use and that is generally considered the "correct" syntax. It is discussed briefly in Chapter 11.7 of the course textbook.

The Problem: Having to always remember to close an open file can lead to problems. While we could blame the programmer for this mistake, in programming we usually try to make sure that important functionality occurs whether somebody makes a mistake or not.

The Solution: Context managers. The specifics of context managers are not discussed much when learning Python, but they are used in a few specific circumstances, working with files being one of the most common. Python provides a special keyword structure: with..as. Similar to how a for loop abstracts some of the counters away from us to simplify our code and reduce errors, the with keyword hides some of our open/close logic. It is much safer to use, because even if an error occurs while working with the file, the with keyword makes sure the file is closed for us.
Here is an example of working with a file using with:

def main():
    filename = "my_file.txt"

    my_text = ""

    # Stores the open file reference as "f" while in this with block
    with open(filename, "r") as f:
        my_text = f.read()

    print(my_text)

Using with is definitely best practice when working with files and is encouraged, but not strictly required.
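The exception-handling bonus described earlier combines naturally with this syntax. Below is a minimal sketch of what read_motion() might look like with both, assuming the Problem 1 file format; the exact error message wording is illustrative, not prescribed, and catching OSError covers the missing-file case.

```python
def read_motion(location_name):
    # Return the list of rooms with at least one "detected" reading,
    # or an empty list (with a message) if the file cannot be opened.
    rooms = []
    try:
        with open(location_name + ".motion.txt", "r") as f:
            for line in f:
                if not line.strip():
                    continue  # skip blank lines at the end of the file
                room, reading = line.strip().split(": ")
                if reading == "detected" and room not in rooms:
                    rooms.append(room)
    except OSError:
        print("The file for location '" + location_name + "' cannot be found.")
    return rooms
```

Because the with block sits inside the try, the file is closed whether the read succeeds or an error occurs partway through.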