PSTAT 173 FINAL EXAM RISK THEORY MARCH 14, 2022

Problem 1. Let X ~ Gamma(α = 2, θ = 2). Compute:
(1) VaR0.975(X)
(2) e_X(9.488), the mean excess loss at 9.488
(3) TVaR0.975(X) and TVaR0.95(X)

Problem 2. A company insures a fleet of vehicles. Aggregate losses have a compound Poisson distribution. The expected number of losses is 50, and the amount of each loss is assumed to be exponential with parameter θ = 2000. We modify this coverage in the following ways:
(1) A deductible of 100 is imposed
(2) It can be assumed that 10% of claims will not be covered (i.e., a benefit payment will not be made to the policyholder in these cases)
What is the expected amount paid by the insurer?

Problem 3. A towing company provides all towing service to members of an Automobile Club. You are given:

Towing Distance | Towing Cost | Frequency
0-4.99 miles | 100 | 40%
5-14.99 miles | 150 | 40%
15-29.99 miles | 200 | 15%
30+ miles | 250 | 5%

With the following stipulations:
(1) The automobile owner must cover 10% of the towing cost; the rest is covered by the Club
(2) The number of towings is Geometric(β = 50) (use the Appendix parameterization)
(3) The number and cost of towings are independent
Using a normal approximation, what is the minimum amount the Club will need to set aside to cover all claims with probability at least 0.9?

Problem 4. You are given:
(1) Losses follow an exponential distribution with the same mean every year
(2) The Loss Elimination Ratio this year is 55%
(3) The ordinary deductible in the upcoming year is 3/2 the current deductible
Calculate the Loss Elimination Ratio for the upcoming year.

Problem 5. The random variable for a loss X has the following characteristics:

x | F(x) | E(X ∧ x)
0 | 0 | 0
20 | 0.3 | 121
50 | 0.8 | 355
150 | 1.0 | 425

Calculate the mean excess loss for a deductible of 25 using linear interpolation for E[X ∧ 25] and F(25) (i.e., F(25) lies on the straight line connecting F(20) and F(50)). Hint: The CDF value at x = 150 should make E[X] easy to compute.

Problem 6. You are given:
(i) Losses follow a single-parameter Pareto distribution with density function:
f(x) = α / x^(α+1), x > 1 (and 0 otherwise)
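The quantities in Problem 1 are quick to sanity-check numerically. Below is a minimal Python sketch (not part of the exam) assuming the Loss Models parameterization Gamma(α = 2, θ = 2) maps to scipy's shape a = 2 and scale = 2, and using the definitions VaR_p = F⁻¹(p), e_X(d) = E[X − d | X > d], and TVaR_p = VaR_p + e_X(VaR_p).

```python
# Numeric cross-check for Problem 1; assumes scipy's (shape, scale) matches
# the Loss Models (alpha, theta) parameterization of the Gamma distribution.
import numpy as np
from scipy import stats
from scipy.integrate import quad

alpha, theta = 2.0, 2.0
X = stats.gamma(a=alpha, scale=theta)

# (1) VaR_p(X) is the p-quantile of X.
var_975 = X.ppf(0.975)

# (2) Mean excess loss e_X(d) = E[(X - d) | X > d].
def mean_excess(d):
    return quad(lambda x: (x - d) * X.pdf(x), d, np.inf)[0] / X.sf(d)

# (3) TVaR_p(X) = VaR_p(X) + e_X(VaR_p(X)).
def tvar(p):
    v = X.ppf(p)
    return v + mean_excess(v)

print(var_975, mean_excess(9.488), tvar(0.975), tvar(0.95))
```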
ELEC3575 Electric Power Systems Coursework 2, December 2024

Software/Calculator instructions:
· You are allowed to use MATLAB, a calculator or a computer calculator in this assessment.

Dictionary instructions:
· You are allowed to use your own dictionary in this assessment and/or the Spell Checker facility on your computer. You are not marked on spelling, punctuation or grammar in this assessment.

Assessment Information:
· There are 9 pages and 16 questions to this online assessment.
· You will have 7 days to complete the assessment.
· You are recommended to take a maximum of 5 hours within the time available to complete the assessment.
· This assessment is worth 70% of the overall module mark.
· The deadline for submission of your assessment is 14:00, UK time, 13/12/2024.
· Please submit your assessment to the ‘Submit Your Work’ area in the module’s Minerva page.
· Please include your Student Identification Number (SID) in the title of your submission.
· Please include your Student ID Number and the Module Code at the top of each page of your submission.
· If there is anything that needs clarification or you have any problems, please email [email protected] and copy in [email protected] and we will respond to you as quickly as possible within normal working hours UK time (9:00-17:00 hours, Monday-Friday).
· You must not discuss or share the content of or answers to this assessment with any fellow students, any staff or other contacts outside the school or the University’s professional services. School contacts available to you are detailed in the bullet point above.

Submission instructions:
· You must submit your assignment no later than the submission deadline of 14:00, UK time, 13 December 2024.
· Your work should be submitted using the Turnitin link within the Minerva Module Page.
· You should receive an e-mail receipt from the Turnitin system to confirm that your work has been properly submitted.

This coursework has three main parts (Parts A, B and C) and a total of 16 questions. You will need to carry out Tasks A1-A7 to answer Questions 1-5, and Tasks B1-B4 to answer Questions 6-10. These tasks are just enabling steps you need to take, and there is no need for you to show or explain how you conducted them. Questions 11-16 are fundamental questions aimed at assessing what you have learned in ELEC3575 about electric power systems. You must submit your answers to Questions 1-16 in one single document via the link provided on the module’s Minerva page.

In summary, what you are expected to do for this coursework is:
Part A: Run AC power flow on a power system modelled in MATLAB/Simulink and discuss the results. Answer Questions 1-5.
Part B: Perform DC power flow on this power system in a MATLAB m-file and compare the results with the AC power flow results. Answer Questions 6-10.
Part C: Answer some fundamental questions (Questions 11-16) about electric power systems based on what we have discussed in the lectures, screencasts and the handouts given to you.

Important: You have the option of not running AC and DC power flow studies yourself, in which case you must use the power flow results and network data provided in the "NoPowerFlow.doc" document. However, this option will result in a loss of 26 marks, as you will not be able to answer Questions 3, 4, 9, and 10, which require additional power flow studies. Please write N/A in front of these questions in your solution sheets if you choose this option.
Preamble: In this coursework, the active power of the load at bus 1 depends on your student ID number. In Parts A and B, PL1 is set to 40 + XX/5, where XX are the last two digits of your student ID number. You should round the value of PL1 to the closest integer. For example, if your student ID number is 200123065, you must use the following PL1 value to attempt this exam: PL1 = 40 + XX/5 = 53 MW (rounded to the closest integer).

Your own PL1 value: PL1 = 40 + XX/5 = ______

Part A: AC Power Flow
For Part A you have two options. You can run an AC power flow on the power system shown in Figure 1 (as per Tasks A1-A7) and answer Questions 1-5. Otherwise, you may decide to use the results provided in "NoPowerFlow.doc". If you choose this option, you must mention this at the beginning of your solution sheet and enter N/A in front of Questions 3 and 4, as you will not be able to answer these questions without running the power flow on your own.

Tasks A1-A7 and Questions 1-5
Task A1: Download the “ELEC3575_Power_Systems_Coursework2.slx” file from Minerva and save it on your local PC.
Task A2: You can read the Power Flow-Matlab.docx document provided on how to open/run the file using an installed or online MATLAB. Then find “MATLAB” in the program manager of Windows and start it (or use the online version as explained).
Task A3: Open the downloaded “.slx” file with MATLAB.
Task A4: Now you should see a network in the Simulink workspace. The network represents the 110-kV system shown in Figure 1.
Task A5: Double-click on the lines to check that they are set with the parameters listed in Table 1.
Task A6: Ensure the demand and generation at all buses match those listed in Table 2. You must replace the active power of the load at bus 1 with the PL1 you calculated based on your student ID. In the opened dialogue box, you may check the bus type in the “load flow” tab.

Figure 1. 110 kV power system under study.
Figure 2. Simulink schematic of the power system under study.

Table 1: Line characteristics.
Line | Length [km] | Resistance (R′) [Ω/km] | Inductance (L′) [mH/km] | Capacitance (C′) [µF/km]
Line 1-2 | 160 | 0.016 | 1.30 | 0.009
Line 1-3 | 150 | | |
Line 1-4 | 100 | | |
Line 2-3 | 110 | | |
Line 3-4 | 75 | | |

Table 2: Load and generation data.
Bus No. | Load P (MW) | Load Q (MVAR) | Generation P (MW) | Bus Type
1 | PL1* | 18 | ** | Slack bus
2 | 90 | 20 | 120*** | PV
3 | 40 | 18 | 0 | PQ
4 | 40 | 10 | 0 | PQ
* The value of PL1 is student specific and is calculated as explained in the Preamble section on page 2.
** Note that the exact amount of P for the slack generator is determined by the AC power flow.
*** Bus 2 is a PV bus, and the generator at that bus is set to maintain the voltage of bus 2 at 0.98 pu (V2 = 0.98 pu).

Task A7: To run a load flow for the system, double-click on the “powergui” box, go to the “Apps” tab and select “Load Flow Analyzer”. A window for the power flow results will pop up; click the “Compute” button. The power flow results are shown in the last five columns of the window, similar to what you can see in Figure 3.

Figure 3. Sample power flow results shown in the dialogue box.
Note: The sample results shown in Figure 3 are inaccurate and are only meant to give you an idea of what to expect after running Task A7.

Questions 1-5 (26 Marks Overall)

Question 1
State what your own PL1 is and check whether the voltage magnitudes at the different buses are within the acceptable range (i.e., within ±5% of the nominal voltage). [4 marks]

Question 2
Manually calculate the active and reactive power flows through each line in MW/MVar.
As you know, the power flows at the sending and receiving ends of each line are not necessarily identical and are to be calculated and reported. We model each transmission line simply by a series reactance. In this way, the active/reactive power flow from bus i to bus j can be approximated by

P_ij = (|Vi| |Vj| / Xij) · sin(θi − θj)
Q_ij = (|Vi|² − |Vi| |Vj| · cos(θi − θj)) / Xij

where Vi is the voltage phasor of bus i in kV (with magnitude |Vi| and phase angle θi), and Xij is the reactance of the line connecting bus i to bus j in Ohms. To calculate these, you may import the required data into an m-file and calculate the active and reactive power transferred through each line using a code you write. Note that you will need these power flows through lines for comparison studies later in Question 8. [5 marks]

Question 3
What is the sum of the active powers generated by G1 and G2, and is this sum greater than, equal to or smaller than the sum of the active powers consumed by the loads? Explain what this difference implies. [4 marks]

Question 4
Double-click on generator G2 and go to the “load flow” tab. Set the “Active power generation P(W)” to 150 MW (i.e. 150e6). Run a load flow and evaluate the load flow results. Why have the active and reactive power generated by G1 (as well as the reactive power generated by G2) also changed? [7 marks]

Question 5
Based on our discussion in ELEC3575, name two options for the initial guess when solving the AC power flow problem using iterative methods such as Newton-Raphson. Based on what we have discussed in the lectures, explain why these options can be considered appropriate. [6 marks]

Part B: DC Power Flow
For this part, you must use the data listed in Tables 1 and 2. You are recommended to write an m-file code for the DC power flow calculations. Please reset the load and generation values to match those in Table 2. Note that you modified the generation of G2 in Question 4, so make sure to revert that change in particular. Otherwise, you may decide to use the results provided in "NoPowerFlow.doc". If you choose this option, you must mention this at the beginning of your solution sheet and enter N/A in front of Questions 9 and 10, as you will not be able to answer these questions without running the power flow on your own.

Tasks B1-B4 and Questions 6-10
For your convenience, the line parameters are already saved in a matrix named “Line_Data.mat”. You are supposed to add your code to this m-file such that it does the DC power flow calculations on the power system under study. Your m-file must contain the sections explained in Tasks B1 to B4.
Task B1: Calculate the line reactances in Ohms. Ignore R′, C′ and G′ of the lines and compute the series reactance of each line considering its length, per-unit-length inductance, and the system nominal frequency (50 Hz).
Task B2: Calculate the line reactances in pu. Use a base voltage (110 kV) and a base apparent power (100 MVA) to calculate the base impedance, then convert the line reactances to their per-unit values.
Task B3: Form the nodal admittance matrix including all transmission lines. You may find useful guidance in the lecture notes about this. Eliminate the row and column corresponding to bus 1.
Task B4: Form the vector of net active power injections. The required data are included in Table 2. Do not forget to convert the active power generations to per-unit values, and do not forget to eliminate the reference bus from this vector.

Questions 6-10 (32 Marks Overall)

Question 6
Manually calculate the phase angles of all buses.
Hint: You should do this by multiplying the inverse of the reduced nodal admittance matrix by the vector of net active power injections. Note that the phase angles calculated from this will be in radians. [6 marks]

Question 7
Using the formula below, calculate the active power flows through the lines (from the sending and receiving ends of each line).
Hint: The active power flow from bus i to bus j can be approximated as follows in DC power flow calculations:

P_ij = (θi − θj) / Xij

where θi is the voltage phase angle at bus i in radians, and Xij is the per-unit reactance of the line connecting bus i to bus j. [5 marks]

Question 8
Compare the DC power flow results with those of the AC power flow. You need to compare the voltage phase angles of all buses and the active power flows of all lines obtained by DC power flow with your findings in Part A (the AC power flow results). Explain the reason for their difference. Which one is more reliable, and why? [6 marks]

Question 9
Double-click on load L3 at bus 3 and set its “Inductive reactive power QL” to 100 MVAR (i.e. 100e6). Then run an AC power flow and examine whether the power flow calculation converges. Does the DC power flow calculation change for this new condition? Compare the DC power flow results with those of the AC power flow. Is the DC power flow still accurate and acceptable? Why? [7 marks]

Question 10
Double-click on load L3 at bus 3 and set its “Inductive reactive power QL” to 200 MVAR (i.e. 200e6). Then run an AC power flow. Does the power flow calculation converge? Can you rely on the DC power flow results in this case? Why?
Hint: You will need to use the concept of P-V curves to justify your answer to the “why” part of this question. [8 marks]

Part C: Fundamental Questions
Questions 11-16 (42 Marks Overall)

Question 11
Based on our discussions in ELEC3575, explain the main advantage of using the per-unit system when it comes to power transformers. Is achieving this advantage conditional or always guaranteed? Explain why. [6 marks]

Question 12
Explain the “N-1” security criterion. Detail how you can check the N-1 security criterion for the power system shown in Figure 1, based on the assumptions we made in ELEC3575 regarding power systems. To this end, you need to state what study is required to be carried out and how many times. Which variables will you need to check every time this study is performed? [6 marks]

Question 13
Based on the discussions we have had over the course of ELEC3575, why should the base power in the per-unit system be kept constant across the entire system irrespective of the voltage level? [6 marks]

Question 14
Based on the discussions we have had over the course of ELEC3575, what is the main difference between the power flow study and solving a large linear circuit with lumped elements? [6 marks]

Question 15
Discuss how operating the system closer to its boundaries (e.g. the steady-state stability margin and permissible voltage limits) will affect the accuracy of DC power flow. [9 marks]

Question 16
Assume that a DC power flow has been carried out for a power system (with several lines and buses) where bus 1 is taken as the slack bus. Using the solution of the DC power flow, parametrically calculate the reactive power at the sending and receiving ends of the line connecting bus k and bus m in this system and see what conclusion can be drawn. The line connecting these two buses is simply modelled by the series reactance Xkm. [9 marks]
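For reference, the whole Task B1-B4 pipeline plus Questions 6-7 fits in a few lines of matrix code. The sketch below is in Python rather than the MATLAB m-file the coursework expects, and it assumes every line shares Line 1-2's per-km inductance of 1.30 mH/km (Table 1 lists per-km parameters only for that line) and ignores the student-specific PL1 (bus 1 is the slack and is eliminated anyway), so treat it purely as an illustration of the method, not as the coursework solution.

```python
# Illustrative DC power flow for a 4-bus system; NOT the coursework solution.
# Assumption: all lines share Line 1-2's per-km inductance of 1.30 mH/km.
import numpy as np

f = 50.0                                  # nominal frequency, Hz
Zbase = (110e3) ** 2 / 100e6              # base impedance = 121 ohm (Task B2)
Lp = 1.30e-3                              # H/km, assumed common to all lines

# (bus i, bus j, length in km) from Table 1
lines = [(1, 2, 160), (1, 3, 150), (1, 4, 100), (2, 3, 110), (3, 4, 75)]

n = 4
B = np.zeros((n, n))                      # nodal susceptance matrix (Task B3)
x_pu = {}
for i, j, km in lines:
    x_pu[(i, j)] = 2 * np.pi * f * Lp * km / Zbase   # Tasks B1 + B2
    b = 1.0 / x_pu[(i, j)]
    B[i - 1, i - 1] += b; B[j - 1, j - 1] += b
    B[i - 1, j - 1] -= b; B[j - 1, i - 1] -= b

# Net injections (generation - load) in MW, converted to pu (Task B4).
# Bus 1 is the slack; its entry is irrelevant because its row is eliminated.
P = np.array([0.0, 120 - 90, 0 - 40, 0 - 40]) / 100.0

theta = np.zeros(n)                       # slack angle fixed at 0 rad
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])        # Question 6, in radians

for i, j, km in lines:                    # P_ij = (theta_i - theta_j)/X_ij (Q7)
    pij = (theta[i - 1] - theta[j - 1]) / x_pu[(i, j)] * 100
    print(f"P_{i}{j} = {pij:6.1f} MW")
```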
Lecture 2: Excel Solver
ECON10151: Computing for Social Scientists
September 29, 2024

Excel’s Solver is a versatile tool designed to help users find optimal solutions to complex decision-making problems. Whether you’re allocating resources in finance, managing supply chains in logistics, or scheduling operations, Solver allows you to work within constraints and identify the best outcome, such as maximising profits or minimising costs. For example, imagine you’re managing a factory and need to determine the ideal production levels of two products, given a limited supply of materials and labour. Solver can help you calculate the most efficient allocation that maximises profit while staying within your resource limits. In essence, Solver takes your objective—like increasing profit or reducing expenses—and tests different combinations of variables, subject to the constraints you set. It simplifies decision-making where trade-offs are involved, ensuring the result is not just mathematically sound but also practical for real-world scenarios.

1 How to Install Excel Solver
Solver is an Excel add-in that doesn’t load automatically when you install Excel, but it’s easy to enable. Whether you’re using a Mac or Windows, the steps are straightforward, though they differ slightly between the two operating systems.

1.1 Mac
To install Solver on a Mac, follow these steps:
1. Open Excel.
2. Click on the Tools menu at the top.
3. Select Excel Add-Ins.
4. In the Add-Ins available list, tick the box for Solver Add-In.
5. Click OK.
6. Note:
• If Solver Add-In isn’t listed, click Browse to find and install it.
• If you’re prompted to install the Solver Add-In, select Yes.
7. Once installed, you’ll see the Solver button under the Data tab.

1.2 Windows
To install Solver on Windows, follow these steps:
1. Open Excel.
2. Click File in the top-left corner, then select Options.
3. In the Excel Options window, click Add-Ins.
4. At the bottom of the window, next to Manage, ensure Excel Add-ins is selected, then click Go.
5. Tick the box for Solver Add-In in the Add-Ins available list.
6. Click OK.
7. Note:
• If the Solver Add-In is missing from the list, click Browse to locate and install it.
• If asked to install the Solver Add-In, select Yes.
8. Once installed, you’ll find the Solver button in the Analysis group on the Data tab.

2 A Worked Example: Diet Optimisation
In this example, we’ll explore a practical scenario often faced by fitness trainers and dieticians: how to create a cost-effective yet nutritionally balanced meal plan. Using Excel Solver, we’ll navigate through the constraints of this problem to optimise our meal choices, aiming for the best nutritional outcome at the lowest cost.
Let’s consider four meal options: Salad, Protein Shake, Grilled Chicken, and Pasta. Each of these meals offers different nutritional values and comes with a specific cost.
The challenge is to create a meal plan that meets specific nutritional goals while minimising the total cost. In this case, the goals are:
• At least 1800 calories per day,
• A minimum of 90 grams of protein,
• No more than 45 grams of fat.
Here’s how Solver comes into play. We need to select the number of servings of each meal (Salad, Protein Shake, Grilled Chicken, and Pasta) that together satisfy these nutritional requirements. At the same time, we aim to minimise the total cost of the meal plan. This type of problem is ideal for Solver, as it allows us to work within defined constraints (calories, protein, and fat limits) while optimising for cost.
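Before setting this up in Excel, it helps to see the structure Solver exploits. Written out as a small linear program (a sketch in LaTeX notation, with placeholder symbols c_j, cal_j, prot_j and fat_j standing in for the per-serving cost and nutrition values from the meal table), the problem is:

\[
\begin{aligned}
\min_{x_1,\dots,x_4 \ge 0}\quad & \sum_{j=1}^{4} c_j x_j\\
\text{subject to}\quad & \sum_{j=1}^{4} \mathrm{cal}_j\, x_j \ge 1800,\qquad
\sum_{j=1}^{4} \mathrm{prot}_j\, x_j \ge 90,\qquad
\sum_{j=1}^{4} \mathrm{fat}_j\, x_j \le 45,
\end{aligned}
\]

where x_j is the number of servings of meal j. The x_j are what Solver calls the Changing Cells, the cost sum is the Set Objective, and the three inequalities are the constraints.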
For example, you might have a client with a fixed daily budget but also the need to maintain certain nutritional standards. By inputting the nutritional data and cost for each meal into Excel, Solver can identify the most cost-effective combination that achieves the target nutritional intake. This not only saves time but also ensures that the plan is scientifically backed by quantitative analysis.

3 How to Use Solver in a Nutshell
To effectively use Excel Solver for optimisation problems, follow these key steps. Solver works by varying the values of specific variables (behind the scenes) within the limits you define to find the best possible solution to your problem. Here’s a simple guide to get started:
1. Construct a Detailed Spreadsheet: Start by organising your spreadsheet with all relevant data. Make sure that the problem components—like costs, nutritional values, or other important factors—are clearly laid out so that Solver can interpret them correctly.
• Identify Decision Variables: Decision variables are the values Solver will adjust to find the optimal solution. In Excel, these are also known as Changing Cells. For example, in the diet optimisation problem, the decision variables are the number of servings of each meal option.
• Define the Objective Function: The objective function is what you want to optimise, such as minimising cost or maximising profit. In Solver, this is referred to as the Set Objective. In our diet example, the objective function is the total cost of the meal plan, which we aim to minimise.
• Incorporate Constraints: Constraints are the rules or limits that your solution must follow, such as nutritional needs or budget limits. These ensure that Solver’s solution makes sense in real-world situations. In our case, the constraints are the minimum and maximum nutritional goals, like needing at least 1800 calories and no more than 45 grams of fat.
2. Run Solver: Once your spreadsheet is set up with the decision variables, objective function, and constraints, you’re ready to run Solver. Head to the Data tab, click on Solver, and it will begin adjusting the decision variables within your constraints to find the best possible outcome.
3. Review the Solution: After Solver has finished, it will display the optimal solution directly in your spreadsheet. This will include the values for the decision variables that best meet your objective, while adhering to the constraints. At this point, you can check the results and ensure they are sensible for your particular problem.
By following these steps, you can confidently use Solver for various optimisation challenges, whether it’s finding the best resource allocation, balancing a budget, or creating cost-effective diets. Solver handles the complex calculations, leaving you to focus on analysing the results and making informed decisions.

4 Solving the Diet Optimisation Problem
4.1 Setting up the Spreadsheet
To solve the diet optimisation problem using Excel Solver, we need to organise our data so that Solver can process it efficiently. This involves defining the decision variables, setting up the objective function, and establishing the necessary constraints. Follow these steps to set up your spreadsheet:
Step 1: Define the Decision Variables
• In cells B2:E2, create headings for each type of food (e.g., Salad, Protein Shake, Grilled Chicken, Pasta).
• In cells B3:E3, enter initial trial values for the amount of each food to include in the meal plan.
Make sure at least one of the values is greater than zero to allow Solver to work with a non-empty starting point.
Step 2: Set up the Objective Function
• Reference the number of units of each food from your decision variables by entering =B3, =C3, etc., in cells B7:E7.
– It’s important to reference the number of units rather than manually typing them. By referencing, any changes made to the decision variables (in B3:E3) will automatically update the rest of your calculations. This not only saves time but also reduces the risk of errors, ensuring consistency across your calculations.
• In cells B8:E8, input the cost per unit for each food item (e.g., 6.5 for Salad, 5 for Protein Shake, etc.).
• To calculate the total cost of the meal plan, use the SUMPRODUCT function in cell B10. The formula will look like this:
= SUMPRODUCT(B7:E7, B8:E8)
The SUMPRODUCT function multiplies the number of units of each food (in B7:E7) by its respective cost (in B8:E8), and then sums the results. This function essentially performs element-wise multiplication of B7 × B8, C7 × C8, and so on, then adds them together. The formula is equivalent to:
Total Cost = B7 × B8 + C7 × C8 + D7 × D8 + E7 × E8
This gives the total cost of the diet based on the quantities you have selected for each food.
Step 3: Establish the Constraints
• Recreate the table from the problem statement, listing the nutritional information (calories, protein, fat) for each food item in cells B14:E16.
• Use the SUMPRODUCT function again to calculate the total nutrients based on the amounts chosen in the decision variables. For example, to calculate total calories, use:
= SUMPRODUCT($B$7:$E$7, B14:E14)
This will give you the total calories consumed based on the servings of each food.
– Note: The dollar signs ($) in the formula ensure that the cell references remain fixed (absolute references) when copying the formula to other cells.
• In Column G, specify the inequalities for your constraints (e.g., >= 1800 for calories, <= 45 for fat).

4.2 Running Solver
1. Open the Solver Parameters Dialog Box
Go to Data > Solver to open the Solver Parameters Dialog Box.
2. Set the Objective and Problem Type
In this step:
• In the "Set Objective" box, specify the cell that calculates the objective function (e.g., Cell B10, which calculates the total cost).
• Choose whether you want to minimise or maximise the objective. For our problem, since we want to minimise the total cost, select "Min".
3. Identify Decision Variables
Next, specify the cells that represent your decision variables:
• Click in the "By Changing Variable Cells" box.
• Select the cells containing the decision variables (e.g., B3:E3). These are the cells Solver will adjust to find the optimal solution.
4. Add Constraints
To ensure that Solver respects the constraints in the problem:
• Click the "Add" button on the right.
• In the "Cell Reference" box, select the cell that calculates the total for the constraint (e.g., for calories, choose Cell F14, which sums the total calories consumed).
• Choose the appropriate constraint type (<=, =, or >=) and then input the target value for that constraint (e.g., Cell H14, which specifies at least 1800 calories).
• Click "OK" to add the constraint.
Repeat this process for each constraint (e.g., protein, fat) to ensure Solver respects all nutritional requirements.
5. Make Variables Non-Negative
Ensure all decision variables remain non-negative:
• Check the box titled "Make Unconstrained Variables Non-Negative".
This ensures that all variable values are greater than or equal to zero, meaning Solver won’t suggest negative servings of food.
6. Select the Solving Method
Choose the solving method:
• Select "Simplex LP" as the solving method. This method is appropriate for linear programming problems like this one.
7. Solve the Problem
Once everything is set up:
• Click the "Solve" button to run Solver and find the optimal solution. Solver will adjust the decision variables and provide a solution that minimises the cost while meeting all the constraints.

4.3 Solution
Once you’ve completed the Solver process, you’ll notice the following changes in your spreadsheet:
• The Solver Parameters Dialog Box will close automatically.
• The values in the decision variable cells (e.g., B3:E3) will update to reflect the optimal solution that Solver has found.
• As a result, the objective function and the calculations under the constraints in your spreadsheet will also adjust to reflect the updated decision variables.
This updated information indicates that Solver has successfully applied the optimal solution to your optimisation problem.

4.4 Final Remark
In practical applications, serving sizes are usually represented as whole numbers (e.g., you can’t eat half a serving of grilled chicken). If the optimal solution contains fractional serving sizes, this might appear unusual.
Question: How might we adjust our constraints or model to ensure that the resulting serving sizes are whole numbers?
Answer: To ensure that Solver produces whole numbers for the serving sizes, you need to adjust the constraints to require integer values for the decision variables. Follow these steps:
• Open the Solver Parameters Dialog Box again by clicking on Data > Solver.
• Click the "Add" button to introduce a new constraint.
• In the "Cell Reference" box, select the cells representing the decision variables (e.g., B3:E3).
• In the "Constraint" box, select int (integer), which ensures that the values for the decision variables are restricted to whole numbers.
• Click "OK" to add the constraint, and then click "Solve" again to re-run Solver with the updated settings.
By adding this integer constraint, you ensure that Solver provides solutions with whole-number servings, which makes the results more practical for real-life applications.
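For comparison, the same linear program can be solved outside Excel. Below is a minimal Python sketch using scipy; the costs for Salad (6.5) and Protein Shake (5) are the figures quoted above, while every other cost and nutrition number is an invented placeholder, since the meal table itself lives in the spreadsheet.

```python
# Diet LP sketch with scipy. Only the Salad (6.5) and Protein Shake (5)
# costs come from the lecture text; all other numbers are placeholders.
from scipy.optimize import linprog

cost    = [6.5, 5.0, 8.0, 4.0]   # per serving: Salad, Shake, Chicken, Pasta
cals    = [150, 300, 400, 600]   # placeholder calories per serving
protein = [5, 30, 35, 15]        # placeholder protein (g) per serving
fat     = [8, 4, 12, 10]         # placeholder fat (g) per serving

# linprog minimises cost @ x subject to A_ub @ x <= b_ub, so the ">="
# constraints (calories, protein) are negated; fat is a natural "<=".
A_ub = [[-c for c in cals], [-p for p in protein], fat]
b_ub = [-1800, -90, 45]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4,
              method="highs")
print(res.x, res.fun)            # optimal servings and minimum total cost
```

Integer servings (the final remark above) need a mixed-integer solver instead, e.g. scipy.optimize.milp with integrality set to 1 on all four variables, which mirrors adding the "int" constraint in Solver.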
Master of Science in Enterprise Risk Management
Coding for Risk Management

Course Overview
Risk Management requires programming. Tasks that we might think are specific to business analysts are becoming common throughout companies, and the approach of sharing Excel files across a network is being replaced with firmwide database access, with the data processed using SQL and code. Even at the most senior levels, decision makers must be able to casually grab and view datasets and run scripts. As tools emerge to automate tasks and make analysis more friendly, facility with programming is required to interface with and take advantage of those tools. Ironically, automation requires more facility with programming. The reason for all of this is that the benefit is so high: companies can find information, communicate it, and make decisions faster using automation.

Coding for Risk Management provides the knowledge that students need to thrive in today’s businesses. The course offers a hands-on approach to studying the common tools of SQL for data gathering, Python for data analysis, R for analytics and data visualization, and Amazon Web Services for secure and scalable cloud infrastructure. These tools are explored by coding up risk management concepts that appear in Market Risk, Credit Risk, and Insurance Risk. Students have the opportunity to learn the landscape of different syntaxes and be ready to adopt the local programming language and technical conventions of whatever firm they work at.

Learning Objectives
At the end of the course, students will be able to:
• L1 Code up essential risk management concepts in the Python and R programming languages
• L2 Adhere to and opine on best coding practices
• L3 Decide which languages are best for different tasks
• L4 Rapidly adapt to and learn new syntaxes
• L5 Query internal corporate databases and external web resources to gather and organize data
• L6 Visualize data and create interactive dashboards for decision making

Readings
Forta, Ben (2019). SQL in 10 Minutes a Day. Sams Publishing; 5th edition (December 20, 2019).
Lander, Jared P. (2017). R for Everyone. Addison-Wesley Professional; 2nd edition (June 18, 2017).
Yan, Yuxing (2017). Python for Finance, 2nd Edition. Packt Publishing; 2nd edition (June 30, 2017).
Yan, Yuxing (2018). Financial Modeling Using R, 2nd Edition. Legaia Books USA; 2nd edition (January 18, 2018).

Assignments and Assessments
Weekly programming assignments will enable students to immerse themselves in different programming languages and styles of coding. The assignments will be graded without partial credit; students will need to meet the challenge of producing successful code that accomplishes the assigned tasks. The final exam is a repeat of these coding challenges. Small case studies will allow students to work on more complete programming projects. Case studies may vary across different semesters. Here are examples:

Case Study: FRTB Capital Estimation: Estimate the capital requirement for a small bank based on the bank’s trading data held in a SQL database.
Case Study: Merton Model Default Risk Estimation: Estimate the default risk of a company using Merton’s model.
Case Study: Altman’s Z-Score Model of Credit Risk: Refit the classic Altman’s Z-Score model, based on balance-sheet data, for a specific industry.
Case Study: Agent-Based Modeling: Use an agent-based modeling framework to model the behavior of economic agents.
Case Study: CDO Default Risk Estimation: Use copula modeling functionality in R to estimate and backtest correlated defaults.
Case Study: Counterparty Credit Risk Modeling: Estimate the counterparty credit risk across a firm by gathering its trade data from a database and building an appropriate model for each asset type.
Case Study: Lending Data: Grab Lending Club’s default data and fit a model to it.

Grading
The final grade will be calculated as described below:

FINAL GRADING SCALE
Grade | Percentage
A+ | 98-100%
A | 93-97.9%
A- | 90-92.9%
B+ | 87-89.9%
B | 83-86.9%
B- | 80-82.9%
C+ | 77-79.9%
C | 73-76.9%
C- | 70-72.9%
D | 60-69.9%
F | 59.9% and below

Assignment/Assessment | % Weight | Individual or Group/Team Grade
Mini Case Studies | 60 | Individual
Final Exam | 30 | Individual
Participation | 10 | Individual
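To give a flavour of the case-study work, the Merton model exercise reduces to solving two nonlinear equations and evaluating one normal CDF. A minimal Python sketch follows; every input number is invented for illustration, not drawn from any real balance sheet.

```python
# Merton-model sketch: back out asset value/volatility from equity data,
# then compute a risk-neutral default probability. Inputs are illustrative.
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

E, sigma_E = 4.0, 0.60       # equity value ($bn) and equity volatility
D, r, T    = 8.0, 0.03, 1.0  # debt face value, risk-free rate, horizon (yr)

def merton_eqs(x):
    V, sigma_V = x
    d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    d2 = d1 - sigma_V * np.sqrt(T)
    return [V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E,  # equity = call on assets
            norm.cdf(d1) * sigma_V * V - sigma_E * E]                  # volatility link

V, sigma_V = fsolve(merton_eqs, x0=[E + D, sigma_E * E / (E + D)])
d2 = (np.log(V / D) + (r - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
print("risk-neutral PD =", norm.cdf(-d2))
```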
ENG 105 Thermodynamics
Due Date: 10th December (Tuesday) by 11:59 PM. Late work will NOT be accepted!
Total Points Achievable: 60 points

Bonus Assignment: A case study on working fluid selection for a geothermal plant

Instructions:
• Answer the provided questions and present your answers in a report format.
• Fluid property evaluation: You may use the link to the website provided below to obtain the properties of various working fluids. You will have to download a small thermodynamic state calculator to compute the properties of various fluids. The interface of the application is very user friendly.
• Thermo-State Application (Free): https://thermo-state.github.io/

Case Study: You are tasked with the planning and design of a 50 MW geothermal power plant. A schematic of the geothermal plant is shown in the figure below. A geothermal power plant uses the thermal energy of the hot rocks below the earth’s surface to heat the working fluid. Boreholes are drilled several kilometers into the earth’s surface to reach the hot rocks. Next, a suitable working fluid is pumped through the boreholes and is heated at high pressure by a heat exchanger (“boiler”) near the hot rocks below ground level. Finally, the heated working fluid is passed through a power cycle of choice to generate electricity. The ambient temperature at ground level, TC, is 30 °C. The temperature beneath the surface increases at a rate of 25 °C/km of depth into the lithosphere. You would like the maximum temperature of your cycle, TH, to be at least 200 °C.

a. Determine the depth, H, in km, of drilling needed to reach a TH of 200 °C. (2 pts)
b. Calculate the maximum thermal efficiency that your power plant can achieve. (3 pts)

You have decided to use a Rankine cycle to design the power cycle for your geothermal plant. Your next task will be to make a decision on the working fluid for the power cycle. Three fluids are being considered, namely:
1. Water
2. Ammonia
3. R22
For each fluid, the boiler’s operating pressure should be such that the vapor quality at the turbine exit is at least 88%. Assume the turbine and pump are isentropic, and the working fluid is heated to a temperature of TH at the boiler exit.

c. Determine the operating pressure in the boiler of the cycle for each fluid. (10 pts)
d. Determine the state of the fluid at the turbine inlet and draw the T-s diagram of the power cycle for each of the working fluids. (5 pts)
e. Calculate the thermal efficiencies provided by each fluid. (20 pts)
f. Evaluate the mass flow rate of working fluid required for each fluid to generate a power of 50 MW. (5 pts)
g. Compare the densities of the working fluids at the turbine inlet and outlet and comment on how the densities and total volume of the working fluids at these locations may affect your decision in selecting a working fluid. (5 pts)

Report
h. Present your results from the above analysis in a report no longer than 3 pages, highlighting relevant information and how you arrive at your selection decision. In addition to efficiencies, consider external factors such as material costs, drilling costs, chemical compatibility, etc., and reference any useful resource that helps you arrive at your decision. (10 pts)
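Parts (a) and (b) use only the linear temperature gradient and the Carnot bound; the sketch below (Python, though any calculator works) shows that arithmetic, remembering that the Carnot formula needs absolute temperatures.

```python
# Sanity-check sketch for parts (a) and (b): linear geothermal gradient
# plus the Carnot bound (temperatures in kelvin for the efficiency).
T_C, T_H = 30.0, 200.0    # deg C, from the problem statement
gradient = 25.0           # deg C per km of depth

H = (T_H - T_C) / gradient                       # drilling depth in km
eta_max = 1.0 - (T_C + 273.15) / (T_H + 273.15)  # Carnot efficiency
print(f"H = {H:.1f} km, eta_max = {eta_max:.1%}")
```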
ECON10151 Lecture 3: Creating Charts, Tables and PivotTables
Oct 2024

Learning Outcomes
• Data Visualisation: understand the importance of data visualisation and learn how to create various types of charts (e.g., column, line, pie) in Excel, selecting the appropriate type for different data scenarios.
• Chart Customisation: customise charts by adjusting titles, labels, colours, and styles to enhance clarity and visual appeal.
• Table Creation and Management: be able to create, format, and manage tables in Excel, utilising features such as headers, filtering, and sorting to effectively organise data.
• PivotTable Analysis: learn to create and manipulate PivotTables to summarise and analyse large datasets, using Slicers for interactive data filtering.
• Insights from Data: develop the skills to interpret data presented in charts, tables, and PivotTables, translating it into meaningful insights for informed decision-making.

1 Charts
1.1 Introduction
Excel offers a variety of chart types, each suited for specific data and analysis needs. A good chart is one that effectively communicates data in a clear and accurate manner, enhancing the viewer’s understanding of the information presented. It should have a descriptive title that succinctly explains its purpose, guiding the audience on what to expect. Clear labelling of both the x-axis and y-axis, including any necessary units of measurement, ensures that the viewer can easily interpret the data. Choosing the appropriate chart type is crucial, as different types of data are better represented in specific ways. For instance, line charts are ideal for showing trends over time, while bar charts work well for comparing categorical data. The chart should maintain a consistent scale on both axes to avoid distorting the data, ensuring that the visual representation accurately reflects the underlying values. A well-designed chart is free of unnecessary clutter; excessive colours, 3D effects, or overly complex elements can distract from the key message. If there are multiple data series, a legend is essential to differentiate them, and annotations or highlights can help emphasise important trends or outliers. Including the source of the data adds transparency and credibility. Ultimately, a good chart delivers insights in a visually engaging yet straightforward manner.

Below is a summary of the main types of charts in Excel and their typical uses.
• Column/Bar Chart
Purpose: To compare values across different categories.
Suitable for: Discrete categorical data, such as sales figures for different products or population numbers across countries.
Example: Comparing sales of different product lines within a year.
• Line Chart
Purpose: To display trends over time or ordered categories.
Suitable for: Time-series data or ordered data, such as monthly sales or stock prices.
Example: Tracking the monthly revenue growth over a year.
• Pie Chart
Purpose: To show proportions or percentages of a whole.
Suitable for: Categorical data representing parts of a whole, like market share or survey responses.
Example: Displaying the percentage breakdown of company market share.
• Scatter Plot
Purpose: To visualise relationships between two numeric variables.
Suitable for: Paired numerical data, such as height vs. weight or marketing spend vs. sales.
Example: Examining the correlation between advertising spend and revenue.
• Area Chart
Purpose: To emphasise the magnitude of change over time, often cumulative.
Suitable for: Time-series data where the cumulative value is important, such as sales over time.
Example: Displaying total sales growth across different regions.
• Histogram
Purpose: To show the frequency distribution of numeric data.
Suitable for: Data grouped into ranges, such as test scores or age groups.
Example: Displaying the distribution of exam scores in a class.
• Pivot Chart
Purpose: To summarise and analyse large data sets interactively.
Suitable for: Dynamic data summaries, such as sales data that can be grouped by region, product, or time.
Example: Visualising total sales by region and product.

1.2 Tasks
*Please download the Excel file named L3 Data from Blackboard, which contains the data for the tasks in this lecture.*

Data Description
The dataset in Worksheet Charts contains economic indicators for the United States, United Kingdom, and China from 2004 to 2023. The key variables in the dataset are:
GDP (Gross Domestic Product): The total market value of all final goods and services produced within a country in a given year. GDP is measured in thousands of US dollars and reflects the overall economic performance of the country.
GDP per Capita: This represents the average economic output per person and is calculated by dividing the GDP by the population of the country. It is measured in thousands of US dollars and provides insight into the standard of living or economic prosperity of individuals within each country.
The dataset spans a period of 20 years, offering a detailed view of the economic growth and performance of these three major economies over time. This information will be useful for analysing trends in economic development and comparing the relative economic strength and growth between the countries. The data will also serve as the basis for various analyses, such as calculating growth rates, performing cross-country comparisons, and visualising trends using Excel charts and tables.

1. Creating a Plot (Single Series): Include the GDP data for the United States from 2004 to 2023 and label it as ”United States.”
Step 1 Select the Data
- Highlight the years (from cell A2 to A21) and the GDP data (from cell D2 to D21) for the United States.
Tip: If you are selecting data from columns that are not next to each other, first select one column. Then, hold Control (Mac users: use Command) and select the values in the other column.
Step 2 Insert the Scatter Plot (X-Y Series)
- Go to the Ribbon menu, and select Insert.
- Under the Charts section, choose X Y Scatter.
- Select the Scatter with Smooth Lines option to create the chart.
Step 3 Add Series Label
- Go to the Add Chart Element option in the top menu.
- Select Legend → Right to display ”Series 1” on the chart.
- Right-click on ”Series 1” and choose Select Data from the dropdown menu.
- Click on ”Series 1” in the Select Data Source window, click on the ”Edit” button, then set the Series Name to cell C2, or manually type ”GDP”. (Mac users: Click on ”Series 1” in the Select Data Source window, then set the Series Name from the right side.)
- Click ”OK”.
Step 4 Add Chart Title and Axis Titles
- Change the chart title to ”United States”
- Vertical Axis: Thousands, $
- Horizontal Axis: Year

2. Adding Another Series: Include the GDP per capita for the United States from 2004 to 2023
- Right-click on the chart and select Select Data.
- In the Select Data Source window, click Add (or +).
- For Series Name, select cell C22 or enter ”USA GDP per Capita.”
- For Series Values:
X values: Select the year data from A22 to A41.
Y values: Select the GDP per capita data range (from cell D22 to D41).
- Click OK.
The line representing GDP per capita for the United States appears flat, making it difficult to observe changes over time. This lack of visibility is due to the significant differences in scale between GDP and GDP per capita. To enhance the clarity of the graph, it is necessary to introduce a secondary vertical axis to effectively represent the scale of GDP per capita.

3. Use Two Scales (Secondary Axis)
- Right-click on the line that represents GDP per Capita and select Format Data Series.
- In the Format Data Series pane, click on the Series Options icon (bar chart icon) and choose Secondary Axis.

2 Tables
2.1 Introduction
A well-constructed table in Excel is an essential tool for organising, analysing, and presenting data efficiently. Excel tables enhance data management by automatically formatting and structuring data in a way that makes it easier to sort, filter, and perform calculations. A good table starts with clearly defined headers, ensuring that each column has a descriptive title to help users understand the type of data contained within. This makes it easier to interpret and manipulate the data, particularly when working with large datasets. One of the most valuable aspects of Excel tables is their ability to expand automatically as new data is added, which means any functions or formulas linked to the table will update dynamically. Excel tables also make filtering and sorting simpler. The built-in drop-down menus allow users to quickly find specific values, organise data in ascending or descending order, or apply custom filters based on specific criteria. Another powerful feature is the use of structured references, which replace traditional cell references with column names, making formulas easier to read and less error-prone. Additionally, Excel tables offer automatic formatting options, making it easier to apply alternate row colours, highlight important data, or emphasise specific trends. Tables in Excel also support the use of total rows, which allow for automatic summarisation of data using functions such as SUM, AVERAGE, or COUNT, without requiring users to manually write formulas. Pivot tables, often used in conjunction with regular tables, offer an advanced way to analyse and summarise large datasets by organising and grouping data in various ways. Ultimately, Excel tables provide a structured, flexible, and dynamic platform for data analysis, enabling users to efficiently manage and interpret their data.

2.2 Tasks
Please click on the worksheet named Table, which contains the same data as the Chart worksheet.
1. Convert to table:
(a) Highlight all the data in your worksheet.
(b) Go to the Insert tab and click on Table.
(c) In the dialog box, ensure that the option My table has headers is checked.
(d) Click OK.
(e) Once the table is created, click on any cell within the table to bring up the Table Design (or Table) tab in the Ribbon menu. You will see the default table name, which is Table 1.
2. Compare GDP and GDP per capita across countries in 2023:
(a) To compare the data, sort and filter the table by the year 2023.
(b) Select the table column headers for GDP and GDP per capita.
3. Calculate annual growth rates of GDP and GDP per capita:
(a) Go to the Year header and click the down arrow button, then select ”Clear Filter.”
(b) In a new column next to your data, create a new header titled ’Growth Rates.’ Excel will automatically incorporate this new variable into the table.
(c) Starting from cell E3 (Column E, Row 3), enter a formula to calculate the annual growth rate for each country’s GDP. (We cannot calculate the growth rate for the year 2004 due to missing information, so the growth rate is calculated starting from 2005.)
(d) Use the formula:
Growth Rate = (GDP in Year t − GDP in Year (t−1)) / GDP in Year (t−1)
(e) Once you press Enter, Excel will automatically apply the formula to all the cells in the column below, utilising the table’s dynamic capabilities.
(f) Ensure that the value in the Growth Rates column for the year 2004 is changed to N/A, since the growth rate cannot be calculated for that year.
4. Add a Total Row to calculate the variance of growth rates across countries:
(a) To add a total row, click anywhere in the table and select Table Design (or Table) in the Ribbon.
(b) Tick the box labeled Total Row.
(c) The total row will automatically appear at the bottom of the table.
(d) In the column showing the growth rates of GDP per capita, change the function in the total row to Variance.
(e) Compare the variance of GDP per capita growth rates across the three countries.
i. To analyse the United States, select the filter dropdown for GDP per capita from the Indicators column and choose United States. Then, scroll to the bottom of the list and select Variance by clicking the down arrow.
ii. Repeat the process for the United Kingdom by filtering for United Kingdom and GDP per capita in the Indicators column, and select Variance from the dropdown menu.
iii. Finally, filter for China and GDP per capita in the Indicators column and choose Variance from the dropdown list.

3 Pivot Tables
3.1 Basics of PivotTables
Excel provides powerful tools for summarising, analysing, and visualising large datasets.
Pivot Tables: A Pivot Table is a tool in Excel that allows you to summarise, analyse, and explore large data sets. You can quickly reorganise data based on categories, averages, or sums, and filter the information without altering the original dataset.
Pivot Charts: A Pivot Chart is an extension of a Pivot Table that visualises the summarised data. They are dynamically linked to Pivot Tables, so any changes made in the Pivot Table (such as adding filters or changing rows/columns) will reflect in the Pivot Chart.
Slicers: Slicers are visual filter buttons for Pivot Tables and Pivot Charts that allow you to filter data interactively. You can quickly adjust the data displayed in your Pivot Tables and Pivot Charts by selecting options in slicers.
Dashboards: A Dashboard combines multiple Pivot Tables, Charts, and Slicers into one cohesive display, allowing users to interact with the data and view real-time changes.

3.2 Tasks
Data Description: The dataset contains graduate labour market statistics for England, including employment rates, high-skilled employment, inactivity, and unemployment rates categorised by graduate type from 2007 to 2023.

Table 1: Variables Description
Variable name | Variable description
age group | Age group 16-64 and Age group 21-30
graduate type | Postgraduates, Graduates and Non-graduates
employment rate | Employment Rate: Proportion of the population aged 16-64 who are in employment
hs employment rate | High-Skilled Employment Rate: Proportion of people who are employed in the following occupations: managers, professionals, technicians and associate professionals.
inactivity rate | Inactivity Rate: Proportion of people aged 16-64 who are economically inactive
unemployment rate | Unemployment Rate: Proportion of the economically active population aged 16 and over who are unemployed

1. Creating Pivot Tables
(a) Create Pivot Table 1: Employment Rate by Graduate Type
Go to the Insert tab and find PivotTable.
• Table/Range: Select all data values in the dataset.
• Choose where to place the Pivot Table: New worksheet - Click OK
• PivotTable Fields - Configure the following:
– Rows: year
– Columns: graduate type
– Values: employment rate (Value Field Settings: Average)
• Name this PivotTable ”Pivot1.” This Pivot Table will show the employment rate trends over the years for each graduate type.
• Name this new worksheet PivotTables Practice.
(b) Create Pivot Table 2: Employment Rate by Age Group
• Table/Range: Select all data values in the dataset.
• Choose where to place the Pivot Table: Existing Worksheet: PivotTables!$A$27 - Click OK
• PivotTable Fields - Configure the following:
– Rows: year
– Columns: age group
– Values: employment rate (Value Field Settings: Average)
– Filters: graduate type
• Name this PivotTable ”Pivot2.” This table allows filtering employment rates by specific age groups.
(c) Create Pivot Table 3: Employment, Inactivity, and Unemployment Rates by Graduate Type
• Table/Range: Select all data values in the dataset.
• Choose where to place the Pivot Table: Existing Worksheet: PivotTables!$A$50 - Click OK
• PivotTable Fields - Configure the following:
– Rows: graduate type
– Values: employment rate, unemployment rate, and inactivity rate (Value Field Settings: Average)
• Name this PivotTable ”Pivot3.” This table compares employment, inactivity, and unemployment rates across graduate types.

2. Designing an Interactive Dashboard
(a) Creating Pivot Charts
i. Pivot Chart 1: Employment Rate by Graduate Type
• Convert Pivot Table 1 into a Line Chart to visualise employment rate trends by graduate type. [Select Data - Insert tab - Choose Line Chart]
ii. Pivot Chart 2: Employment Rate by Age Group
• Convert Pivot Table 2 into a Clustered Column Chart to compare employment rates across age groups. [Select Data - Insert tab - Choose Column Chart - Select Clustered Column]
iii. Pivot Chart 3: Employment, Inactivity, and Unemployment Rates by Graduate Type
• Convert Pivot Table 3 into a Stacked Column Chart to compare these rates across graduate types. [Select Data - Insert tab - Choose Column Chart - Select Stacked Column]
(b) Adding Slicers
• Click on any cell in the PivotTable, go to the PivotTable Analyse tab, and click on Insert Slicer.
• Slicer 1: Add a slicer for year to filter data by specific years. Slicer tab - Report Connections - choose Pivot2 and Pivot3
• Slicer 2: Add a slicer for graduate type to filter by graduate type. Slicer tab - Report Connections - tick all PivotTables
(c) (Homework) Designing the Dashboard Layout
• Create a new worksheet, remove the gridlines, and insert a text box with the title ’Graduate Labour Market’. Additionally, insert three text boxes, each with a title for one of the three charts.
• Left Side: Place the two slicers (for year and graduate type).
• Middle Section: Add Pivot Chart 3 to display employment, inactivity, and unemployment rates.
• Top Right Side: Add Pivot Chart 1 for employment rates by graduate type.
• Bottom Right Side: Add Pivot Chart 2 to display employment rates by age group.
• Adjust the colour of the text, and remove the outline from the charts and text boxes.
Personalise the dashboard based on individual preferences.
(d) (Homework) Interact with the Dashboard
Test the dashboard by adjusting the slicers. Select different years and graduate types to see how the charts update in real time. Please find the example included in the L3 worked example Excel file.
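For students who later move from Excel to Python, the PivotTable workflow above maps directly onto pandas. A minimal sketch follows; the file, sheet, and column names are assumptions chosen to mirror Table 1, not names taken from the actual L3 Data file.

```python
# Pivot Tables 1 and 3 reproduced in pandas for comparison. File, sheet,
# and column names are assumed to mirror the lecture's Table 1.
import pandas as pd

df = pd.read_excel("L3 Data.xlsx", sheet_name="PivotTable")  # names assumed

# Pivot 1: average employment rate by year and graduate type
p1 = df.pivot_table(index="year", columns="graduate_type",
                    values="employment_rate", aggfunc="mean")

# Pivot 3: average employment/unemployment/inactivity rate by graduate type
p3 = df.pivot_table(index="graduate_type",
                    values=["employment_rate", "unemployment_rate",
                            "inactivity_rate"], aggfunc="mean")
print(p1.head(), p3, sep="\n\n")
```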
Financial Computing Final Exam, Fall 2024

Question 1: The trinomial model for pricing options
In class, we discussed the binomial model for pricing European options. Here, we extend this to the so-called trinomial model, where we assume that in two subsequent time steps an asset can increase in price (“up-tick”), decrease in price (“down-tick”), or stay the same. Specifically, we have the following cases:

S_{t+∆t} = u·S_t (up-tick), S_{t+∆t} = S_t (no change), S_{t+∆t} = d·S_t (down-tick), with u = e^{λσ√∆t} and d = e^{−λσ√∆t} = 1/u.

Additionally, we may derive the probabilities of an “up-tick” (pu), a “down-tick” (pd), or staying the same (pm) as:

pu = 1/(2λ²) + ((r − σ²/2)√∆t) / (2λσ)
pm = 1 − 1/λ²
pd = 1/(2λ²) − ((r − σ²/2)√∆t) / (2λσ)

(a) Show that if λ = 1, then the trinomial lattice and the binomial lattice models are identical.
(b) Assume λ = √2. Implement the trinomial lattice in C++. Assume that we divide our expiry time T into nsteps discrete time units (i.e., ∆t = T/nsteps). Use your code to calculate the call and put prices for a European-style option. You may use the code that we wrote in class and adapt it as needed.
(c) Expectedly, your code will be very slow. Implement the memoized version of your trinomial lattice. Your answers in parts (b) and (c) should be identical, but the memoized version should run faster.
NOTE: If you successfully code part (c), then you can omit part (b). That is, part (c) supersedes part (b). If you are unsure about your memoized version, then code both to secure partial credit. Good luck!

Question 2: Pricing a barrier option using simulation in C++
In class, we talked a lot about the distribution of the price of a single asset/instrument, and agreed that it follows a log-normal distribution. We also discussed continuous barrier options: specifically, we want you to focus on “up-and-out” and “up-and-in” barriers here.
(a) Assume that we know the starting price S0, the expected return r, the volatility σ, and the expiry time T of a single asset. We also have the option barrier B and the exercise (strike) price E. Create the necessary C++ code that simulates the path of that asset when the time T is discretized into nsteps time steps (that is, the time steps are equal to ∆t = T/nsteps). To test your code, feel free to use nsteps = 30.
(b) Use your code from part (a) to simulate more paths; specifically, create Nsims paths for the asset. Then, estimate the probability of hitting the barrier from below. Estimate this probability for values of the barrier ranging from 1.5·S0 to 6·S0.
(c) Finally, using simulation, price the continuous barrier call option (for the “up-and-out” and “up-and-in” cases). Plot the change in the price as the number of simulations changes from Nsims = 10 to Nsims = 1000 in increments of 10. Your code should produce two text files as output: the first should contain the “up-and-out” call option price for each number of simulations (one per line), and the second the “up-and-in” call option price for each number of simulations (one per line).
NOTE: As an example, feel free to use S0 = $1 and T = 5 years, with r = 0.08 and σ = 30%. However, please define these variables in the beginning of your code so that I can edit and change them at will! Thanks in advance.

Question 3: Network optimization in pricing
We have discussed the binomial model in class for pricing options. In this question, we will generalize this to more than two possible outcomes (“up-tick” and “down-tick”). We will also use a metric to quantify how prominent a specific price at a given time is in predicting the final stock price.
Specifically, assume that we have an asset with expiration date T that is decomposed into discrete time steps (like in the binomial model we saw in class) such that ∆t = T/nsteps. Assume that nsteps = 30 for the duration of the exercise. Now, for a starting asset price of S0, the next asset price at time ∆t will be S1, followed by S2, and so on.

This is where it gets interesting: assume that the asset price is random, with a probability distribution written as a vector π^(t). That is, at each time step t, the probability that the asset price moves from current price S_t = s to another price S_{t+1} = s′ is equal to p^(t)_{s,s′}. Moreover, assume that the possible prices that an asset can take are already known and are equal to s1, s2, . . . , sm. Hence, an asset can move from S0 = s to some price sk at time t = 1, followed by price sℓ at time t = 2, with probability p^(0)_{s,sk} and then p^(1)_{sk,sℓ}. For simplicity, we assume that each movement is independent, and hence the probability of the path S0 = s → sk → sℓ is equal to p^(0)_{s,sk} · p^(1)_{sk,sℓ}. For example, if you have 6 time steps and m = 6 possible stock prices, then you should have the network of Figure 1.

(a) Formulate a linear program (a mathematical formulation with only linear constraints) that identifies the most probable path from the start (the node at time t = 0 that has price S0) to a terminal node (any of the nodes in the last time step). Note that, by construction, every such path will have exactly nsteps − 1 arcs.
(b) Based on your linear program in part (a), write a new linear program that identifies the nscenarios most probable disjoint paths (as a whole). For the purposes of this exercise, assume that two paths are disjoint if they do not use any of the same edges. See Figure 2 for an example of three disjoint paths.
NOTE 1: Your formulation (linear program) in part (b) should return a set of paths that cumulatively have the maximum probability. For example, assume that we have 5 paths with probabilities equal to 40%, 25%, 25%, 5%, 5%. Furthermore, assume that the first (most probable) path uses at least one edge that is also present in the second path, as well as another edge that is in the third path. Then, your linear formulation should return paths 2 and 3 as the top 2 most probable paths (with a cumulative probability of 50%), rather than path 1 and either of paths 4 or 5. Please let me know if you need more clarification on this point.
NOTE 2: You do not need to write any code for this exercise.

Figure 1: An example of the network of prices for 6 time steps. Notice that it is a complete m-partite graph, where every node in a previous time step can eventually lead to any node in the subsequent time step.
Figure 2: An example of three disjoint paths. Note that we are allowed to visit the same node! However, we are not allowed to use the same arc for two or more paths.
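For Question 2(a), the following is a minimal sketch of the path simulation, assuming the exact one-step discretization of geometric Brownian motion. The parameter names (S0, r, sigma, T, nsteps) follow the exam statement; the seed, the barrier value B = 2·S0, and the output format are illustrative choices, not exam requirements.

#include <cmath>
#include <iostream>
#include <random>
#include <vector>

int main() {
    // Exam parameters, defined up front so they are easy to edit (per the NOTE):
    const double S0 = 1.0, r = 0.08, sigma = 0.30, T = 5.0;
    const double B = 2.0;          // barrier level; 2*S0 chosen for illustration
    const int nsteps = 30;
    const double dt = T / nsteps;  // note: dt = T/nsteps, not nsteps/T

    std::mt19937 gen(42);                          // fixed seed for reproducibility
    std::normal_distribution<double> Z(0.0, 1.0);

    std::vector<double> path(nsteps + 1);
    path[0] = S0;
    bool hit = false;
    for (int i = 1; i <= nsteps; ++i) {
        // Exact one-step GBM update: S_{t+dt} = S_t * exp((r - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
        path[i] = path[i - 1] * std::exp((r - 0.5 * sigma * sigma) * dt
                                         + sigma * std::sqrt(dt) * Z(gen));
        if (path[i] >= B) hit = true;              // barrier reached from below
    }
    std::cout << std::boolalpha
              << "S(T) = " << path[nsteps] << ", hit barrier: " << hit << "\n";
    return 0;
}

Looping this over Nsims independent paths gives the hit-probability estimate of part (b); the same paths can then feed the discounted payoff averages for the "up-and-out" and "up-and-in" cases of part (c).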
PSTAT 173 - RISK THEORY FINAL EXAM
DECEMBER 12, 2023
The exam begins at 8:00am Pacific on December 12, 2023. You have 150 minutes to complete the exam. Provide sufficient reasoning to back up your answer, but do not write more than necessary. Show all work; do not just write an answer. For questions that use a calculator, make sure to write down the formula you are using that is typed into the calculator. Please make your final results easy to read.

Problem 1. A claim size random variable X follows a loglogistic distribution. You are given the following information.
• The 20th percentile of this claim size distribution is 350.
• The 80th percentile of this claim size distribution is 1400.
Determine P(X > 700).

Problem 2. The dollar value of the total damage done to a home due to a fire follows a Pareto distribution with α = 2.7 and θ = 45,000. ABC Fire Insurance Company writes homeowner's fire insurance policies. Each policy has the following coverage modifications.
• A deductible of $500.
• ABC Fire Insurance Company will pay 92% of the loss after the deductible is met.
• ABC Fire Insurance Company will pay a maximum amount of $100,000 per fire insurance policy.
(i) (6 points) Compute the expected payment the ABC Fire Insurance Company will make per policy.
(ii) (4 points) If the dollar value of the total damage done to a home due to a fire is inflated by 15% and all coverage modifications remain unchanged, compute the expected payment the ABC Fire Insurance Company will make per policy.

Problem 3. You have been asked to build a compound frequency model of the usual form
S = M1 + M2 + ··· + MN
where N has a negative binomial distribution with parameters r = 3 and β = 2, and M has the distribution P(M = 1) = 0.3, P(M = 2) = 0.4, P(M = 4) = 0.3. Compute P(S ≤ 4).

Problem 4. You are given the following information on the losses for a particular line of business.
• For year 2002, loss sizes followed a uniform distribution between 0 and 5000.
• In year 2002 the insurer pays 100% of all losses.
• Inflation of 6% affects all losses uniformly from year 2002 to year 2003.
• In year 2003 a deductible of 200 is applied to all losses.
Compute the loss elimination ratio (LER) of the 200 deductible on year 2003 losses.

Problem 5. You are given the following distributions for independent loss random variables X1, X2, and X3.
x    f1(x)   f2(x)   f3(x)
10   0.25    0.2     0.4
20   0.75    0.2     0.6
30   –       0.6     –
Compute the net stop-loss premium to cover the aggregate loss S = X1 + X2 + X3 for a deductible of d = 45.

Problem 6. An insurance company has decided to establish its full-credibility requirements for an individual state rate filing. The full-credibility standard is to be set so that the observed total amount of claims underlying the rate filing would be within 5% of the true value with probability 0.95. The claim frequency follows a Poisson distribution and the loss severity distribution has pdf
f(x) = kx, 0 ≤ x ≤ 200
for some constant k. (The density is 0 for x > 200.) Determine the expected number of claims necessary to obtain full credibility using the normal approximation.

Problem 7. In the population at large there are good and bad drivers. Good drivers make up 80% of the population and in one year have zero claims with probability 0.8, one claim with probability 0.15, and two claims with probability 0.05. Bad drivers make up the other 20% of the population and have zero, one, or two claims with probabilities 0.3, 0.5, and 0.2, respectively. A certain driver has one claim in year 1 and two claims in year 2.
Compute the expected number of claims for this driver for year 3 given this information.
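For Problem 7, one way to organize the Bayesian updating (shown only as a sketch of the structure; the two years' claim counts are assumed independent given the driver's type):

\[
P(\text{good} \mid 1, 2) = \frac{0.8\,(0.15)(0.05)}{0.8\,(0.15)(0.05) + 0.2\,(0.5)(0.2)} = \frac{0.006}{0.026} = \frac{3}{13},
\]

so, with per-year claim means of \(0.15 + 2(0.05) = 0.25\) for good drivers and \(0.5 + 2(0.2) = 0.9\) for bad drivers,

\[
E[N_3 \mid 1, 2] = \tfrac{3}{13}(0.25) + \tfrac{10}{13}(0.9) = 0.75.
\]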
Department of Electrical Engineering
EE E4321. Problem Set #9. FinFET layout, Final project completion.
Due: December 11, 2024, 5 PM EST by electronic submission

(This part is to be completed individually)
1. In this problem, you will get familiar with some of the capabilities, issues, and challenges of modern FinFET technology using the TSMC N16 enablement.
(a) Using schematic simulation with the standard-VT devices (pch svt mac and nch svt mac), estimate the fanout-of-four (FO4) delay for this technology. By running an appropriate simulation with the nFET device, determine the subthreshold slope in this technology. Use a supply voltage of VDD = 0.8 V. (A refresher on the definition appears at the end of this problem set.)
(b) Create a DRC- and LVS-clean layout of an inverter with four-finned nFET and pFET devices of minimum length. Please include taps to nwell and substrate in your layout.

(This part is completed with your project partner.)
2. For the Design Project final submission, please submit your writeup as a single PDF file attachment submitted by only one person of your two-person team. Please note clearly in the document the names of the two team members in your group. To submit the layout, please stream out your design according to the instructions on the class website and attach it to your submission as well.

This assignment is the completion of your final project and should be done with your project partner. This is the final push to complete the entire microprocessor core design. You need to complete three more (very simple) dataflow blocks, design the instruction decoder, and assemble the final design (schematics and layout). You then need to verify the functionality of the entire design with Ultrasim from extracted layout (please use a capacitance-only extraction; wires are short enough that this should be sufficient), and determine your critical path timing with Spectre from extracted layout. In addition, you will want to verify the functionality of your design at clock speed with a (small) number of patterns in Spectre. Through simulation, calculate the average power dissipated by your core in running a "typical" code stream. Try to determine which opcode execution (and which data pattern for this opcode) gives the worst power (in general, this can be quite difficult to do for a complex processor!).

There are three remaining dataflow blocks that you will need to design and lay out.
● 8-bit level-sensitive latch. Use a gated-feedback, complementary-pass-gate design. You should have one cell layout that you can duplicate 8 times. You will want to "separate" the accumulator flip-flop and use the latch positions shown in the datapath diagram of Figure ??. As a result, three of these latches will be used in the datapath.
● 8-bit 3-to-1 multiplexer. You can implement this with a 3-nFET basic cell with four fully-decoded select lines (orthogonal).
● 8-bit bus driver. This is a tristate driver with an enable signal. Once again, a single cell can be duplicated 8 times.

In addition to these dataflow blocks, you need to design the instruction decoder. You will implement this as a static pseudo-NMOS PLA with the inputs and outputs shown in Table 1. instr<3:5> will go directly to the memory as the address. Take advantage of espresso for logic minimization and make use of don't cares to reduce the number of product terms required. Be sure to make a good pencil-and-paper floorplan before you assemble things in Virtuoso. Remember to make good use of layout hierarchy.
You may assume that the instr<0:5> signal arrives before the rising edge of phi1 (but after the rising edge of phi2), as if output from a phi2 active-high latch. bus<0:7> has similar timing behavior. This means that the control signals going to the shifter, adder, and MUX must be latched by a phi1 latch. You will have to add this latch to the design.

Signal            Direction   Description
instr<0:2>        input       opcode to decode
subtract          output      subtract control for the adder
mux cntl<0:2>     output      select lines for the 3-1 multiplexer
drv enable        output      enable signal for the tristate bus driver
mem write         output      write control for the memory
mem read          output      read control for the memory
shift bypass      output      shifter bypass
load bus          output      load the internal bus externally
store bus         output      load the internal bus to the external bus
Table 1: Inputs and outputs of the instruction decoder PLA

You should turn in the following:
● Waveforms that document at-speed operation of your core.
● Printouts of the key schematics of your core design.
● A short write-up that documents your implementation decisions, your power estimates, your floorplanning and layout planning, and your functional verification.
● Layout submitted electronically for evaluation.
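For problem 1(a), a reminder of the standard definition being asked for (a sketch of the extraction, not a prescribed procedure): sweep VGS at a small fixed VDS, plot log10(ID) versus VGS, and fit the linear subthreshold region. The inverse slope of that fit gives

\[
SS = \left( \frac{d \log_{10} I_D}{d V_{GS}} \right)^{-1} = \ln(10)\,\frac{kT}{q}\left(1 + \frac{C_{dep}}{C_{ox}}\right),
\]

which is bounded below by roughly 60 mV/decade at 300 K; the strong gate control of a FinFET should bring the measured value close to that limit.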
ITEC 320 FINAL EXAM (PRACTICE)

Part 1: Multiple Choice Questions
1. When applied to new data points, logistic regression provides a column in the RapidMiner output called "Confidence(1)." What does the number in that column tell us?
A) The probability that the new data point is similar to what we've observed in the original dataset
B) The probability that the outcome for the new data point will be 1
C) The accuracy of the logistic regression model
D) The probability that logistic regression was the correct model
2. When comparing different predictive methods for numeric outcomes, how do we determine which is the most accurate?
A) Select the method with the highest root mean squared error
B) Select the method with the lowest root mean squared error
C) Select the method with the highest classification accuracy
D) Select the method with the lowest classification accuracy
3. A binary independent variable called SpecialOrder in a linear regression model for predicting ProcessingTime of orders (measured in days) has a coefficient of 3.36. What does that number mean?
A) Each additional special order leads to an average increase in processing time of 3.36 days.
B) The level of significance of SpecialOrder is 3.36.
C) Special orders require an average of 3.36 days to process.
D) On average, special orders have processing times that are 3.36 days longer than regular orders.
4. When trying to figure out what predictive method will work best, all of the following are benefits of using cross validation EXCEPT:
A) Cross validation is often the best predictive method.
B) Cross validation enables each method to produce the same accuracy or error metric.
C) Cross validation provides measures of predictive accuracy rather than measures of fit.
D) Cross validation helps prevent overfitting.
5. The table below shows the performance of a classification model on our dataset. What percentage of the model's "1" predictions turned out to be correct? (A formula refresher follows the exam.)
A) 74.88%
B) 38.53%
C) 23.86%
D) 5.25%
6. Which operator in RapidMiner should be used to create a forecasting model for the time series shown in this line chart?
A) Exponential Smoothing
B) Apply Forecast
C) Holt-Winters
D) Decision Tree

Part 2: Problems
1. (10 pts.) Why is it better to use a 5-period moving average to make predictions than it would be to either A) use the most recent value as your prediction, or B) use the average value for the whole time series as your prediction?
2. (10 pts.) The classification tree below is used to predict whether or not a charity's request for donations by mail will be successful (indicated by a 1). The following independent variables are used:
previous_donor: a binary variable equal to 1 if the person has given to this charity before, and 0 if not
months_since_last_donation: for previous donors, the number of months since their last donation
income: the average household income of the person's neighborhood
a) (5 pts.) Does the classification tree predict that the following person will donate?
previous_donor = 1
months_since_last_donation = 6
income = $127,500
b) (6 pts.) Briefly (1-2 sentences) explain the logic that this tree is using to make predictions.
3. (15 pts.) A publishing company is analyzing a dataset of its published books to try to figure out characteristics of a book that make it more or less likely to become a bestseller.
They have run a logistic regression model using four of these attributes as independent variables, and obtained the following results (the dependent variable is 1 if the book was a bestseller, and 0 if it was not):
a) (5 pts.) Which two of these attributes were significant?
b) (5 pts.) If a book has lots of action verbs, what effect does that have on the estimated probability that the book will be a bestseller?
c) (5 pts.) What does this logistic regression output tell us about the effect of the length of the book (in pages) on the probability that the book will be a bestseller?
4. (25 pts.) This problem is based on analysis of a dataset from a non-profit called Connect the Planet, which aims to develop infrastructure and help individual countries plan to improve their citizens' internet access. They believe that the two primary factors associated with a country's internet usage are its economic productivity (GDP per capita) and its adult literacy rate, and are trying to develop a predictive model to capture these relationships. The attribute being predicted is the country's number of frequent internet users per 100 people.
a) (5 pts.) The screenshot below shows the subprocess within the Cross Validation operator. Why are we getting an error? What needs to be done to fix it?
b) (5 pts.) After fixing the issue from part a, we ran the process and got this result: What does that 13.626 number mean (conceptually, not mathematically), and what should we do with it?
We have created the following linear regression model using this dataset, used in the next two questions:
c) (5 pts.) What is the relationship between a country's adult literacy rate and its number of frequent internet users per 100 people?
d) (5 pts.) This regression model would predict that a country with a per capita GDP of $0 and an adult literacy rate of 0% would have -24.331 frequent internet users per 100 people. Why does it give us an obviously incorrect prediction?
e) (5 pts.) RapidMiner's linear regression output omits several pieces of information that we get when using Excel. Identify one such number, and explain what it means.
5. (15 pts.) This problem is based on a telecom company's dataset containing all of its mobile plan customers from last month whose plans were due to expire at the end of the month. The dataset includes, for each customer, the monthly cost of the customer's plan (in $), the total quantity of data the customer used last month (in GB), and a binary variable indicating whether or not the customer still has a mobile plan with the company (1=Yes, 0=No). If a customer still has a mobile plan with the company, it means that either they renewed their previous plan or they changed to a different plan. The company would like to be able to predict more accurately which customers are likely to remain and which are likely to leave.
a) (5 pts.) We ran cross validation using k-nearest neighbors with k=5, k=10, and k=30. The overall accuracies of the models were:
k = 5: 68.04%
k = 10: 72.33%
k = 30: 73.46%
Of these three models, which is best at predicting whether customers will stay?
The company applied one of the models from the previous question to five customers whose plans are due to expire soon, and obtained the results shown below, used in parts b & c:
b) (5 pts.) How many of the five customers does the model predict will stay?
c) (5 pts.)
A manager at the company believes that customers are likely to leave if they have low-cost plans and high data usage, because the company slows down these customers’ download speeds once their data usage exceeds a given threshold. Do the results from applying k-nearest neighbors to these five customers support the manager’s claim? Why or why not?
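For Part 1, Question 5 (whose performance table did not survive in this copy), the quantity being asked for is, by definition, the model's precision on class "1", which is worth distinguishing from overall accuracy:

\[
\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN},
\]

where TP and FP count correct and incorrect "1" predictions, and TN and FN count correct and incorrect "0" predictions.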
ECN 3620 Econometrics Fall 2024 Course Wrap Up

Thank you for taking Econometrics with me this semester. I certainly enjoyed this class, and I hope you feel the same way.

R Basics: Import data; generate new variables; create graphs; get sample statistics.

Basic Statistics: Sample distribution and population distribution; standard Normal distribution and t distribution; Jarque-Bera test and related concepts; finding corresponding probabilities and critical values from the Z table; Value at Risk; Central Limit Theorem and confidence intervals; estimate vs. estimator.

Simple Linear Regression: Coefficient-related diagnostics: t test and p value; hypothesis test and confidence interval of a coefficient; R², adjusted R², and their components; Standard Error of Estimate vs. Standard Error of Forecast; within-sample and (pseudo) out-of-sample forecasts; MAE (Mean Absolute Error), RMSE (Root Mean Square Error), MAPE (Mean Absolute Percentage Error) (formulas are collected at the end of this wrap-up).

Multiple Linear Regression: Tests of linear combinations of parameters, e.g. H0: -β1 = β2 or H0: 2β1 = 3β2 + 12; joint significance test: F test; variable selection; dummy variables, interaction of dummy variables with other variables; residual-related diagnostics: homoskedasticity vs. heteroskedasticity; applications: hedonic pricing, seasonality and trend, Interrupted Time Series (ITS) design.

Special Topics and Models in Multiple Regression: Omitted variable bias: the direction of omitted variable bias; multicollinearity: symptoms and remedies; models with low R²; nonlinear models: LnY = a + bX and LnY = a + bLnX; probability models: linear, logit, and probit; probability models: odds and odds ratio (optional); probability models: marginal effects, partial effects.

Causality Models: Causality problems; Interrupted Time Series (ITS): graphs, regressions, and interpretations; Difference-in-Differences: graphs, tables, regressions, and interpretations.

Time Series Models: Components of time series data; lag function and difference function; mean stationarity, first and second differences; AR, MA, ARMA, and ARIMA models.

The following is a checklist for Econometrics modeling when you start your project:
1. Do you have relevant data for the question you are after? Do you have enough observations (at least 30 or so)?
2. If you have data, is there error in the data? You can check the mean, maximum, and minimum. Graph the data and see whether there are outliers.
3. Are you using the right unit of measurement? This is especially important when you are doing medical and healthcare research.
4. What type of data do you have? Time series, cross-sectional, panel, other?
5. If the data is cross-sectional or panel, you are most likely to choose a structural model, in which case you should check:
a. What independent variables should be included? Are you imposing a causality relationship? If so, is it valid?
b. What functional form are you employing? Linear or nonlinear? Why?
c. Are the estimated coefficients consistent with theory or your expectations? If not, what can explain the difference?
d. What is the model's explanatory power? If it is low, are the coefficients biased? Can you still use the parameters to forecast or make policy and business decisions?
e. Is multicollinearity a problem?
f. Does the error term satisfy homoskedasticity? Is there serial correlation in the error term?
6. If the data is a time series, you are most likely to choose a time series model, in which case you should check:
a. Graph the data. Is it at least mean-stationary?
Are the first difference, second difference, seasonal difference, or a log transformation needed?
b. After the necessary conversion, what is the correlogram of the data? What does it tell you about low-order and high-order correlations?
c. Use AIC or SIC to find the appropriate model.
d. After comparing a series of test statistics and forecasting evaluations, fine-tune the model.
e. Is the residual white noise? Conduct forecasting.
7. In some cases, you may have forecasts from the structural model, the time series model, and judgment forecasting from the experts at the same time. Then, your best forecast will most likely be an average of the three. This is often called ensemble forecasting.

Where can I get more resources: data, books, and websites?
One of the most asked questions is where to get more resources such as data, books, or websites for more information on Econometrics. Here is a list of resources you may find helpful and interesting.

Data Resources
IPUMS: https://ipums.org/ Integrated Public Use Microdata Series. IPUMS provides census and survey data from around the world integrated across time and space. IPUMS integration and documentation make it easy to study change, conduct comparative research, merge information across data types, and analyze individuals within family and community context. Data and services are available free of charge.
ICPSR: (http://www.icpsr.umich.edu/icpsrweb/ICPSR/) The Inter-University Consortium for Political and Social Research is an international consortium of about 700 academic institutions and research organizations. ICPSR maintains a data archive of more than 500,000 files of research in the social sciences. It hosts 16 specialized collections of data in education, aging, criminal justice, substance abuse, terrorism, and other fields.
Current Population Survey: http://www.census.gov/cps/ The Current Population Survey (CPS), sponsored jointly by the U.S. Census Bureau and the U.S. Bureau of Labor Statistics (BLS), is the primary source of labor force statistics for the population of the United States. The CPS is the source of numerous high-profile economic statistics, including the national unemployment rate, and provides data on a wide range of issues relating to employment and earnings. The CPS also collects extensive demographic data that complement and enhance our understanding of labor market conditions in the nation overall, among many different population groups, in the states, and in substate areas.
CRSP: (http://www.crsp.com/) Provides monthly, quarterly, or annual updates of end-of-day and month-end prices on all listed NYSE, AMEX, and NASDAQ common stocks with basic market indices. Available on all Cutler workstations.
WRDS: (http://wrds.wharton.upenn.edu/) Wharton Research Data Services (WRDS) is a web-based business data research service from The Wharton School at the University of Pennsylvania. It is known for its holdings of historical financial data from CRSP and COMPUSTAT. This data covers over 30,000 companies and includes security prices and trading volume, and income and balance sheet items. WRDS also contains stock market indices, interest rates, mutual fund and executive compensation data, and a wide array of macroeconomic time series.
Bureau of Labor Statistics, Bureau of Economic Analysis: (http://www.bls.gov/, http://www.bea.gov/) Generally macroeconomic data, such as the employment rate, wage rates by region, the consumer price index, GDP by region, imports and exports, etc.
Economagic: (https://fredaccount.stlouisfed.org/public/datalist/159?pageID=8) There are more than 200,000 time series for which data and custom charts can be retrieved. Though the greatest utility of this site is the vast number of economic time series, and the easily modified charts of that same data, an overlooked facility of great utility is the availability of Excel files for all series. The majority of the data is USA data. The core data sets involve US macroeconomic data (that is, for the whole US), but the bulk of the data is employment data by local area -- state, county, MSA, and many cities and towns.
Economic Data – FRED: (http://research.stlouisfed.org/fred2/) Welcome to FRED® (Federal Reserve Economic Data), a database of 19,599 U.S. economic time series. With FRED® you can download data in Microsoft Excel and text formats and view charts of data series.
US Census: (http://www.census.gov/) Public resources from the US Census Bureau including population, economic, industry, and geography studies. The information can be accurate at the zip code level.
MEPS: (http://www.meps.ahrq.gov/mepsweb/) The Medical Expenditure Panel Survey (MEPS) is a set of large-scale surveys of families and individuals, their medical providers, and employers across the United States. MEPS is the most complete source of data on the cost and use of health care and health insurance coverage.
NHANES: (http://www.cdc.gov/nchs/nhanes.htm) The National Health and Nutrition Examination Survey (NHANES) is a program of studies designed to assess the health and nutritional status of adults and children in the United States. The survey is unique in that it combines interviews and physical examinations.
Pew Research Center: (http://people-press.org/dataarchive/) A collection of survey data from the Pew Research Center For The People & The Press. Survey data are released five months after the reports are issued and are posted on the web as quickly as possible.

Books
Business Forecasting (5th edition) by J. Holton Wilson and Barry Keating
*Introductory Econometrics: A Modern Approach by Jeffrey Wooldridge (pre-bundled with the student version of Eviews)
*A Guide to Modern Econometrics by Marno Verbeek
Econometric Analysis (5th Edition) by William H. Greene
Introduction to Econometrics by James H. Stock and Mark W. Watson
Analysis of Financial Time Series by Ruey Tsay
*Applied Econometric Time Series (3rd edition) by Walter Enders
Introductory Econometrics for Finance by Chris Brooks
* Indicates a personal favorite.

Additional Resources on Using R
If you want to learn R programming, the following are recommended readings. They are all freely available on the internet.
• Forecasting: Principles and Practice, Rob Hyndman and George Athanasopoulos https://otexts.com/fpp3/
• Using R for Introductory Econometrics, by Florian Heiss https://www.urfie.net/
• Applied Econometrics Time Series, Walter Enders https://time-series.net/home
• R for Data Science, Hadley Wickham and Garrett Grolemund https://r4ds.had.co.nz/
• UCLA R resources https://stats.oarc.ucla.edu/r/
• Econometrics Academy https://sites.google.com/site/econometricsacademy/

Websites
UCLA Academic Technology Services: http://stats.idre.ucla.edu/ A website by the Institute for Digital Research and Education at UCLA. It has lectures, examples, and videos on R, SAS, SPSS, and STATA.
Econometrics Academy: https://sites.google.com/site/econometricsacademy/home?authuser=0 The Econometrics Academy is a free online educational platform and non-profit organization.
Its mission is to offer free education on Econometrics to anyone in the world.
Using Python for Introductory Econometrics: http://www.upfie.net/ This book introduces the popular, powerful, and free programming language and software package Python, with a focus on the implementation of standard tools and methods used in econometrics.
Using R for Introductory Econometrics: http://www.urfie.net/ This book introduces the popular, powerful, and free programming language and software package R, with a focus on the implementation of standard tools and methods used in econometrics.
IBISWorld: https://ezproxy.babson.edu/login?url=https://my.ibisworld.com Search by NAICS code or keyword to find thousands of U.S. industry research reports; includes Global Industry reports with some China coverage.
The Economist: https://libguides.babson.edu/economist
• The app and economist.com—distinctively distilled analysis
• Digital newsletters—curated topical opinion
• Audio version & podcasts—immersive listening
• The digital archive—all our content since 1997
• Webinars and conferences—intelligent debate and informed analysis
• Flagship franchises—The World In and 1843 magazine
WSJ Economic Forecasting: http://online.wsj.com/public/page/economic-forecasting.html A collection of forecasts of the US macroeconomy, including GDP, the unemployment rate, housing, and inflation. Forecasts come from various sources.
Institute of Business Forecasting: www.ibf.org Offers a variety of programs for business professionals and the quarterly Journal of Business Forecasting: Methods & Systems, a jargon-free journal on forecasts.
Forecasting Principles: www.forecastingprinciples.com/ The Forecasting Principles site summarizes useful knowledge about forecasting so that it can be used by researchers, practitioners, and educators. It has links for researchers, practitioners, and educators, and databases.
Federal Forecasters Consortium: http://www.va.gov/HEALTHPOLICYPLANNING/FFC_2014.asp The Federal Forecasters Consortium is a collaborative effort of agencies in the United States Government, as well as other interested parties in the academic and not-for-profit communities, who share an interest in the practice, planning, and use of forecasting activities by and within the Federal Government.
Science Direct: http://libguides.babson.edu/content.php?pid=17543&sid=1839426 Select Science Direct. You need to log in using your Babson email and password. It is the world's largest electronic collection of science, technology, and medicine full-text. It has over 2,500 peer-reviewed journals and more than 11,000 books. There are currently more than 9.5 million articles/chapters, a content base that is growing at a rate of almost 0.5 million additions per year.
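As a quick reference for the forecast-evaluation measures listed in the wrap-up (with y_t the actual value and ŷ_t the forecast over n periods):

\[
\text{MAE} = \frac{1}{n}\sum_{t=1}^{n} |y_t - \hat{y}_t|, \qquad
\text{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n} (y_t - \hat{y}_t)^2}, \qquad
\text{MAPE} = \frac{100}{n}\sum_{t=1}^{n} \left| \frac{y_t - \hat{y}_t}{y_t} \right|.
\]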
UFCXR-15-3 Autonomous Agents and Multiagent Systems 2024-2025

Section 1: Overview
In short, you need to develop a multi-agent system using a simulation environment. You need to submit diagrams, code, slides, and a video. See the Assessment Specification for more details.
Module learning outcomes assessed:
1. Apply agent-based analysis and design skills and techniques, appropriate to solving complex AI problems.
2. Identify situations where agent-based problem analysis, system design and programming paradigms are applicable and create software that exploits them.
3. Appraise the concepts of multi-agent systems, including cooperation, competition, and learning, and develop multi-agent systems to solve complex problems.

Submission Rules
This assessed work must be submitted as follows:
1) Submission of Activities 1 to 4:
• Commit and push the solution to YOUR OWN INDIVIDUAL GITHUB REPOSITORY THAT IS CREATED ONCE YOU ACCEPT THE ASSIGNMENT. There will be one GitHub assignment for each submission point and NO OTHER GITHUB REPOSITORIES WILL BE ACCEPTED.
• Make sure your GitHub username is linked to your student username and ID.
• For each submission you will create a file named solution.md in the root of your repository with a brief description of the steps needed to run your solution. Likewise, you should include links to the solution of each activity included in the submission. There is no word count limit here, but this is expected to be brief yet sufficient to run your solution and appreciate the work you have completed.
• Python code should be organised within the src folder and should include a requirements.txt file indicating clearly the libraries used and their versions.
• For all programming activities, the code should be documented thoroughly. The implementation should be aligned with the design.
• Every file that you use in your solution should be copied into the repository. No links to partial solutions copied or uploaded to external repos or sites.
2) Submission of Activity 5:
• Upload the file via the Blackboard submission point. See the deliverables expected for each activity.

Section 2: Assignment Specification
Robot-Assisted Farming
In the last few years, robots have been used in agriculture. They are useful for improving efficiency in farming operations, as robots can work tirelessly in different weather and terrain conditions. A recent example from a California farm shows how picker robots are introduced to pick strawberries. While these robots have the ability to make some decisions, a human operator is still required. For this coursework, assume the hardware for two types of next-generation farming robots has been built with the following characteristics:

Amphibious Picker Robots
These are basic robots that are able to operate fully autonomously. These robots have an arm with the ability to cut strawberries from trees as well as store and carry them. Once a robot's storage is full, it needs to return to the base. These robots are able to move over rough farm terrain and also across watery surfaces (e.g. rivers) without compromising the payload. However, the speed of these robots is limited when they are moving through water. Moreover, these robots cannot move through trees, so they need to use the existing lanes.

Explorer Drones
These are drones that are able to explore autonomously and send signals to other robots (e.g. to picker robots) about the location of the crops that need to be picked. These robots move in 6 directions: front, back, left, right, up, and down.
These drones fly over trees taking pictures of the crops; computer vision algorithms are then used to calculate the urgency for collection of the crops in a given area.

Operation Modes
Basic Mode
• Exploring and moving around the farm according to each type of robot's capabilities.
• Drones locate crops to be collected and wait for the picker robots; once the pickers get there, the drones continue exploring.
• Amphibious picker robots move randomly to locate the crops to be collected and pick the crops until their storage is full, in which case they return to the base station to unload the crops collected and continue picking.
• Robots have a battery whose energy is consumed as time progresses.

Extended Operation
• Communication between drones and picker robots to avoid random movement of the picker robots: picker robots only leave the station once a drone sends a signal with the coordinates of where the crops need to be collected. Drones can send messages to individual robots or broadcast the location of the crops to all the robots.
• Crops age and grow as the simulation progresses. Once an area has been fully picked, new seeds are sown, re-starting the ageing cycle. The environment handles the ageing and re-growing of strawberries.

Environment
• The configuration of the environment, including the location of robots and all elements in the environment, can be random; in other words, robots are intended to work with any configuration of the same group of elements.
• The number of robots of each type to be deployed for a particular mission can be pre-defined by operators.

Assume you have been hired to develop the software for the next generation of these robots. You are asked to complete the following activities:

Activity 1: Agent System Development - Release 1.0 (up to 25% assignment marks)
For this activity you will develop a multi-agent system that will operate in a simulated environment. In your system the robots work according to the basic mode of operation. Your simulation environment should look as presented below. While the environment representation might not be exactly the same as the figure, your environment must include the elements present in this figure. As part of your development you need to identify the actions, states, and transitions of the autonomous agents that will control the robots, considering the available hardware features. Your design will define the way robots make decisions to trigger the required actions, and the implementation should then follow this design. Testing should cover unit tests of environment creation and the key functions implemented for each agent.
Figure 1: Environment Model

Activity 2: Robot Farming Games (up to 10% assignment marks)
You need to propose a game where 2 farming robots like the ones described in this specification are interacting, and analyse their interactions using a reference game. Pick one of the common games studied in game theory (see list here) and use it as a reference to define a situation where only two farming robots need to compete or cooperate. For example, a basic situation would be that the two robots are facing each other aiming to move in opposite directions, similar to the game of chicken discussed in the lectures. Your game should contain all the elements of a game as described in the course lectures. (An illustrative payoff matrix is given after Activity 5 below.)

Activity 3: Agent System Development - Release 2.0 (up to 20% assignment marks)
On top of your development for your Release 1.0 submission, you need to implement the extended mode of operation as specified above.
Your solution should include a switch for the user to change between modes of operation. Likewise, you will need to evaluate the performance of the simulated environment with at least two relevant KPIs (Key Performance Indicators) of your choice.

Activity 4: Agent System Development - Release 2.1 (up to 20% assignment marks)
On top of your development of your Release 2.0, you need to propose and implement a novel mode of operation. Your solution should include a switch for the user to change between the previously implemented modes of operation and the novel mode. The novel mode should be a new feature added to your agent system on top of the functionality already implemented. The focus should be on the novelty and complexity of the new feature rather than on quantity. The feature should be substantially different to those already implemented and include application of concepts covered in the module sessions or researched. Examples of potentially new features are:
• Agent learning, for example, agents using reinforcement learning or another form of learning to achieve environment goals.
• A new cooperation/competition approach among agents, for example based on the game proposed in Activity 2.
• A new decision model for agents.
• A new exploration approach for an agent.
• A new type of agent relevant to the simulation, with a distinct behaviour, that needs to interact with existing agents.

Activity 5: Explanatory Video (25% assignment marks)
Create an explanatory video of your solution covering the system's functional aspects, the evaluation of the robots' performance, and details of how you have designed and implemented your system, with emphasis on the features you found most challenging to implement. The rules for the video are as follows:
• The length of the video should be MAX 10 min. At this point, the video will be stopped and additional content, if any, will not be considered as part of the solution.
• You are expected to use materials (diagrams, code, software and slides) developed in the previous activities.
• If your solution includes concepts or approaches not covered in module sessions you should give a brief introduction to those concepts/approaches before discussing how you have implemented them.
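As an illustration for Activity 2 (the payoffs below are hypothetical and only meant to show the required elements: players, strategies, and payoffs), a chicken-style encounter between two picker robots meeting head-on in a single lane could be tabulated as:

                      Robot 2: Yield    Robot 2: Proceed
Robot 1: Yield        (0, 0)            (-1, +1)
Robot 1: Proceed      (+1, -1)          (-10, -10)

Each pair gives (Robot 1's payoff, Robot 2's payoff); both robots proceeding models a collision, the worst outcome for both, which is what makes the game of chicken an apt reference for this situation.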
Module Code and Title: IFB205TC Process Management
School Title: School of Intelligent Finance & Business
Assignment Title: Individual Assessment (100%)

Assessment Tasks:
Analytic Questions (60%)
1. Compare and contrast the strategies and supporting business processes of Spring Airlines and Air China. That is, do some research by yourself and compare their business strategies in terms of product dimensions, targeted market segments, and process architectures. (30 marks) (max. 500 words)
2. Given that your business reception capacity is limited, all customers register through an initial check-in process. At his or her turn, each customer is seen by a consultant and then exits the process, either with a booking first or with admission to the service directly. Currently, 55 customers per hour arrive at your business, 10 percent of whom are admitted to the service directly. On average, 7 people are waiting to be registered and 34 are registered and waiting to see a consultant. The registration process takes, on average, 2 minutes per customer. Among customers who receive bookings, the average time spent with a consultant is 5 minutes. Among those admitted to the service directly, the average time is 30 minutes.
A triage system has been proposed for the process described above. Under the proposed triage plan, entering customers will be registered as before. They will then be quickly served by an intern who will classify them as normal customers or VIP customers. While normal customers will move on to an area staffed for regular care, VIP customers will be taken to the VIP area. Planners anticipate that the initial classification will take 3 minutes. They expect that, on average, 20 customers will be waiting to register and 5 will be waiting to be seen by the intern. Recall that registration takes an average of 2 minutes per customer. The intern is expected to take an average of 1 minute per customer. Planners expect the regular area to have, on average, 15 customers waiting to be seen. As before, once a customer's turn comes, each will take 5 minutes of a consultant's time. The corporation anticipates that, on average, the VIP area will have only 1 customer waiting to be served. As before, once that customer's turn comes, he or she will take 30 minutes of a consultant's time. Assume that, as before, 90 percent of all customers are normal customers. Assume, too, that the intern is 100 percent accurate in making classifications. (Assume the process to be stable; that is, the average inflow rate equals the average outflow rate.) (30 marks)
a. Under the proposed plan, how long, on average, will a customer spend in the business process?
b. On average, how long will a VIP customer spend in the business process?
c. On average, how many customers will be in the business process? (A Little's Law refresher is given after the grading criteria below.)

Simulation Reflection (40%)
Students are required to write a reflective report based on three iterations of hands-on practice in class. It should be 1,000 words (+10%), references excluded. Students will form teams to run the business in class. For each iteration, pay attention to the approach adopted and the process performance, for instance: inventory, number of products assembled, material flow, quality, utilisation of resources, teamwork, and anything else worthy of reflection.
The analysis should evolve covering the following iterations:
• Iteration 1: Make-to-Stock (Push) Scenario
• Iteration 2: Make-to-Order (Pull) Scenario
• Iteration 3: Customization Scenario
To conclude, the reflective analysis should be based on the simulation and cover the issues identified in the iterations, together with relevant optimization approaches. The reflection shall relate to the topics of Lectures 7 & 9; ensure your discussion meets the requirements.

1. Submission Deadline
A penalty will be applied to late submissions per University policy.
2. Format Requirements:
Maximum 1000 words for each analytic question.
Assignments must include the cover page with your student ID.
Please name the file IFB205TC-PM-YourID.
All assignments must be typed, proof-read, and professional in appearance, and saved as PDF only.
The font should be 'Times New Roman' or 'Calibri Light', size 12, with 1.5 line spacing. Text should be justified at the left and right margins.
Use Harvard referencing if citations and references apply.

Grading / Implications of Content:
90-100%: Extremely thorough and authoritative execution of the brief. Evidence of significant independent research. Reflective, perceptive, well structured, showing significant originality in ideas or argument. Aptly focused and very well written, few areas for improvement. Potentially worthy of publication.
80-89%: Thorough execution of the brief. Well structured, clearly argued, signs of originality and/or independent critical analytical ability. Supported by independent research. Materials well utilized, well focused and well written. Displays mastery of the subject matter.
70-79%: Good execution of the brief. Well focused, knowledgeable. Strong evidence of reading beyond the basic texts. Displays a very good knowledge of the subject matter.
60-69%: Well structured and well focused with strong evidence of reading beyond the basic texts. Thorough and comprehensive in approach. Displays a good knowledge of the subject matter and, where appropriate, displays a sound grasp of relevant theories and concepts. Approach generally analytical.
50-59%: Competently structured, reasonably well focused and comprehensive, but tending to be descriptive in approach. Limited evidence of reading beyond the basic texts.
40-49%: Tending to rely entirely on lecture materials. Almost entirely descriptive in approach; limited knowledge and understanding of the subject matter displayed. Partial and/or containing significant errors and/or irrelevancies; poorly structured.
30-39%: Inadequate execution of the brief. Highly partial and/or containing serious errors. Contents partly or substantially irrelevant; poorly structured. Displays little knowledge of the subject matter.
0-29%: Seriously inadequate execution of the brief. Failure to focus upon the question; seriously short of or even devoid of theoretical underpinning; large sections irrelevant.
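For Analytic Question 2 above, the workhorse identity is Little's Law, which relates the average number of customers L at a stage, the arrival (throughput) rate λ, and the average time W spent at that stage (a refresher, not a full solution):

\[
L = \lambda W \quad \Longleftrightarrow \quad W = \frac{L}{\lambda}.
\]

For example, with 55 customers per hour arriving and, under the proposed plan, 20 customers on average waiting to register, the average wait before registration is W = 20/55 hr ≈ 21.8 minutes. Summing W (or L) over every queue and service stage that a customer class visits gives the totals that the three parts ask for.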
LA120 Fall 2024 Final Project
A Topo Plan that Elevates

What are GRADING PLAN GRAPHICS?
"There are two basic types of grading plan. The first is the conceptual grading plan that communicates the design intent but is not usually an accurate or engineering representation of the ground form (Figure 1). The audience for this plan is normally the client, who may be an individual, an architect, or a public agency, and its purpose is to make the proposed concept easily understandable. The second is the grading plan executed as part of a set of construction documents (Figure 2). The purpose of this plan is to interpret accurately the design intent and communicate this information effectively to the grading contractor. The plan, in conjunction with the technical specifications, must provide complete instructions concerning the nature and scope of the work to be performed as well as a solid basis for estimating the cost involved. The success of a project depends on the accuracy, completeness, and clarity of the construction drawings." (Storm, 2012, p. 108, a.k.a. your textbook)

In this class, we have learned the essential skills of reading, calculating, and drawing topographic forms. In other words, we have learned how to grade. It is time to implement your thoughts and skills in your design work. Use the grading mind, topographic plans, and other thinking and representation skills to not only make the design more complete but also help you reflect on the design itself. This rational review draws your attention to previously neglected aspects that can improve your design and trigger innovative solutions that could then become the design. Grading is not an add-on feature of landscape architecture; it is not something you can "figure out later" or "let the civil engineers do." GRADING IS DESIGN. Earth is always your most important medium, if not the only medium, either covered or exposed, planted or removed.

"Practical" is not always pleasant to the ears of design students, who may (as I did) interpret it as banal or compromising. Building a livable space, or even just thinking about achieving a livable design on the human scale, is a considerable achievement and, at the same time, a basic responsibility of landscape architects. The grading training gives your design roots in the ground and brings it one step further into reality. To those guest critics who always say, "You can never build that in real life," the grading will prove them wrong (or not 100% right).

The goal of this practice is not to steal civil engineers' jobs nor to provide the most legible grading plan for site construction, but simply to generate a "Topo Plan" as a tool to represent your design's topographic condition and the rational foundations of your proposed functions and programs. Also, use it as a design method and integrate the grading into your design reflections. We aim at this process of acknowledging and reducing the discrepancy between great design and great practice. With the traditional two-dimensional media and the legacy of grading plans, we will consider preparing a more descriptive plan that penetrates the viewer's pictorial perception with the topographic form, rather than a beautiful rendering with mere colors and shapes hanging on the wall.

Site Selection: Pick one of your design works from a previous or current design studio that you want to develop further with more grading thinking. The site should ideally be smaller than 12,000 sqft to sustain enough human-level grading detail.
Therefore, it could be a small part of your design from a larger planning work that you can develop. You cannot make up a project for the final project; you must own and know this project to a certain degree and be able to generate a spatial understanding with better resolution. We hope the process and the outcome can help you improve your design and even find their way into your portfolios or your studio presentations.

Scope and Representation:
1. Basic slope analysis on proposed surfaces (a note on slope conventions follows the schedule below).
2. Overall water drainage; show the diversion of water away from significant built elements.
3. Pedestrian and vehicular accessibility. Consider ADA requirements, stairs, ramps, and parking design.
4. Existing and proposed contour lines or other representations of topographic change.
5. Cut and fill analysis, diagrammatic or statistical.
6. Tree protection/removal or building siting.
7. Other features you'd like to represent.

Deliverables and Participation:
1. (60%) Drawings: a plan of the design area, and supplementary sections if needed. On paper only. Even today, with all the modern spectacles and technologies, we shouldn't stop investigating the two-dimensional platform. This has little to do with preserving the traditional media and more to do with the reception of information in the most direct way. Consider your drawing a tool or document one can fold/roll and carry around. The information is clear and concise as soon as it's pulled out. No headset, no goggles, no model assemblage, no mouse dragging on one lap while standing with the other leg. Unfolding the paper means unfolding the topographic form of the site.
2. (20%) A reflection note: How does the process of composing the Topo Plan change your design and thinking? I know you are probably reading this a week or a day before the due date to fulfill the requirement, but this page is not for me. If the course is too practical and boring compared to the other courses offered by the department, this reflection is a perfect chance to help you synthesize your beautiful design concepts with actual human (or whatever you are designing for) experience. Is the process a compromise or an improvement? Why does it (or does it not) change your design? What should be taken into consideration in the future? This reflection offers you an embodied perception of your own design and design thinking; please spend enough time thinking and drafting.
3. (20%) The lecture series and workshop (or field trip). In the last four weeks of the semester, the three weeks of final project workshops will offer a series of lectures and desk critique opportunities. Attendance is mandatory.

Schedule:
The teaching team has to approve your Site Selection by November 1st.
November 15, WEEK 12: Final Project Workshop
November 22, WEEK 13: Final Project Workshop
November 29, WEEK 14: THANKSGIVING BREAK (NO CLASS)
December 6, WEEK 15: Final Project Workshop
December 13, WEEK 16: RRR Week. Final Project Due.
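A note on items 1 and 3 of the scope (standard conventions, summarized here for reference; always verify against the code governing your site): slope is expressed as

\[
S = \frac{\text{rise}}{\text{run}} \times 100\%,
\]

and the usual ADA benchmarks are a maximum running slope of 1:12 (about 8.33%) for ramps, a maximum of 5% for accessible walkways without handrails, and a 2% maximum cross slope.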
CEN405 – Sustainable Drainage Systems
ASSIGNMENT TITLE: Problem Solving Practice (Final)

Aims: To enhance students' understanding of how the fundamental principles and concepts of Sustainable Drainage Systems are applied in the analysis and solving of practical problems related to water resources engineering and drainage engineering.

Requirement: Kindly write your student ID clearly on your submission.

At the end of this assignment, a student should be able to (Learning Outcomes Assessed: A, B, C, D, E, F):
A. On the basis of a firm understanding of advanced hydraulic and hydrological concepts, predict the frequency and magnitude of extreme storm events for the design and optimization of urban drainage systems.
B. Assess the functionality, application and selection of Sustainable Urban Drainage Systems (SUDS) and appreciate the significance of water re-usage and utilization.
C. Appreciate the impacts of flooding, estimate flood risks and apply flood estimation procedures for flood protection and control.
D. Design permeable pavements and appraise the importance of road drainage.
E. Appraise the importance of consulting the local and international regulations and directives (Environmental Agencies and Local Authorities).
F. Appreciate the significance of sustainability in water resources engineering and sustainable urban drainage systems.

Recommended Reading: Refer to Lecture Notes/Tutorials on Learning Mall Online.

Note: For calculation questions (Questions 1 and 2): the values of A, B, and C should be obtained from your own student ID number. For example, if the last three digits of your own student ID number end with 123, then take A as 1, B as 2, and C as 3.
For questions on review and practical ideas (Questions 3 and 4): the review and your practical ideas should be supported by at least three recent journal articles (i.e. journal articles published from 2014 to 2024). For the review, you should review the most important points in those journal articles and describe the essence and relevance of the selected articles. For the practical ideas, kindly note that you can refer to these same journal articles, but they are meant to be supporting information; you should focus on providing your own in-depth analysis of the scenario and giving practical ideas for solutions. As a guide, the total length of the review and your practical ideas can be around 3 pages. The journal articles should be cited in-text throughout your answers, and the complete references should be given in APA style at the end of your answers.

Question 1 [25 marks]
a) The following dataset shows the observed times between rainfall events at a given location, i.e. x = {5.A, 7.B, 10.C, 9.A, 6.B, 8.C, 11.A, 13.B, 9.C, 4.A, 8.B, 9.C, 10.A, 8.B, 5.C, 6.A, 9.B, 12.C, 14.A, 6.B, 8.C, 7.A, 11.B, 10.C, 12.A, 9.B, 10.C, 5.A, 6.B, 8.C, 7.A, 9.B, 11.C, 12.A, 15.B, 11.C, 8.A, 10.B, 9.C, 4.A, 6.B, 5.C, 9.A, 11.B, 12.C, 5.A, 7.B, 10.C, 9.A, 8.B, 5.C, 7.A, 11.B, 12.C} in days. Using the method of moments and the given data, determine the parameter θ of the probability density function (pdf) f(x) = θcos(x) for 0 ≤ x ≤ π. [10 marks]
b) It has been hypothesized that the observed data in (a) follow an exponential distribution f(x) = λe^(−λx) for x ≥ 0, where the parameter λ = 1/x̄ and x̄ is the average value of the dataset. Perform a χ²-test to examine the goodness of the fit at a significance level of 10%.
c) Based on your answer in (a), provide a brief qualitative discussion of how such a probability density function can be used in flood frequency analysis. [5 marks]
Question 2 [25 marks]
a) The record of the annual peak floods of a stream is given as follows: X = {10A.B, 9B.C, 8A.B, 9B.C, 11A.B, 12B.C, 14A.B, 15B.C, 13A.B, 9B.C, 11A.B, 10B.C, 8A.B, 12B.C, 15A.B, 11B.C, 10A.B, 9B.C, 8A.B, 15B.C, 12A.B, 13B.C, 15A.B, 10B.C, 9A.B, 10B.C, 11A.B, 14B.C, 12A.B, 13B.C, 9A.B, 12B.C} m³/s. The data have been hypothesized to follow a Gumbel distribution. Determine the return period of the event that the annual peak flood on the river exceeds 120 m³/s. [10 marks]
b) Using the same dataset as in (a), determine the probability of the annual peak flood being less than or equal to 115 m³/s. Also, calculate the probability of this event occurring at least 3 times in the next 4 years. [10 marks]
c) Based on your answer in (a), provide a brief qualitative discussion of the use of the Gumbel distribution in flood frequency analysis. [5 marks]
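As with Question 1, a Python sketch of the Question 2 calculations may help verify hand work. It again assumes the example digits A = 1, B = 2, C = 3, and fits the Gumbel parameters by the method of moments (one common choice; your lecture notes may prescribe a different fitting method). The last lines treat "at least 3 of the next 4 years" as a binomial probability.

import numpy as np
from math import comb, exp, sqrt, pi

A, B, C = 1, 2, 3  # assumption: the example digits from the note above
bases = [10, 9, 8, 9, 11, 12, 14, 15, 13, 9, 11, 10, 8, 12, 15, 11,
         10, 9, 8, 15, 12, 13, 15, 10, 9, 10, 11, 14, 12, 13, 9, 12]
# entries alternate the patterns ...A.B and ...B.C down the list
X = np.array([10 * b + (A + B / 10 if i % 2 == 0 else B + C / 10)
              for i, b in enumerate(bases)])

mean, std = X.mean(), X.std(ddof=1)
alpha = sqrt(6) * std / pi        # Gumbel scale by the method of moments
u = mean - 0.5772 * alpha         # Gumbel location (Euler-Mascheroni constant)
F = lambda x: exp(-exp(-(x - u) / alpha))   # Gumbel CDF

T = 1 / (1 - F(120))              # return period for exceeding 120 m^3/s
p = F(115)                        # P(annual peak <= 115 m^3/s)
p3of4 = comb(4, 3) * p**3 * (1 - p) + p**4  # at least 3 of the next 4 years
print(f"T = {T:.1f} years, p = {p:.3f}, P(>=3 of 4) = {p3of4:.3f}")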
Question 3 [25 marks]
a) Global warming has resulted in many detrimental environmental implications, and one of the consequences is an increased frequency of urban flooding. The task is to investigate the impacts of global warming on urban flooding using a case study in one country (this can be any country in the world). You will need to review recent developments on this issue (e.g. the occurrence of urban flooding due to global warming and the harm that global warming has caused in relation to urban flooding) and then elucidate your own practical ideas on the following aspects: i) how to improve the current design of sustainable urban drainage systems (SUDS) so that they can address the effects of urban flooding caused by global warming, and ii) how to develop more effective prediction models for monitoring global warming so that flooding impacts can be minimized. [12 marks]
b) There are many low impact development (LID) practices which can be adopted to address urban flooding. Among these approaches, permeable pavement is one of the most commonly adopted techniques, and its efficiency in mitigating the impacts of urban flooding has been proven. In an attempt to achieve a circular economy, utilizing waste tires in permeable pavements is one desirable technique. The task is to investigate the adoption of permeable pavements which employ waste tires in their design, using a case study in one country (this can be any country in the world). You will need to review recent developments on this issue (e.g. the improved characteristics of permeable pavements which utilize waste tires and their performance in mitigating urban flooding) and then give your own practical ideas on the following aspects: i) how to assess the environmental impacts of permeable pavements which utilize waste tires, e.g. using the life cycle assessment method, and ii) how to evaluate the cost savings associated with permeable pavements which utilize waste tires, e.g. using the life cycle costing method. [13 marks]
Question 4 [25 marks]
a) An effective flood management system is crucial to protect human lives and minimize damage to property. The task is to investigate the legislation related to flood management using a case study in one country (this can be any country in the world). You will need to review recent developments on this issue (e.g. the specific rules and regulations established to deal with flooding and the major implications of this legislation), and then elucidate your own practical ideas on the following aspects: i) how to establish new legislation to meet the needs of today's world, and ii) how to improve the response and recovery process to better manage flooding situations. [12 marks]
b) Sustainable urban drainage systems (SUDS) have been proven to be effective in minimizing the deleterious impacts of urban flooding. In today's globalized world, artificial intelligence (AI) is widely employed in many applications. The task is to investigate the implementation of AI in SUDS using a case study in one country (this can be any country in the world). You will need to review recent developments on this issue (e.g. the types of AI used and their general working principles) and then elucidate your own practical ideas on the following aspects: i) how to decide on an appropriate AI method to implement in SUDS, and ii) how to overcome the current limitations associated with implementing AI in SUDS. [13 marks]
Objective: Students will query the Northwind database (already provided on Canvas – Northwind_DB.sql) using SQL to answer analytical questions. The resulting tables will be exported to Tableau for visualization.
Instructions
1. Database Setup:
o Import the provided Northwind database schema into your MySQL environment.
2. SQL Queries:
o Use SQL to answer the following analytical questions. Save your queries and export the resulting tables to .csv format.
3. Tableau Visualization:
o Import the .csv files into Tableau.
o Create appropriate visualizations to interpret and present your findings for each question.
o Create a dashboard to present your charts.
o Save your Tableau workbook and submit it along with the .csv files.
4. Submission Requirements:
o SQL scripts for each query.
o Exported .csv files for each question.
o Tableau workbook with your visualizations.
Questions
1. Sales Analysis by Country: write a query to calculate the total sales (Quantity × Price) for each country.
2. Top Customers by Revenue: identify the top 10 customers by total revenue (Quantity × Price).
3. Monthly Sales Trends: analyze monthly sales trends by aggregating total sales (Quantity × Price) for each month.
4. Best-Selling Products: find the top 5 products with the highest quantity sold.
5. Employee Sales Contribution: calculate the total sales contributed by each employee.
Deliverables
1. SQL scripts for all five questions (one .sql file containing your answers to all 5 questions).
2. A Tableau workbook (.twbx file).
To export the result tables from MySQL Workbench:
1. Right-click anywhere in the Results Grid.
2. Select Export Results from the context menu.
3. In the Save File dialog box:
· Choose a location to save the file.
· Name the file (e.g., sales_by_country.csv).
· Ensure the file type is set to .csv (Comma-Separated Values).
· Click Save.
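As a sketch of the query-and-export workflow (not part of the required submission), the following Python script runs a Question 1-style query and writes the result straight to .csv. The table and column names (Orders, OrderDetails, Products, Customers, Price) are assumptions about the provided schema; check Northwind_DB.sql for the exact names before running. It requires the mysql-connector-python and pandas packages, and pandas may warn that it prefers an SQLAlchemy connection.

import mysql.connector
import pandas as pd

# connection details are placeholders; substitute your own credentials
conn = mysql.connector.connect(host="localhost", user="root",
                               password="your_password",
                               database="northwind")

# total sales per country, assuming Price lives on the Products table
query = """
SELECT c.Country, SUM(od.Quantity * p.Price) AS TotalSales
FROM Orders o
JOIN OrderDetails od ON od.OrderID = o.OrderID
JOIN Products p      ON p.ProductID = od.ProductID
JOIN Customers c     ON c.CustomerID = o.CustomerID
GROUP BY c.Country
ORDER BY TotalSales DESC;
"""

# pandas runs the query and returns a DataFrame ready to write out as .csv
df = pd.read_sql(query, conn)
df.to_csv("sales_by_country.csv", index=False)
conn.close()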
COMP10001 Foundations of Computing
Final Exam, Semester 1 2023
Instructions to Students: This exam contains 100 marks, and counts for 50% of your final grade. Be sure to write your student number clearly in all places where it is requested. This assessment is closed book, and you may not make use of any printed, written, electronic, or online resources. All questions should be answered in the spaces provided on the exam paper. There is also an overflow page available at the end of the paper. If you write any answers there, be sure to also write the corresponding question number. There are a number of places where you are asked for a single Python statement, but two or more answer lines are provided. In these cases you may draft your answer in one line and then copy it neatly to another line within the answer boxes if you wish, but if you do so you must then cross out the first draft answer. You must not communicate with any other student in any way from the moment you enter the exam venue until after you have left the exam venue. All phones and other network, communication, and electronic devices must be switched completely off while you are in the exam room. All material that is submitted as part of this assessment must be completely your own unassisted work, and undertaken during the time period allocated to the assessment. Calculators and dictionaries are not permitted. In your answers you may make use of built-in functions, but may not import functions from any other libraries. You may use any blank pages to prepare drafts of answers, but you must copy those answers into the correct answer boxes before the end of the exam.
Question 1 (20 marks)
Each subquestion is worth 2 marks. To get full marks you must specify both the correct value and be clear in your answer that you understand what the type will be. If you think an expression is not legal according to the rules of Python, you should write "Error" in the box. If you want to include any blanks in output strings, draw one symbol for each blank character.
(a) 2 + 3 * 4 // 5
(b) 4 - 12 % 5 * 3
(c) [1, 3, 4] * 3
(d) "a" + "fine" + "day"
(e) "sunshine"[2:5]
(f) (1, 2, 5, 4, 8, 7)[2:][1]
(g) sorted("raining")
(h) [x/2 for x in range(1,5)]
(i) set("apples") - set("peas")
(j) (False,) * (True + True)
Question 2 (20 marks)
Each subquestion is worth 4 marks. There are two lines provided for each answer, but you should give a single Python assignment statement if you can. If your answer requires more than one assignment statement you will be eligible for partial marks. (An illustrative sketch for part (a) follows this question.)
(a) Suppose that vals is a Python list. Give a Python assignment statement that assigns True to even_size if vals has an even number of items in it, and assigns False if not.
(b) Suppose that vals is a Python list of numbers. Give a Python assignment statement that assigns True to all_equal if all of the values in vals are the same, and assigns False if not.
(c) Suppose that n is a positive integer. Give a Python assignment statement that creates a list list_of_tup containing n tuples, with each tuple containing n values, all of which are zeros.
(d) Suppose that text is a Python string. Give a Python assignment statement that assigns the number of digit characters in text to the variable n_digits.
(e) Suppose that nums is a Python list of numbers. Give a Python assignment statement that creates a new version of nums in which 1 has been added to the first element in nums, 2 has been added to the second element, 3 to the third element, and so on through the remaining elements.
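As an illustration of the expected answer shape (a single assignment statement), here is one possible answer for part (a) only; it is a sketch, not the official solution.

vals = [3, 1, 4, 1, 5, 9]           # any example list (hypothetical data)
even_size = (len(vals) % 2 == 0)    # True here, since len(vals) is 6
print(even_size)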
Question 3 (15 marks)
The following function reformats text so that input lines of any length are formed into a paragraph in which all of the lines are of roughly equal length, except for the last one.

def formatter(lines, linelen=DEF_LINE_LEN):                       # 01
    '''Takes input `lines` and restructures their words into
    output lines that are as long as possible without exceeding
    `linelen`, creating a left-justified formatted paragraph.
    Adjacent whitespaces are replaced by single blanks; output
    lines start/end with non-blanks; and too-long words are
    placed on lines by themselves and allowed to break the
    right margin.'''
    # first break the input into a sequence of words
    XXXXXXXXXX                                                    # 02
    for line in lines:                                            # 03
        XXXXXXXXXX                                                # 04
        all_words += words                                        # 05
    # then build the output lines
    out_lines = []                                                # 06
    out_line = ""                                                 # 07
    XXXXXXXXXX                                                    # 08
        # try and fit the next word
        XXXXXXXXXX                                                # 09
            # can't place next word, so need to start a new line
            out_lines.append(out_line)                            # 10
            out_line = ""                                         # 11
        # now know we should have space in out_line
        if len(out_line) > 0:                                     # 12
            out_line += " "                                       # 13
        out_line += word                                          # 14
    # might still have a partial line
    XXXXXXXXXX                                                    # 15
        out_lines.append(out_line)                                # 16
    # all done, time to finish up
    return out_lines                                              # 17

There are five locations in the function where one line of Python code has been replaced by XXXX symbols. The number of X's should not be used as an indication of the length of the text that has been covered up. In the boxes below, write the Python statement that has been covered up at each of the named locations. Each subquestion is worth 3 marks. (One plausible completion is sketched after this question.)
(a) What has been covered up at line 02?
(b) What has been covered up at line 04?
(c) What has been covered up at line 08?
(d) What has been covered up at line 09?
(e) What has been covered up at line 15?
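For reference, here is one plausible completion of formatter(), shown as a full runnable function with the previously covered-up lines marked. It is consistent with the comments and indentation above, but it is a sketch, not the official answer key; the value of DEF_LINE_LEN is an assumption.

DEF_LINE_LEN = 72   # assumption: the real constant is defined elsewhere

def formatter(lines, linelen=DEF_LINE_LEN):
    # line 02: start with an empty word list
    all_words = []
    for line in lines:
        # line 04: split on any run of whitespace
        words = line.split()
        all_words += words
    out_lines = []
    out_line = ""
    # line 08: place the collected words one at a time
    for word in all_words:
        # line 09: flush the current line if the next word will not fit
        # (an empty out_line always accepts the word, so a too-long word
        # lands on a line by itself and may break the right margin)
        if len(out_line) > 0 and len(out_line) + 1 + len(word) > linelen:
            out_lines.append(out_line)
            out_line = ""
        if len(out_line) > 0:
            out_line += " "
        out_line += word
    # line 15: a partial final line still needs to be emitted
    if len(out_line) > 0:
        out_lines.append(out_line)
    return out_lines

print(formatter(["hello   world", "this is a test"], linelen=11))
# -> ['hello world', 'this is a', 'test']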
Question 4 (15 marks)
Consider the following Python function, which is designed to find the most frequently occurring vowel in some given string text. There are five vowels in English: a, e, i, o, and u.

def most_freq_vowel(text):                                        # 01
    '''Return the most frequently occurring vowel in `text` in a
    case insensitive manner (that is, 'a' should be counted the
    same as 'A'). If there is a tie, return the vowel that
    appears first in the English alphabet, and if there are no
    vowels at all, return None.'''
    vowels = "eeiou"                                              # 02
    counts = {}                                                   # 03
    for char in text:                                             # 04
        if char in vowels:                                        # 05
            counts[char] += 1                                     # 06
    if len(counts) != 0:                                          # 07
        return None                                               # 08
    max_count = max(counts.values())                              # 09
    for vowel in vowels:                                          # 10
        if counts[vowel] > 0 and counts[vowel] == max_count:      # 11
            return vowel                                          # 12
    return None                                                   # 13

Unfortunately, the function contains a number of errors. The subquestions on the next two pages are designed to help you first identify those errors, and then correct them.
(a) [4 marks] Trace the computation that is performed for the function call most_freq_vowel("hello") to determine what is returned. Show both the value and type in the answer that you provide. Or, if you think an exception will be raised before the function returns, indicate in English what error will arise (for example, "divide by zero", or "index out of range", or "invalid method for type int", and so on). Now write the output that should have been returned for that input if the function was working correctly. Finally, identify a minimal fix for this error, by giving exactly one line number in the code that is to change, and then providing one replacement line that corrects the error.
(b) [4 marks] Next, trace the function call most_freq_vowel("fly") to determine what is returned (or, if the function does not return, describe the error that occurs), providing the same details required in part (a). Now write the output that should have been returned for that input if the function was working correctly. Finally, identify a minimal fix for this error, by giving exactly one line number in the code that is to change, and then providing one replacement line that corrects the error.
(c) [4 marks] There are two further errors present in the function that are not revealed by the tests shown in parts (a) and (b) of this question. In answering this part of the question, you should assume that the errors already identified in parts (a) and (b) have been fixed. Write a function call that would expose one of these other two errors (you may choose either). Describe what the result of that function call should be, and what it would actually be as a result of this error. Finally, identify a minimal fix for this error, by giving exactly one line number in the code that is to change, and then providing one replacement line that corrects the error.
(d) [3 marks] Now locate the other error in the function, and briefly describe it in one sentence. You do not need to give a function call that would expose this error. Now identify a minimal fix for this error, by giving exactly one line number in the code that is to change, and then providing one replacement line that corrects the error.
Question 5 (30 marks)
Recall the Matching Game from Assignment 1, which featured a number of colored pieces on a two-dimensional board, with the board represented in Python by a list of lists. Each piece was represented as a string containing a single upper-case character between 'A' and 'Y'. The character 'Z' was used to indicate a blank position on the board. For example, a board might have four columns and three rows and look like this:

board = [['B', 'G', 'B', 'Y'],
         ['G', 'B', 'Y', 'Y'],
         ['G', 'G', 'Y', 'Z']]

In this question you will implement a number of "powerups" – functions that manipulate the board in ways that would not normally be permitted by the rules of the game. You are not required to include comments or docstrings in your functions, but may if you wish.
(a) [6 marks] Write a Python function row_destroyer(board, row) that takes two parameters: board, a list of lists representing a game board; and row, the index position of a row on the board. You may assume that row has a valid value for the given board. Your function should alter the board so that all of the pieces in the row specified by row are replaced by blanks (that is, the character 'Z'). For example:

>>> board = [['B', 'G', 'B'], ['G', 'B', 'Y'], ['G', 'G', 'Y']]
>>> row_destroyer(board, 1)
>>> print(board)
[['B', 'G', 'B'], ['Z', 'Z', 'Z'], ['G', 'G', 'Y']]

(b) [8 marks] Now write a Python function piece_destroyer(board, piece) that takes two parameters: board, a list of lists representing a game board; and piece, a single-character string representing a piece. Your function should alter the board so that all pieces that have the same color as piece are replaced by blanks (that is, the character 'Z'). For example:

>>> board = [['B', 'G', 'B'], ['G', 'B', 'Y'], ['G', 'G', 'Y']]
>>> piece_destroyer(board, 'B')
>>> print(board)
[['Z', 'G', 'Z'], ['G', 'Z', 'Y'], ['G', 'G', 'Y']]

(c) [8 marks] Now write a Python function put_in_order(board) that takes one parameter: board, a list of lists representing a game board.
Your function should rearrange the pieces on the board so that they are placed in alphabetically sorted order starting from the lowest index position. Here are two possible execution sequences:

>>> board = [['A', 'C', 'E'], ['D', 'F', 'H'], ['B', 'G', 'I']]
>>> put_in_order(board)
>>> print(board)
[['A', 'B', 'C'], ['D', 'E', 'F'], ['G', 'H', 'I']]

>>> board = [['B', 'G', 'B'], ['G', 'B', 'Y'], ['G', 'G', 'Y']]
>>> put_in_order(board)
>>> print(board)
[['B', 'B', 'B'], ['G', 'G', 'G'], ['G', 'Y', 'Y']]

(d) [8 marks] Finally, write a Python function connected_destroyer(board, row, col) that takes three parameters: board, a list of lists representing a game board; row, an index value for a row in the board; and col, an index value for a column in the board. You may assume that row and col have valid values for the given board. Your function should modify the board such that the piece at row, col is replaced by a blank ('Z'). Furthermore, any piece that is immediately above, below, left or right of a piece that was replaced, and has the same color as the piece that was replaced, should also be replaced by a blank. This process should continue until no more replacements are possible. Here are three possible execution sequences:

>>> board = [['B', 'B', 'A'], ['B', 'A', 'A'], ['A', 'A', 'B']]
>>> connected_destroyer(board, 0, 0)
>>> print(board)
[['Z', 'Z', 'A'], ['Z', 'A', 'A'], ['A', 'A', 'B']]

>>> board = [['B', 'B', 'A'], ['B', 'B', 'A'], ['A', 'A', 'B']]
>>> connected_destroyer(board, 1, 1)
>>> print(board)
[['Z', 'Z', 'A'], ['Z', 'Z', 'A'], ['A', 'A', 'B']]

>>> board = [['B', 'B', 'B'], ['B', 'A', 'B'], ['B', 'B', 'A']]
>>> connected_destroyer(board, 0, 0)
>>> print(board)
[['Z', 'Z', 'Z'], ['Z', 'A', 'Z'], ['Z', 'Z', 'A']]
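Of the four powerups, part (d) is the only one that needs a non-trivial algorithm. One plausible approach (not the official solution) is an iterative flood fill with an explicit stack, sketched below; a recursive version would work equally well.

def connected_destroyer(board, row, col):
    target = board[row][col]
    if target == 'Z':          # nothing to destroy on a blank square
        return
    stack = [(row, col)]       # positions still to examine
    while stack:
        r, c = stack.pop()
        if (0 <= r < len(board) and 0 <= c < len(board[0])
                and board[r][c] == target):
            board[r][c] = 'Z'  # destroy, then spread to the four neighbours
            stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])

board = [['B', 'B', 'B'], ['B', 'A', 'B'], ['B', 'B', 'A']]
connected_destroyer(board, 0, 0)
print(board)  # [['Z', 'Z', 'Z'], ['Z', 'A', 'Z'], ['Z', 'Z', 'A']]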
Department of Mechanical, Materials and Manufacturing Engineering, Faculty of Science and Engineering
LEVEL: 3
MODULE: Computer Modelling Techniques (MM3CMT or MMME3026 and AERO3009)
ASSIGNMENT: Computer Modelling Techniques – Coursework Part III: FEA 2D Modelling
ISSUE DATE: 26th November 2024
SUBMISSION DATE: 4 pm, Friday 15th December 2024
Computer Modelling Techniques Coursework – Part III: FEA 2D Modelling
Note: The whole coursework (consisting of Parts I, II and III) is equivalent to 30% of the final mark of the module assessment. This coursework, Part III (FEA 2D modelling), will contribute 9% to the final mark.
Question 1. [Total: 50 marks] To be completed individually
A long square cross-section steel beam, with a width of w m for each side, contains a uniformly distributed internal heat source of G kW/m³. The beam is subjected to a constant temperature of 25°C on its outer surfaces, and the steel material that the beam is made from has a constant thermal conductivity, k, of 50 W/m.K. Assume the heat flux density conducted through the steel beam surface heat insulator to be q = 40 kW/m².
Figure Q1. A steel beam with a square cross-section and uniform internal heat source.
Each student is allocated a different value for the beam width, w, and internal heat source, G, which can be found in "MMME3026 AERO3009 CW Part III Student Data 2024-2025.xlsx" on the MMME3026/AERO3009 Moodle page.
Using the symmetries about the centre of the square cross-section, the problem can be reduced and resolved using only a single right-angle triangle, as shown in Figure Q1. Solve the problem using this element by following the steps given in the sub-questions below (a numerical cross-check sketch follows part (e)).
(a) The shape functions for this triangular element can be given as: [shape-function expressions given in the original brief]. Find the kinematic matrix, [B], given: [expression given in the original brief]. [20 Marks]
(b) Find the element stiffness matrix [K], where: [expression given in the original brief]. [10 Marks]
(c) Find the forcing vector {F}, where: [expression given in the original brief]. [10 Marks]
(d) Due to the symmetry of the beam, the square cross-section is modelled by a single element and its stiffness matrix is given by that of the element. Construct the structural stiffness equation. [5 Marks]
(e) Solve the stiffness equation in order to calculate the temperature at the central node. [5 Marks]
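Since the brief's shape-function and matrix expressions are given in the original handout, the following Python sketch is offered only as a cross-check for the hand calculation in parts (a)-(c). It builds the standard linear-triangle conduction matrices; the node coordinates, numbering, and the w and G values are placeholders, so substitute your own allocated data and the brief's definitions.

import numpy as np

k = 50.0        # thermal conductivity, W/m.K (from the brief)
w = 1.0         # beam width, m -- placeholder; use your allocated value
G = 10e3        # internal heat source, W/m^3 -- placeholder value

# nodes of the right-angle triangle from the symmetry reduction
# (an assumed numbering; follow the brief's numbering in your own work)
xy = np.array([[0.0, 0.0],
               [w / 2, 0.0],
               [w / 2, w / 2]])

x, y = xy[:, 0], xy[:, 1]
b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
A = 0.5 * abs(x @ b)                 # element area
B = np.vstack([b, c]) / (2 * A)      # kinematic (gradient) matrix [B]
K = k * A * (B.T @ B)                # element conduction matrix, unit thickness
F = G * A / 3 * np.ones(3)           # consistent load from the uniform source
# (a prescribed edge flux q adds q*L/2 to each node of the loaded edge)

print("B =\n", B, "\nK =\n", K, "\nF =", F)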
Question 2. [Total: 50 marks] To be completed in groups
A thin notched plate, with dimensions shown in Figure Q2, is tested under uniaxial tension with a load of σ0 MPa. The plate is made from a steel with Young's modulus E = 210 GPa and Poisson's ratio ν = 0.25. Unit thickness is assumed. Respond to the sub-questions below in order to resolve the displacements and stresses in this 2D continuum using the ANSYS Mechanical APDL software. Each group is allocated a different value for the load, σ0, which can be found in "MMME3026 AERO3009 CW Part II Student Data 2024-2025.xlsx" on the MMME3026/AERO3009 Moodle page.
When working in groups, if you have any problems with your groupmates (i.e. they are not contributing to the coursework or not responding to your emails), you must contact the lecturer well before the submission due date.
Figure Q2. A thin notched plate under uniaxial tension.
(a) First, identify any possible simplification of the model via symmetry, and re-draw the simplified geometry. Include details of the load application and boundary conditions. You may need to add a constraint at a single node to resist possible rigid body motion in the x-direction. [4 marks]
(b) Note what type of element you will use for the model, and why. [2 marks]
(c) Before reporting any results, confirm the validity of your mesh by performing a mesh convergence study for the y-displacement at point B:
· Report the results of this study using a graph with "y-displacement at B" on the dependent axis, against the "number of elements" in the mesh on the independent axis.
· Plot the results for at least 4 different meshes.
· Also write a sentence or two explaining how many elements are needed for a suitable mesh. In this case, the desired accuracy is within 0.1% for the y-displacement results (you can assess this accuracy by comparing results from different meshes to see whether the change in y-displacement is less than 0.1% when the number of elements is increased; a sketch of this check is given at the end of this question).
(Note: the number of elements in your mesh is displayed in the "Output Window" when you mesh the geometry. It is recommended that you start from a very coarse mesh to best demonstrate this convergence behaviour.) [20 marks]
(d) Now, with greater confidence in the quality of the mesh, report the following values as predicted by the FE model:
· x- and y-displacements at points A, B and C
· Maximum normal stress in the x-direction, including the location of this stress
· Maximum normal stress in the y-direction, including the location of this stress
(Note: you may wish to use images to show the location of these stresses, which can be saved through "PlotCtrls > Capture Image…".) [16 marks]
(e) Lastly, use a "path" to graph the change in y-stresses between points D and E. This should have y-stress as the dependent variable against the distance from D on the independent axis. [8 marks]
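As referenced in part (c), here is a sketch of the convergence check itself: given y-displacements at B recorded from several ANSYS meshes, it computes the percentage change between successive refinements and plots the study. The mesh sizes and displacements below are invented placeholders, not real ANSYS results; substitute the values you record from your own meshes.

import matplotlib.pyplot as plt

n_elems = [40, 160, 640, 2560, 10240]                   # placeholder mesh sizes
uy_B = [-0.0410, -0.0432, -0.0439, -0.0441, -0.0441]    # placeholder values

# percentage change between each successive pair of meshes
for n0, n1, u0, u1 in zip(n_elems, n_elems[1:], uy_B, uy_B[1:]):
    change = abs((u1 - u0) / u1) * 100
    print(f"{n0} -> {n1} elements: change = {change:.3f}%")

plt.plot(n_elems, uy_B, "o-")
plt.xscale("log")
plt.xlabel("number of elements")
plt.ylabel("y-displacement at B")
plt.title("Mesh convergence at point B")
plt.show()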