Project Description

This project is to develop a tool for creating text-based adventure games, commonly referred to as Interactive Fiction. Through a user-friendly interface, the tool will allow users to design rich, interactive worlds by creating maps, objects, and NPCs (non-playable characters) with unique behaviors and dialogue trees. Additionally, the tool may offer multiplayer support, enabling multiple players to explore the game world in real time, along with potential integration of AI-driven dialogue and descriptions to enhance storytelling.

Aims and Objectives

Aims
- Develop a tool that enables users to create their own text-based adventure games.
- Provide a user interface that allows users to easily design and manage the various game components.
- Create a system for designing NPCs with custom descriptions, interactions, and behaviors.
- Generate dynamic descriptions and dialogue for the game.
- Support multiplayer gameplay.

Objectives
- Design and develop the user interface, enabling users to intuitively create game maps, define items, design NPCs, and set up game logic events.
- Implement a map creation feature, allowing users to design grid-based or node-based rooms, define exits, and add interactive elements such as puzzles and doors.
- Create an item creation module, enabling users to define item properties, descriptions, and interactions with other game elements.
- Implement a game event and logic system, allowing users to define triggers and conditions for in-game events, such as using a key to unlock a door or starting a quest.
- Implement multiplayer functionality, allowing multiple players to connect and interact in the same game world in real time.
- Integrate a Large Language Model (LLM) to generate dynamic dialogue and descriptions for the game.
Key Literature & Background Reading

Personal Background

To begin considering the development of a tool for creating text-based adventure games, I first needed an understanding of the content and structure of text-based games, which I gained during my second-year C++ course. For that course I used C++ to develop a battle system for a console-based adventure game, adding a map, enemies, and items as well. The experience from that assignment will help me better understand this project. Additionally, as I have experience using the Unity engine, and Unity itself offers many convenient features, this project will be developed in Unity.

Core Concepts & Project Focus

Text-based adventure games are also known as Interactive Fiction (IF); one of the most famous series is Zork, developed by Infocom [1]. Montfort, in Twisty Little Passages [2], points out that the primary difference between IF and traditional narratives lies in its non-linear, multi-layered narrative structure. Interactive Fiction not only allows players to make choices but also creates multiple possibilities within the narrative through those choices. Storytelling is not limited to linear progression; rather, it relies on players' decisions to propel it forward. Based on this theory, the tool's branching logic can provide users with an intuitive interface, allowing them to easily set up various scenarios and paths. In addition, a functional module that lets users add critical plot points and interactive options ensures that the stories they create are rich in interactivity and impact. This modular design allows users to set different plot developments based on players' choices, enhancing the immersion of the story.

Design Principles

Interactivity is crucial in game design, and this also applies to the design of a text-based adventure game tool. A user-friendly design tool can guide creators to focus on game interactivity and make their development more efficient.
During my research I discovered several text-based adventure game tools created by others, which also follow established game design principles. Rules of Play emphasizes that "game mechanics can be divided into independent modules, allowing different game elements to collaborate and create diverse interactions" [3]. Quest, designed by Warren [4], adopts a modular design approach: a command editing interface used to define and edit commands, objects, and scenes within the game, allowing creators to manage these elements individually. Through modular design, creators can break down complex game interactions into manageable parts. Additionally, "meaningful choices" is another key principle in interactive design: players' choices should lead to different consequences, providing a sense of control and immersion [3]. In this tool, that can be achieved through a branching narrative tree, enabling creators to design different plot directions or endings for each player choice. The TextAdventureToolkit, developed by Filer [5], uses a flowchart interface to script stories. Thus, this tool will combine both approaches: modular design for characters, objects, and scenes, allowing creators to edit each element individually; and a flowchart for story progression and player choices, giving creators a more intuitive way to design player choice paths.

AI Integration and LLM Application

Previous research has shown that AI can accomplish three tasks in games: play a game, design a game, and model the human players [6]. Large Language Models (LLMs) are often presented as conversational agents and writing assistants, making them well-suited to the development of text-based adventure games. Therefore, this project will utilize the ChatGPT API to assist creators in game development. Additionally, LLMs can act as game analysts [7], simulating and analyzing player experiences and behaviors.
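The branching-narrative tree discussed above can be made concrete with a tiny sketch (shown in Python for brevity, even though the tool itself will be built in C#; all names here are illustrative and not part of the proposed design):

```python
# A minimal sketch of a branching narrative: each node holds a passage
# of text plus the labelled choices that lead onward.

class Node:
    def __init__(self, text, choices=None):
        self.text = text
        self.choices = choices or {}   # choice label -> next Node

    def choose(self, label):
        """Follow the choice with the given label."""
        return self.choices[label]

ending = Node("You open the door and step into daylight.")
start = Node("A locked door blocks your way.",
             {"use key": ending,
              "turn back": Node("You retreat into the dark.")})
```

A creator's flowchart then maps directly onto such nodes: each box is a `Node`, each arrow a labelled choice.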
The integration of AI will help creators better understand player perspectives during development.

Multiplayer Functionality

Developing multiplayer functionality will be one of the most challenging aspects of this project, as I have limited prior knowledge of multiplayer text-based adventure games and no experience developing multiplayer features. However, through research, I found that Unity released Netcode for GameObjects in July 2024 [8], which can enable multiplayer functionality. Additionally, a third-party package, Mirror [9], is a viable alternative if Netcode for GameObjects proves unsuitable.

Development Process & Method

Since I have a clear goal for the Adventure Game Creator (Text-Based) tool and will be working independently, I plan to use an incremental development approach. This method allows me to gradually build, test, and refine individual features until the tool fully meets my aims and objectives. The process will be simple and flexible: I will start by creating a basic prototype with the essential features and continuously refine it through cycles of testing and improvement. Once the tool achieves the desired functionality, I will finalize it into the final product. The user interface and the system for creating game elements and game logic are defined as the basic functions; the AI integration and multiplayer functionality are the advanced ones. For development, I will use Unity3D as the main platform, with C# for programming. For version control, I will use GitHub to maintain backups of each iteration, along with regular local backups on my laptop.

Data Sources

In this project, I will use the open-source ConvAI dataset [10] to assist in generating dialogue for NPCs within the text-based adventure game creation tool. This dataset provides a variety of dialogue patterns and responses, helping the tool generate natural and engaging NPC interactions, which enhances the immersion of the user-created game worlds.
Testing & Evaluation

Because I have chosen an incremental development process, unit tests will be conducted after each feature is developed to verify its functionality. Regression tests will be performed in subsequent increments to ensure that new features do not impact the behavior of existing ones.

Test Tool Selection: I am currently researching testing tools suitable for C# and Unity3D. The initial candidates are:
- NUnit [11]: a commonly used C# unit testing framework, ideal for verifying the correctness of game logic and functions.
- Unity Test Framework [12]: Unity's built-in testing framework, which can be used for integration tests on scripts and components in the game to ensure module compatibility.
- Play Mode Tests [13]: designed to test the game's actual runtime experience, ensuring stability and that the user experience meets expectations.

During development, I will experiment with these tools and select the most suitable option based on project needs. Extensive testing will be required to ensure the proper functioning of game logic and event triggers, such as item pickups, quest initiation, and room transitions, so that they meet user expectations.

UI and Performance Testing: After completing all features, I will perform UI and performance testing to ensure the tool's interface is smooth and user-friendly. These tests will help confirm that the tool is technically feasible and meets the project's requirements.

Project Ethics & Human Participants

The Data Category will be B. The ConvAI dataset will be used under its open-source license, solely for academic and research purposes. As the dataset contains no identifiable information, privacy risks are minimal. However, I will ensure responsible usage within the project scope. Any additional dialogue data required will be synthetically generated for testing only, strictly adhering to licensing requirements with appropriate attribution in the project documentation.
The Human Participant Category will be 0: there is no use of human participants in any activity.

BCS Project Criteria

Application of Practical and Analytical Skills
This project will comprehensively test my practical and analytical skills, from the initial task breakdown to the subsequent programming (C#), user interface design (UI/UX), game logic, and network integration. Through an incremental development approach, I can gradually build and validate each feature at every stage, achieving technical rigor and reliability (see the "Development Process & Method" section).

Innovation and/or Creativity
The innovation of this project lies in designing a user-friendly tool that allows users to customize the world and story elements of text-based adventure games. The tool's extensibility (such as potential multiplayer functionality and dynamic content generation) introduces new possibilities for creating text-based adventure games, enhancing the creative experience for users.

Integration of Information, Ideas, and Practice to Deliver a High-Quality Solution and Solution Evaluation
I will integrate various development tools (such as Unity3D and NUnit) to build and test the project, ensuring the quality and consistency of each module and function. Upon project completion, I will conduct a comprehensive evaluation to verify whether the tool meets the initial design requirements, including functionality, user experience, and performance (see the "Testing & Evaluation" section).

Meeting Practical Needs in a Broader Context
The text-based adventure game creation tool has broad applications in game design, education, and interactive storytelling. It provides creators with a low-barrier way to realize their interactive stories, meeting current practical needs and trends in game creation and education.
Ability to Independently Manage a Major Project
This project is independently managed by me, following an incremental development approach to ensure clear goals and schedules at each stage. I will strictly follow the project plan, conducting regular reviews and adjustments to ensure on-time delivery and adherence to the expected standards.

Critical Self-Evaluation of the Process
Throughout each stage of development, I will conduct self-evaluations, particularly reflecting on the development approach and the effectiveness of feature implementations to identify opportunities for improvement. After project completion, I will perform a final self-evaluation to summarize the project's successes and lessons learned, providing valuable experience for future projects.

Through these approaches, this project will comprehensively meet the six BCS project criteria, demonstrating not only my technical capabilities but also my qualities in innovation, project management, and self-assessment.
7SSGN110 Environmental Data Analysis | Practical 6 | Time Series Analysis – Time Domain

1. Introduction and data processing

The aim of this practical is to introduce time series decomposition and autocorrelation. The case study data are monthly average atmospheric CO2 concentrations measured at the Mauna Loa Observatory, Hawaii, USA, a brief overview of which is provided by Keeling (2008). Data from the Mauna Loa Observatory are available from the National Oceanic and Atmospheric Administration (NOAA) Earth System Research Laboratory Global Monitoring Division website: http://www.esrl.noaa.gov/gmd/obop/mlo/. We have downloaded the relevant data for you and posted this as an Excel file on KEATS. Download and open this file.

As always, the first thing we should do when working with a new data set is to quickly plot the data to familiarise ourselves with it and check for any inconsistencies or potential problems:

· Create a Single Line Plot that displays the value column. To show years on the x-axis, right-click on your graph → Select Data → Edit the horizontal axis → Column B (year)
· Change the scale of the vertical axis so that you can clearly see the full variation of the time series. You should be able to immediately notice both an increasing trend and a seasonal pattern.
· What do you notice about the pattern early in the time series?

Hopefully, you notice that the early values in the time series do not fit the seasonal pattern as well as the remainder of the time series. Look at the data to see why this is. The data we downloaded have values for 1969 and 1970 before skipping to 1976 (check you can see this in your raw data). To make our data easier to handle we will work here with data for the complete years 1977-2020:

· Remove the rows of data for 1969, 1970 and 1976 (select rows, right-click, delete). You should now have data for all months of 44 years.
· How could you quickly check you really do have data for all months in these years?
[Hint: for 44 years, how many data values would you expect? How can you count values quickly to check?]

2. Time series decomposition

2.1. Time series decomposition in Excel

Time series can be understood as being composed as shown by Eq. 1:

x_t = m_t + s_t + e_t   (Eq. 1)

that is, each observed value x_t is the sum of a trend m_t, a seasonal effect s_t, and a residual e_t. In the following we will see how we can decompose a time series into these components in Excel to help us describe and better understand the data.

a) Trend: Running Mean

One way of estimating a trend in a time series is to use a running mean. This approach calculates the average value of a variable across a sliding window of a specific width (duration). We usually aim for this window to be of equal size above and below the original data point, which is straightforward for an odd-sized window but requires some additional thought when the window is even in size. For example, for a window of size five the running mean m at time t calculated from values x would be given by Eq. 2:

m_t = (x_{t-2} + x_{t-1} + x_t + x_{t+1} + x_{t+2}) / 5   (Eq. 2)

Thus, the running mean is calculated from the current value, the two data points before, and the two values after. To ensure an even-sized window is symmetrical we use a slightly different approach. For example, for a window of size six, the running mean would be given by Eq. 3:

m_t = (x_{t-3}/2 + x_{t-2} + x_{t-1} + x_t + x_{t+1} + x_{t+2} + x_{t+3}/2) / 6   (Eq. 3)

Thus, the running mean is calculated as before, except we count only half the value of the data points at the edges of the window.

To create a running mean for the Mauna Loa data we will use a window size of 12. To do this:
I. In column E add a sensible column title for a running mean with window size 12
II. Assuming data values are in column D, enter the following formula in cell E8: =SUM(D2/2, D3:D13, D14/2)/12
III. Copy this formula down to cell E523

Make sure you understand what the formula you've implemented in Excel is doing.
· Why have we used a window of size 12? How would you change the formula to use window size 13? Why have we stopped at row 523?
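For those who prefer code to spreadsheet formulas, the even-window running mean above can also be sketched in Python (an illustration, not part of the practical; the function name is my own):

```python
# Centered running mean with an even window of size 12, mirroring the
# Excel formula =SUM(D2/2, D3:D13, D14/2)/12: the two edge values of
# the 13-cell span are half-weighted so the window stays symmetric.

def centered_running_mean_12(values):
    """Return a list aligned with `values`; positions without a full
    window are None (like the blank cells at each end in Excel)."""
    n = len(values)
    out = [None] * n
    for t in range(6, n - 6):
        total = (values[t - 6] / 2
                 + sum(values[t - 5:t + 6])
                 + values[t + 6] / 2)
        out[t] = total / 12
    return out
```

On a purely linear series the running mean reproduces the series itself wherever the window is complete, which is a handy sanity check against your spreadsheet.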
To check what your running mean looks like, add the data you have created in column E to your existing quick plot of the data (re-create this if you deleted it!):
i. Right-click on the chart
ii. Click Select Data
iii. In Legend Entries (series) click the Add button
iv. Provide the series name Running Mean (12)
v. For the series values, select the entirety of column E

Your plot should look something like Figure 1.
· Note how the running mean line does not extend the full length of the empirical data. Why is this?

Figure 1. Example plot of CO2 data with running mean.

Check your running mean is centred on the data, with an equal number of missing values at the start and end of the time series.

b) Periodicity

We now calculate the periodic signal in the data, using the trend data. First, for each observation we calculate the difference between the observed value and the trend (running mean) value. Calculate this difference in column F for all the running mean values (and provide a relevant column header label, e.g. "diff"). After you have done this, we determine the average difference for each of the 12 months using a pivot table:
I. Select the entire data block (all columns and rows, including the headers)
II. Go to menu: Insert – Pivot Table – From Table/Range, on a New Worksheet
III. On this new PivotTable worksheet, on the right-hand side:
· drag the 'month' field from the top into the 'Rows' panel below
· drag the 'diff' field from the top into the 'Values' panel at the bottom-right
· change the Value Field Setting to calculate an average instead of a sum
IV. On the left you should now see the average difference for each of the 12 months

Look at the seasonal effect values you have just calculated.
· You should notice some are positive and some are negative. Why is this?

c) Residuals

Finally, we will calculate the residuals at each time t once the trend and seasonal effect have been removed.
First, populate column G with repeated copies of the seasonal cycle we compiled in the previous step. You may have to "hard copy" the values from the pivot table first. The residuals are then the remainder after subtracting the trend and the periodic signal from the original value (column D minus column E minus column G). A plot of these residuals should look something like Figure 2.

Figure 2. Residuals from time series decomposition of Mauna Loa data.

These residuals are now de-trended and should be 'stationary', such that the probabilistic character of the series does not change over time (i.e. any section of the time series is "typical" of every other section; note, though, that there are different 'degrees' of stationarity). Such stationary data can then be subjected to auto-regressive and/or moving average modelling, which can be used for forecasting (e.g. see Von Storch and Zwiers 2001). These techniques are beyond the level of this practical but you may be interested in pursuing them further elsewhere.

2.2. Time series decomposition in R

We will now briefly introduce you to a powerful R function, decompose(), which takes a time series and splits it into components as we did above. Before you start the data analysis in R, ensure that you have:
• Created a folder on your computer for this week's practical
• Set the working directory for the practical work in RStudio
• Loaded the required packages. We will be using the package timeSeries
• Read the csv file with the CO2 data into R
Hint: Refer to Practical 2, section 4.1 for guidance on how to do these things.

We start by cleaning the data just as we did in Excel. First plot the data to visualise and notice the oddities prior to 1976, then remove the odd data from the dataframe (the subsetting below assumes the year column is named 'year'):

plot(mlo$value, type = "l")
mlo <- mlo[mlo$year > 1976, ]

Now we can create a time-series object using the data we have loaded; for a monthly series starting in January 1977:

mlo.ts <- ts(mlo$value, start = c(1977, 1), frequency = 12)
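For comparison, the whole hand-built decomposition of section 2.1 can be sketched in Python as well (an illustration only, not part of the practical; it assumes a monthly series starting in January):

```python
# Additive decomposition as built by hand in section 2.1:
# trend = 12-month centered running mean (half-weighted edges),
# seasonal effect = average (observed - trend) per calendar month,
# residual = observed - trend - seasonal.

def decompose_monthly(values):
    n = len(values)
    trend = [None] * n
    for t in range(6, n - 6):  # half-weight the window edges
        trend[t] = (values[t - 6] / 2 + sum(values[t - 5:t + 6])
                    + values[t + 6] / 2) / 12
    # Average the (observed - trend) differences by calendar month.
    seasonal = {}
    for m in range(12):
        diffs = [values[t] - trend[t] for t in range(m, n, 12)
                 if trend[t] is not None]
        seasonal[m] = sum(diffs) / len(diffs)
    residual = [None if trend[t] is None
                else values[t] - trend[t] - seasonal[t % 12]
                for t in range(n)]
    return trend, seasonal, residual
```

On a purely linear series the seasonal effects come out at zero and the residuals vanish wherever the trend is defined, which mirrors what you should see in the spreadsheet.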
Math Stats practice problems for final exam
December 15, 2024

Problem 1
Suppose X0 = 0 and X1, . . . , Xn are given by the following Gaussian autoregressive process, i.e., the AR(1) model:
Xi = θXi−1 + ϵi
where θ ∈ [0, 1) is unknown, and the errors ϵi ∼ N(0, σ²) are i.i.d. with σ² > 0 known.
(a) Log-likelihood Function
(b) MLE of θ
(c) Least Squares Estimate of θ
Alternatively, letting:

Problem 2
Suppose you have a true model y = αe^(βx+ϵ), where ϵ ∼ N(0, σ²), and the following data:
(a) Least Squares Estimates of α and β
Transform the data to get a linear model:
ln(y) = ln(αe^(βx+ϵ)) = ln(α) + βx + ϵ
Let z = ln(y) and γ = ln(α). The estimated model is:
(b) 95% Confidence Interval for β

Problem 3
In a sociological study, 784 high school students were asked which two of ten given attributes were most desirable in their fathers. The following table summarizes the number of male and female students who included "being a college graduate" among the two. Did male and female students value this attribute differently?

Test for Homogeneity
Null hypothesis: H0 : p(Male, included) = p(Female, included), p(Male, omitted) = p(Female, omitted)
Test statistic:
The number of degrees of freedom is (2−1)(2−1) = 1. With α = 0.05 and 1 degree of freedom, the rejection region for a χ²(1) test is X² > 3.841. We reject H0 at α = 0.05, and even at α = 0.005.

Problem 4
A survey was conducted to determine whether there is an association between age and preference for a type of movie. The responses are summarized in the following contingency table. Are age and movie preference related?

Test for Independence
• Null hypothesis (H0): Age and movie preference are independent.
• Alternative hypothesis (H1): Age and movie preference are not independent.
The expected count for each cell is E = (row total × column total) / grand total.
The Pearson χ² test statistic is X² = Σ (O − E)² / E.
The degrees of freedom for this test are (2−1)(3−1) = 2. Using a significance level of α = 0.05, the critical value from the chi-squared distribution table is 5.99.
Since X² ≈ 10 > 5.99, we reject the null hypothesis.

Problem 5
Independent samples have been collected from two Gaussian distributions with the same variance. Use a t-test to test whether the means of the two distributions are equal. The data are as follows: the sample sizes are n1 = 25 and n2 = 30.
(a) Hypotheses
• Null hypothesis: H0 : µ1 = µ2
• Alternative hypothesis: H1 : µ1 ≠ µ2
(b) Test Statistic
where:
(c) Rejection region
The degrees of freedom for a t-test are n1 + n2 − 2 = 25 + 30 − 2 = 53. Using a significance level of α = 0.05, the critical value for |T| with df = 53 is slightly greater than 2; since our statistic does not exceed it, we cannot reject the null hypothesis at this significance level.

Problem 6
A dataset contains the following values: 5, 7, 8, 12, 15, 18, 20, 21, 23, 25, 28, 30. Find the median and interquartile range.
• First Quartile (Q1): median of the lower half {5, 7, 8, 12, 15, 18} = 10
• Median (Q2): median of all the data = 19
• Third Quartile (Q3): median of the upper half {20, 21, 23, 25, 28, 30} = 24
So the IQR is Q3 − Q1 = 24 − 10 = 14.
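The quartile convention used in Problem 6 (median of each half of the sorted data) can be double-checked with a short script; this is a worked check, not part of the original problem set:

```python
# Quartiles by the median-of-halves method used in Problem 6.

def quartiles(data):
    """Return (Q1, median, Q3) for the sorted data."""
    xs = sorted(data)
    n = len(xs)

    def median(vals):
        k = len(vals)
        mid = k // 2
        return vals[mid] if k % 2 else (vals[mid - 1] + vals[mid]) / 2

    lower, upper = xs[:n // 2], xs[(n + 1) // 2:]
    return median(lower), median(xs), median(upper)

q1, med, q3 = quartiles([5, 7, 8, 12, 15, 18, 20, 21, 23, 25, 28, 30])
# q1 = 10, med = 19, q3 = 24, so IQR = q3 - q1 = 14
```

Note that other quartile conventions (e.g. interpolation-based ones used by some software defaults) can give slightly different values on the same data.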
Module Code and Title: Database Development and Design (DTS207TC)
School: School of AI and Advanced Computing
Assignment Title: 002: Assessment Task 2 (CW), Individual Coursework
Submission Deadline: 23:59, Dec 24th, 2024 (Friday)
Weight: 40%
Maximum Marks: 100

Overview & Outcomes

This coursework will be assessed against the following learning outcomes:
C. Illustrate the issues related to Web technologies and DBMS, and XML as a semi-structured data representation formalism.
D. Identify the principles underlying object-relational models.

Submission

You must submit the following files to LMO:
1) A report named Your_Student_ID.pdf.
2) A directory containing all your source code, named Your_Student_ID_code.
NOTE: The report shall be in A4 size, size 11 font, and shall not exceed 8 pages in length. You may include only key code snippets in the report; the complete source code can be placed in the attachment.

Assessment Tasks

We have some stock-related datasets in XML format (attached) that we would like to put on a website for users to query.

1) Browse through the XML files in the attachment, and define a DTD and an XML Schema for them. Use both definitions to validate the XML files and manually fix any potential errors. Extract the file headers from the XML Schema and convert the XML to CSV. Open the generated CSV with any editor and take a screenshot. (20 Marks)

2) Use flask_sqlalchemy in Flask to build an ORM for the CSV from task 1), and import the data into PostgreSQL. Manually draw an Entity-Relationship diagram for the three tables, take a photo, and include it in the report. (20 Marks)

3) Use Flask to implement the required web page as shown in the diagram, which includes a table with the necessary fields. To differentiate yourself, you can set the form style to your preference; take a screenshot.
(20 Marks)

4) Based on task 3), add filtering functionalities for stock name, start time, and end time, implementing a page as shown below. Note that one or more of these filter conditions can be empty, meaning no filtering on that condition. To differentiate yourself, you can set the form style to your preference; take a screenshot. (20 Marks)

5) Use the provided testing program to perform a performance test on task 4). The program uses a POST request to query with all conditions empty, which should return the full result set. As long as the returned content is correct, you may optimize performance in any way. Take a screenshot of the test results. Ideal performance should be no more than 0.2 seconds per query. (20 Marks)

NOTE:
a. Provide a brief introduction to the program logic in your own words; including code snippets is encouraged, but please do not paste the entire program into the report without explanation.
b. For your full academic development, the use of generative AI to gain inspiration is allowed for this assignment; however, out of mutual respect, please do not paste its output directly into your assignment and submit it.
c. To prove that you have indeed completed this assignment and did not rely solely on generative AI, please provide screenshots of the running results for each task.
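One possible shape for the XML-to-CSV step in task 1), using only the Python standard library, is sketched below. The record layout (a repeated child element with `name` and `price` children) is a placeholder assumption; substitute the elements your XML Schema actually defines:

```python
# Sketch: flatten repeated child records of an XML root into CSV text.
# The field list comes from the headers you extracted from the Schema.
import csv
import io
import xml.etree.ElementTree as ET

def xml_to_csv(xml_text, fields):
    """Return CSV text with one row per child record of the root."""
    root = ET.fromstring(xml_text)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fields)
    writer.writeheader()
    for record in root:
        # Missing child elements become empty CSV cells.
        writer.writerow({f: record.findtext(f, default="") for f in fields})
    return out.getvalue()
```

For example, `xml_to_csv("<stocks><stock><name>ACME</name><price>3.5</price></stock></stocks>", ["name", "price"])` yields a header line followed by one data row; writing the returned text to a `.csv` file gives the artifact to screenshot.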
BUSI2157-E1
A LEVEL 2 MODULE, AUTUMN SEMESTER 2020-2021
MANAGEMENT ACCOUNTING

Section B (40 marks)
Answer ONE question

Question 11
Process III is part of the Super Group. The following information is for the month ending October 2020.

Opening WIP: 2,000 units at £25,750
Transfer from Process II: 53,000 units at £411,500
Transferred to Process IV: 48,000 units
Closing stock for Process III: 5,000 units
Units scrapped: 2,000 units
Direct materials added in Process III: £197,600
Direct wages: £97,600
Production overheads: £48,800

Degree of completion:
            Opening Stock   Closing Stock   Scrap
Materials        80%             70%         100%
Labour           60%             50%          70%
Overheads        60%             50%          70%

The normal loss in the process was 5% of completed production, and scrap was sold at £3 per unit.

Required:
(a) Use the first-in-first-out (FIFO) method to value equivalent production. (12 marks)
(b) Prepare process cost accounts for Process III. (10 marks)
(c) Prepare any other loss/gain accounts. (8 marks)

The final products of Process III are also produced in other divisions within the Super Group, and a limited quantity can be purchased from outside the group. The product is currently charged out by Process III at total actual cost plus a 20 per cent profit mark-up.

(d) Explain why the current transfer pricing method used by Process III is unlikely to lead to:
(i) Maximisation of group profit
(ii) Effective divisional performance measurement. (10 marks)
(Total 40 marks)

Question 12
ABC Limited, producing a range of minerals, is organised into two trading groups: one handles wholesale business and the other sales to retailers. One of its products is moulding clay. The wholesale group extracts the clay and sells it to external wholesale customers as well as to the retail group. The production capacity is 2,000 tonnes per month, but at present sales are limited to 80% of capacity: 1,000 tonnes wholesale and 600 tonnes retail. The transfer price was agreed at £200 per tonne, in line with the external wholesale trade price.
The retail group produces 100 bags of refined clay from each tonne of moulding clay, which it sells at £4.00 a bag. It would sell a further 400 tonnes if the retail trade price were reduced to £3.20 a bag. Other data relevant to the operation are:

                          Wholesale group (£)   Retail group (£)
Variable cost per tonne           70                   60
Fixed cost per month           100,000              40,000

Required:
(a) Prepare an estimated profit statement in variable costing format, showing contribution margin, for the month of December for each group and for ABC Limited as a whole, based on a transfer price of £200 per tonne, when producing at:
i) 80% capacity;
and calculate the revised profit for each group and for ABC Limited as a whole when producing at:
ii) 100% capacity, utilising the extra sales to supply the retail trade. (22 marks)
(b) Comment on the results achieved under (a). (5 marks)
(c) Suggest an alternative transfer price for the retail sales which would provide a greater incentive for increasing sales, detailing any problems that might be encountered. (5 marks)
(d) Explain how managers would determine a range of acceptable transfer prices. (8 marks)
(Total 40 marks)

Section C (40 marks)
Answer ONE question

Question 13
The XYZ Partnership operates a deer farm, selling high-quality venison in both domestic and export markets. Due to a temporary recession in the market for venison, the farm has surplus capacity, which the partnership is anxious to utilise for short-term profit if at all possible. The Ministry of Agriculture and Fisheries has asked the partnership to provide a quotation for the use of the farm's facilities and livestock in order to undertake a one-year experiment into the effect of different feeding methods. XYZ Partnership's management is keen to obtain this work, as it appears to be an ideal way of profitably using the existing spare capacity.
The following information is available:

Alternative feed
This will be supplied as required by the Ministry of Agriculture and Fisheries, resulting in a saving in feed purchase costs to the farm of £4,000. However, the farm's existing stock of feed will need to be disposed of at a cost of £1,600, since it cannot be used during the period of the experiment, nor can it be stored until this is complete. Special feeding equipment will be supplied by the Ministry, and the farm will need to remove and store its own equipment during the experiment. This will cost £750 in total but, whilst in storage, the equipment can be refurbished, thereby saving future costs of £200.

Labour costs
The farm will not need to employ any additional workers in order to undertake the experiment. However, the more intensive feeding methods required by the experiment will mean that work which the employees concerned could otherwise do must be subcontracted. If not used on the experiment, the workers concerned could either:
1. Clear some scrubland and make it suitable for grazing, saving subcontracting costs of £2,000; or
2. Undertake repairs to some of the farm's outbuildings, which would save subcontracting costs of £3,400 but would incur material costs of £600.
Irrespective of whether their time is spent on the experiment or on one of the other projects above, 300 hours' work is involved. The farmhands concerned are paid £4 per hour.

Other costs
The Ministry of Agriculture and Fisheries will supply a desktop computer and the necessary software on which to record the results of the experiment. The farm manager will need to attend a training course in London before he can use this recording system; the resulting travel and subsistence costs to XYZ Partnership will be £2,200. While the farm manager is away on this training course, it will be necessary to recall the chargehand from holiday, for which he will receive a special payment of £400.
The deer farm's fixed costs for the period of the experiment are estimated at £28,000 and will increase by £2,700 due to extra administration if the experiment is undertaken.

Quotation
XYZ Partnership's management reckons that a quotation of less than £7,000 is likely to be successful in obtaining the work.

Required:
(a) Calculate the net relevant cost or net relevant benefit to XYZ Partnership of undertaking the experiment, and hence determine the lowest quotation price which is financially viable. (20 marks)
(b) State, and briefly explain, any other relevant considerations which management should bear in mind when making their decision. (10 marks)
(c) Explain why, in relevant costing, contribution theory is used as a basis for providing information relevant to decision-making. (10 marks)
(Total 40 marks)

Question 14
Division B and Division C are two divisions of a large manufacturing company. Whilst both divisions operate in almost identical markets, each division operates separately as an investment centre. Each month, operating statements must be prepared by each division, and these are used as a basis for performance measurement for the divisions. Last month, senior management decided to recharge head office costs to the divisions. Consequently, each division is now required to deduct a share of head office costs in its operating statement before arriving at 'net profit', which is then used to calculate return on investment (ROI). Prior to this, ROI had been calculated using controllable profit only. The company's target ROI, however, remains unchanged at 20% per annum. For each of the last three months, Divisions B and C have maintained ROIs of 22% per annum and 23% per annum respectively, resulting in healthy bonuses being awarded to staff. The company has a cost of capital of 10%.
The budgeted operating statement for the month of July is shown below:

                                              B (£'000)   C (£'000)
Sales revenue                                   1,300       1,500
Less variable costs                              (700)       (800)
Contribution                                      600         700
Less controllable fixed costs                    (134)       (228)
Controllable profit                               466         472
Less apportionment of head office costs          (155)       (180)
Net profit                                        311         292
Divisional net assets                          £23.2m      £22.6m

Required:
(a) Calculate the expected annualised Return on Investment (ROI) using the new method as preferred by senior management, based on the above budgeted operating statements, for each of the divisions. (4 marks)
(b) The divisional managing directors are unhappy about the results produced by your calculations in (a) and have heard that a performance measure called 'residual income' may provide more information. Calculate the annualised residual income (RI) for each of the divisions, based on the net profit figures for the month of July. (6 marks)
(c) Discuss the expected performance of each of the two divisions, using both ROI and RI, making any additional calculations deemed necessary. Conclude as to whether, in your opinion, the two divisions have performed well. (12 marks)
(d) Division B has now been offered an immediate opportunity to invest in new machinery at a cost of £2.12 million. The machinery is expected to have a useful economic life of four years, after which it could be sold for £200,000. Division B's policy is to depreciate all of its machinery on a straight-line basis over the life of the asset. The machinery would be expected to expand Division B's production capacity, resulting in an 8.5% increase in contribution per month. Recalculate Division B's expected annualised ROI and annualised RI (using closing net assets value), based on July's budgeted operating statement after adjusting for the investment. State whether the director will be making a decision that is in the best interests of the company as a whole if ROI is used as the basis of the decision.
(10 marks) (e) Explain any behavioural problems that will result if the company’s senior management insist on using solely ROI, based on net profit rather than controllable profit, to assess divisional performance and reward staff. (8 marks) (Total 40 marks)
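The ROI and RI arithmetic needed in parts (a) and (b) can be sanity-checked in a few lines. The sketch below is an illustration using only the figures quoted in the budgeted operating statement (monthly net profit, divisional net assets, and the 10% cost of capital); it is not the official marking scheme, and the function names are my own.

```python
# Sketch: annualised ROI and RI from July's budgeted figures (all in £'000).
# Assumes ROI = annualised net profit / divisional net assets, and
# RI = annualised net profit - cost of capital * net assets, as per the question.

COST_OF_CAPITAL = 0.10

def annualised_roi(monthly_net_profit, net_assets):
    return 12 * monthly_net_profit / net_assets

def annualised_ri(monthly_net_profit, net_assets):
    return 12 * monthly_net_profit - COST_OF_CAPITAL * net_assets

# Division B: net profit £311k, net assets £23.2m (= £23,200k)
roi_b = annualised_roi(311, 23_200)   # ~16.1%, below the 20% target
ri_b = annualised_ri(311, 23_200)     # £1,412k of positive residual income

# Division C: net profit £292k, net assets £22.6m (= £22,600k)
roi_c = annualised_roi(292, 22_600)   # ~15.5%
ri_c = annualised_ri(292, 22_600)     # £1,244k
```

Note that both divisions fall short of the 20% ROI target under the new net-profit basis, yet both show positive RI at the 10% cost of capital, which is the tension part (c) asks you to discuss.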
Graduate Entrance Exam, VERSION A (Revised Fall 2023)

Fundamentals (scales, intervals, chords, voice leading, rhythm, etc.)

Voice leading. For questions 1-5, refer to the following example. Choose the appropriate chord (labeled A-K) to identify where the error occurs.
1. A problem with spacing between adjacent voices within a chord.
2. An instance of a doubled leading-tone.
3. An instance of parallel fifths. (List the two chords between which the problem occurs.) ___ and ___
4. An instance of parallel octaves. (List the two chords between which the problem occurs.) ___ and ___
5. The third of the chord is missing.

Intervals and Chords. For questions 6-10, circle the correct answer.
6. What note lies a diminished fifth above Bb? a. F b. E c. F d. F e. other
7. What note lies an augmented sixth below F#? a. G b. A c. A d. A e. other
8. Which of the following is a half-diminished seventh chord?
9. Which of the following is an augmented triad?
10. Which of the following is a major seventh (not major-minor seventh) chord?

Scales and collections. For questions 11-12, refer to the following example:
11. Which of these is a harmonic minor scale?
12. Which of these is a melodic minor (ascending) scale?
For questions 13-15, refer to the following example. Choose "G" if none of the above.
13. Which of these is an octatonic scale?
14. Which of these is a whole-tone scale?
15. Which of these is a Lydian scale?

Rhythm and meter. For the following questions, circle the correct answer.
16. The following measure is incomplete. Which rest best completes the measure?
17. The following measure is incomplete. Which rest best completes the measure?
18. The following measure is incomplete. Which rest / group of rests best completes the measure?
19. The following rhythms sound identical but are beamed differently. Circle the one that best represents the notated meter.
20. The following rhythms sound identical but are beamed differently. Circle the one that best represents the notated meter.
21. Identify the correct meter signature.
22. Identify the correct meter signature.
23. Identify the beat and meter type for the following example: a. simple duple b. simple triple c. compound duple d. compound triple
24. Identify the beat and meter type for the following example: a. simple duple b. simple triple c. compound duple d. compound triple

Non-chord tones and harmonic analysis. For questions 25-30, refer to the following example. Choose from the following list of non-chord tones:
a. passing tone b. neighbor tone c. appoggiatura d. neighbor group e. anticipation f. escape tone g. pedal point h. suspension (identify type: ___)
25. What type of non-chord tone occurs at #1?
26. What type of non-chord tone occurs at #2?
27. What type of non-chord tone occurs at #3?
28. What type of non-chord tone occurs at #4?
29. This excerpt is in the key of G major. Identify the chord at #5.
30. Identify the chord at #6.

ANALYSIS SECTION
1. For excerpts A and B below, identify the local tonic key in each excerpt. Identify the tonic note and mode, e.g. f# minor. (Hint: These excerpts are taken from the middle of a piece, so your answers may not match the key signatures.)
Excerpt A: ___ Excerpt B: ___
2. For excerpts C-E, identify the cadence marked with a bracket or box. Use the letters below:
a. Perfect Authentic b. Imperfect Authentic c. Half d. Phrygian Half e. Plagal f. Deceptive
Excerpt C: ___ Excerpt D: ___ Excerpt E: ___
3. Which term best describes the pitch collection used in Excerpt F (below)?
a. Whole-tone b. Chromatic c. Diatonic d. Octatonic e. Pentatonic
4. Provide a Roman numeral and inversion-symbol label for each of the boxed chords in Excerpt G below.
a. ___ b. ___ c. ___
5. Answer the following questions about Excerpt H below.
a. Provide the numerical set-class label, in prime form, for the set labeled "a" in the score.
b. Provide the numerical set-class label, in prime form, for the set labeled "b" in the score.
c. The interval-class vector for set "a" is: (circle one) a. 004002 b. 111111 c. 102111 d. 100011
d. The interval-class vector for set "b" is: (circle one) a. 004002 b. 111111 c. 102111 d. 100011
e. Set "b" is related to set "c". What is the relationship? (circle one) a. Transposition b. Inversion c. Transposition and inversion d. Z-relation
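For interval questions such as 6 and 7, the underlying arithmetic can be sketched with pitch classes. This is my own illustration, not part of the exam: it works in semitones modulo 12, so it cannot choose between enharmonic spellings (e.g. Fb versus E), which the answer choices distinguish; the interval table and function names are assumptions for the sketch.

```python
# Sketch: interval questions via pitch-class arithmetic (mod 12).
# The semitone sizes below are standard; enharmonic spelling must still be
# resolved by counting letter names, which this sketch deliberately omits.

PITCH_CLASS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
SEMITONES = {"d5": 6, "P5": 7, "M6": 9, "A6": 10}  # a few interval sizes

def pc(note):
    """Pitch class of a note name like 'Bb' or 'F#'."""
    base = PITCH_CLASS[note[0]]
    base += note.count("#") - note.count("b")
    return base % 12

def above(note, interval):
    return (pc(note) + SEMITONES[interval]) % 12

def below(note, interval):
    return (pc(note) - SEMITONES[interval]) % 12

# Q6: a diminished fifth above Bb is pitch class 4, spelled Fb (enharmonically E).
# Q7: an augmented sixth below F# is pitch class 8, spelled Ab (enharmonically G#).
```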
COMP1036 Coursework Part II (25 marks)

Tasks
Write a program in Hack Assembly Language that sorts an array of integers in ascending or descending order. The unsorted array contains 5 or more elements, located at a range of memory locations starting from RAM[50]. The integers in the array can be positive, negative, or zero. The program should allow you to sort either the entire array or a portion of it; the number of elements to be sorted is determined by the integers stored in RAM[0] and RAM[1], as described below.

Read two input values, X and Y, from RAM[0] and RAM[1], respectively, and output the computed result, Z, to RAM[2]. X and Y can be positive or negative. The program should function correctly regardless of whether X < Y, X > Y, or X = Y. Different rules apply based on whether X and Y are even or odd integers, as follows:
(1) IF both X and Y are even integers, THEN Z is the sum of all even integers between X and Y (inclusive).
(2) IF both X and Y are odd integers, THEN Z is the sum of all odd integers between X and Y (inclusive).
(3) IF one of X or Y is odd and the other is even, THEN Z is the sum of all integers between X and Y (inclusive).
(4) IF X = Y, THEN Z = X or Z = Y.
(5) IF Z is positive, THEN sort the array in ascending order.
(6) IF Z is negative, THEN sort the array in descending order.
(7) IF Z is zero, THEN no sorting should be done.
As the examples below show, the number of elements sorted is the absolute value of Z.

Example 1: Given RAM[0] = X = -4; RAM[1] = Y = 2.
The range of integers between X and Y (inclusive) is [-4, -3, -2, -1, 0, 1, 2].
Applying Rule (1): RAM[2] = Z = (-4) + (-2) + 0 + 2 = -4.
Applying Rule (6) for Z = -4: sort the first 4 elements of the array in descending order.

Example 2: Given RAM[0] = X = -5; RAM[1] = Y = 5.
The range of integers between X and Y (inclusive) is [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5].
Applying Rule (2): RAM[2] = Z = (-5) + (-3) + (-1) + 1 + 3 + 5 = 0.
Applying Rule (7) for Z = 0: no sorting should be done.
Example 3: Given RAM[0] = X = 2; RAM[1] = Y = 3.
The range of integers between X and Y (inclusive) is [2, 3].
Applying Rule (3): RAM[2] = Z = 2 + 3 = 5.
Applying Rule (5) for Z = 5: sort the first 5 elements of the array in ascending order.

Example 4: Given RAM[0] = X = 3; RAM[1] = Y = 3.
The range of integers between X and Y (inclusive) is [3].
Applying Rule (4): RAM[2] = Z = 3.
Applying Rule (5) for Z = 3: sort the first 3 elements of the array in ascending order.

Some Input and Output Examples:
[Table of sample array contents in RAM[50] onwards, before and after sorting, for several values of n; the tabular layout was not recoverable from the source.]
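Before writing the Hack Assembly version, it can help to prototype rules (1)-(7) in a higher-level language. The sketch below is a reference model in Python (my own illustration, not part of the required submission); it follows the examples' convention that |Z| gives the number of elements to sort.

```python
# Reference model for the coursework rules (not Hack Assembly).
# Rules (1)-(4) compute Z from X and Y; rules (5)-(7) sort |Z| elements.

def compute_z(x, y):
    lo, hi = min(x, y), max(x, y)
    if x % 2 == 0 and y % 2 == 0:            # rule (1): both even
        return sum(v for v in range(lo, hi + 1) if v % 2 == 0)
    if x % 2 != 0 and y % 2 != 0:            # rule (2): both odd
        return sum(v for v in range(lo, hi + 1) if v % 2 != 0)
    return sum(range(lo, hi + 1))            # rule (3): mixed parity
    # Rule (4) (X = Y) falls out of rules (1)/(2): the single-element
    # range sums to X itself.

def apply_sort(array, z):
    n = abs(z)                               # examples imply n = |Z|
    if z > 0:                                # rule (5): ascending
        array[:n] = sorted(array[:n])
    elif z < 0:                              # rule (6): descending
        array[:n] = sorted(array[:n], reverse=True)
    # rule (7): z == 0 leaves the array unchanged
    return array

# Example 1: X = -4, Y = 2  ->  Z = -4, sort the first 4 elements descending.
# Example 2: X = -5, Y = 5  ->  Z = 0, no sorting.
```

Once the model matches all four worked examples, translating the loops and comparisons into Hack Assembly (jumps, A/D registers, RAM addressing) is a mechanical second step.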
Econ 1150 Mini-Exam 4

1. You have estimated the following regression using data on 46,670 top executives in US corporations in 1990, where Earnings is their annual income, Female is a binary variable equal to 1 if the individual is female and 0 otherwise, and MarketValue and Return are the market value and stock market return of their corporation, respectively.
(a) Test the null hypothesis that the coefficient on Return is equal to zero at the 5% significance level.
So, we reject the null hypothesis at the 5% level of significance.
(b) Test the null hypothesis that the coefficients on all the explanatory variables are equal to zero at the 5% significance level. (Note: The 5% critical value of the F(3, ∞) distribution is 2.60.)
We test the null hypothesis H0: β_Female = 0, β_log(MarketValue) = 0, β_Return = 0. There are 3 regressors in the unrestricted regression (k = 3) and 3 restrictions under H0 (q = 3). Note that the R² of the restricted regression, R²_R, is equal to zero, since there are no regressors in the restricted regression. The F-statistic to test H0 is distributed according to the F(3, ∞) distribution. Since F = 8193 > 2.60, the 5% critical value of the F(3, ∞) distribution, we reject the null at the 5% level of significance.

2. We have the population regression: Yi = β0 + β1·X1i + β2·X2i + β3·X3i + ui
(a) Transform the regression in order to use a hypothesis test on a single coefficient to test H0: β1 = β2 + β3.
H0: β1 = β2 + β3 ⟺ β1 − β2 − β3 = 0. So, we transform the regression as follows:
Y = β0 + β1·X1 + β2·X2 + β3·X3 + u
  = β0 + β1·X1 + β2·X2 + β3·X3 + u − β2·X1 + β2·X1 − β3·X1 + β3·X1
  = β0 + (β1 − β2 − β3)·X1 + β2·(X2 + X1) + β3·(X3 + X1) + u
  = β0 + γ·X1 + β2·V1 + β3·V2 + u,
where γ = β1 − β2 − β3, V1 = X2 + X1 and V2 = X3 + X1.
(b) Describe how you would estimate this regression, construct the appropriate t-statistic, and test H0 at the 5% level of significance.
We estimate the transformed regression and obtain the coefficient estimate γ̂. Then we construct the appropriate t-statistic to test H0: β1 − β2 − β3 = 0 ⟺ γ = 0. We reject H0 at the 5% level of significance if |t| > 1.96.

3. Using American Community Survey data, we obtain the regression results in Parts (a) and (b). The variables are defined as follows:
· incwage: income from wages
· log(incwage): the natural logarithm of incwage
· female: binary variable that equals 1 if the individual is female and 0 otherwise
· col: binary variable that equals 1 if the individual is a college graduate and 0 otherwise
· col × female: interaction term between col and female
(a) What is the marginal effect on income associated with going to college for a woman? What is the marginal effect on income associated with going to college for a man?
The marginal effect on wages associated with attending college for women is: 53940.86 − 24024.45 = 29916.41
The marginal effect on wages associated with attending college for men is: 53940.86
(b) What type of logarithmic regression is this (log-linear, linear-log, or log-log)? Interpret the coefficient on col. (Note: You don't have to algebraically derive this interpretation, but you can check it if you want to.)
This is a log-linear regression. Going to college is associated with a 69.9% increase in income from wages.
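The two calculations above can be reproduced mechanically. This sketch is an illustration, not part of the exam: the coefficient values (53940.86 and −24024.45) are those quoted in the answer to question 3(a), and the F-statistic helper implements the standard homoskedasticity-only formula F = ((R²_u − R²_r)/q) / ((1 − R²_u)/(n − k − 1)) used in question 1(b); the function names are my own.

```python
# Sketch: marginal effects with an interaction term, and the restricted-vs-
# unrestricted F-statistic formula. Coefficients are the ones quoted above.

def marginal_effect_college(beta_col, beta_col_x_female, female):
    """Marginal effect of college on wages, allowing the interaction term."""
    return beta_col + beta_col_x_female * female

def f_statistic(r2_u, r2_r, q, n, k):
    """Homoskedasticity-only F-statistic for q restrictions."""
    return ((r2_u - r2_r) / q) / ((1 - r2_u) / (n - k - 1))

effect_women = marginal_effect_college(53940.86, -24024.45, female=1)  # 29916.41
effect_men = marginal_effect_college(53940.86, -24024.45, female=0)    # 53940.86
```

Note that reproducing F = 8193 exactly would require the unrestricted R², which the excerpt does not quote; the helper is there to show the formula's structure.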
Language Features
The user manual of this language is listed below.

Keywords:
· "let": initiates a variable declaration.
· "be": corresponds to the assignment operator in other programming languages.
· "int": specifies that the variable is of integer type during its declaration.
· "set": specifies that the variable is of set type during its declaration.
· "show": corresponds to the main function in other programming languages; it initiates a calculation.

Data types:
· Integer:
  o Basic data type.
  o A single-digit integer is an arbitrary decimal digit.
  o A multi-digit integer starts with a non-zero decimal digit, followed by any sequence of arbitrary decimal digits.
  o Users can only declare non-negative integers. Negative integers are constructed through subtraction operations.
· Arithmetic expression:
  o Constructed data type.
  o Users are not allowed to declare an arithmetic expression; there is no data type keyword for arithmetic expressions in the language (you can check the keyword list above). This data type only exists in the compiler.
  o An atomic arithmetic expression is either an integer constant or an integer variable.
  o A compound arithmetic expression consists of two arithmetic expressions connected by an arithmetic operator (addition "+", subtraction "-", multiplication "*"). Parentheses are used to define substructures within an expression. For example, 1 + 2 - 3 * 4 is parsed as: ( 1 + 2 ) - ( 3 * 4 )
· Predicates:
  o Constructed data type.
  o As with arithmetic expressions, predicates are not directly declared by users but are instead managed within the compiler.
  o An atomic predicate is a relational comparison, which can be of two types:
    - Integer value comparison: comparing two integers using a relational operator: less than ("<"), greater than (">"), or equality ("="); or
    - Membership testing: determining whether an element is a member of a set, using the membership operator "@".
  o A compound predicate is formed by combining "smaller" predicates (either atomic or compound) using logical operators:
    - Binary logical operators: two (atomic or compound) predicates connected by a conjunction ("&") or disjunction ("|").
    - Unary logical operator: negation ("!"), which precedes another predicate. For example, P & Q and ! R, where P, Q, and R are predicates.
  o Parentheses are also used to define substructures in predicates. Parentheses are essential for defining the precedence and grouping of operations within predicates, ensuring the correct evaluation of complex expressions.
· Bool:
  o Basic data type.
  o Cannot be declared by users.
  o Has only two constants: "true" and "false".
  o A Boolean is produced by evaluating a predicate that contains no uninitialized variables. For example, x > 5 is a predicate if variable x has not been initialized. But if x has previously been initialized as 3, then the predicate becomes 3 > 5 and can be evaluated to the Boolean "false". The behaviour of ">" is explained later in this document.
· Set:
  o Constructed data type.
  o Can be declared by users.
  o A set is defined using the syntax { x : P(x) }, where
    - a set definition is enclosed within curly braces "{ }";
    - "x" is a variable name called the representative, whose scope is limited to this set definition;
    - ":" is a punctuation mark separating the representative x from the rest of the definition;
    - "P(x)" is a predicate that applies to the variable x, serving as the characteristic function of the set; it performs a logical test on x. If P(x) evaluates to true, then x is an element of the set; otherwise, x is not in the set.
  o This project focuses solely on sets of integers. Other types of sets, such as sets of strings, pairs, or sets of sets, are not included. This limitation is intentional, aiming to simplify the implementation process. Goliath is trying to make your life easy!
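To make the grouping of compound predicates concrete, here is a minimal recursive-descent evaluator over Boolean constants, written in Python purely as an illustration (the language being specified is not Python, and the single-character tokens t/f for true/false are my own shorthand). It encodes the precedence this manual describes: negation binds tightest, then conjunction, then disjunction, with parentheses overriding.

```python
# Minimal evaluator for compound predicates over Boolean constants,
# illustrating precedence: '!' > '&' > '|'; parentheses group subexpressions.
# Tokens: 't' (true), 'f' (false), '&', '|', '!', '(', ')'. Illustrative only.

def eval_predicate(text):
    tokens = [c for c in text if not c.isspace()]
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def disjunction():                 # lowest precedence
        value = conjunction()
        while peek() == "|":
            eat()
            rhs = conjunction()
            value = value or rhs
        return value

    def conjunction():
        value = negation()
        while peek() == "&":
            eat()
            rhs = negation()
            value = value and rhs
        return value

    def negation():                    # highest precedence
        if peek() == "!":
            eat()
            return not negation()
        return atom()

    def atom():
        tok = eat()
        if tok == "(":
            value = disjunction()
            eat()                      # consume the closing ')'
            return value
        return tok == "t"

    return disjunction()
```

With this grammar, t | f & f evaluates the conjunction first (f & f = false, so the whole expression is true), while ! t & f negates only t, matching the stated precedence.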
· Void:
  o Basic data type.
  o Cannot be declared by users.
  o For subexpressions without any type.

Identifiers: arbitrary strings of lower-case English letters that are not reserved as keywords.

Operators:
· Arithmetic operators:
  o "+": integer addition, calculates the sum of two integers.
  o "-": integer subtraction, calculates the difference of two integers.
  o "*": integer multiplication, calculates the product of two integers.
  o Multiplication has the highest precedence. Addition and subtraction have equal precedence, which is lower than multiplication.
· Relational operators (for integers):
  o "<" (Less Than): returns "true" if the integer on the left-hand side is less than the integer on the right-hand side.
  o ">" (Greater Than): returns "true" if the integer on the left-hand side is greater than the integer on the right-hand side.
  o "=" (Equal): returns "true" if the integer on the left-hand side is equal to the integer on the right-hand side.
· Relational operator (membership):
  o "@": checks whether the element on the left-hand side is a member of the set on the right-hand side. It returns "true" if the element is in the set; otherwise it returns "false".
· Logical operators:
  o Conjunction "&", disjunction "|", and negation "!" behave according to the following truth tables:
    Conjunction: true & true = true; true & false = false; false & true = false; false & false = false.
    Disjunction: true | true = true; true | false = true; false | true = true; false | false = false.
    Negation: !true = false; !false = true.
  o Negation has the highest precedence, then conjunction, and then disjunction.
· Set operators:
  o "I": set intersection, calculates the intersection of two sets.
  o "U": set union, calculates the union of two sets.
  o Intersection has a higher precedence than union.

Sentences:
· Each source code contains zero, one, or multiple variable declaration(s), and exactly one calculation expression.
· Each variable declaration has the following syntax:
  let T id be E .
where
  o T is a type name (either int or set),
  o id is a variable identifier,
  o E is an expression that assigns a value of the specified type to the variable (a set definition is also a "value", even if it is not a number),
  o the period "." marks the end of the declaration.
For example, let int a be 5 . defines an integer a whose value is 5. And let set s be { x : x = 5 } . defines a set s which contains only the integer 5.
· A calculation expression starts with the keyword "show", followed by an algebraic expression, which can be an arithmetic expression, a Boolean expression (a predicate with all variables initialized), or a set algebra expression. A calculation expression is also ended by a full stop ".". For example:
  o To calculate the union of sets S1 and S2: show S1 U S2 .
  o To calculate the sum of integers 1 and 2: show 1 + 2 .
  o To test whether integer 3 is in S1 or not: show 3 @ S1 .

Output:
After the calculation, the program prints the outcome on the screen (type and value); see the examples below. For set operators, the "show" statement does not simplify the characteristic function. For example, show { x : x > 3 } U { x : x > 5 } . will output { x : ( x > 3 ) | ( x > 5 ) } rather than { x : x > 3 }, even though the two sets are equivalent. This requirement keeps the project easy: the simplification of predicates is an advanced feature, treated as a bonus and explained later.
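The "set as characteristic predicate" semantics, including the unsimplified union output, can be modelled compactly. This Python sketch is my own illustration of the idea, not the project's compiler: each set keeps both a callable predicate and a source string, so that a show-style printout can reproduce the unsimplified characteristic function.

```python
# Sketch: sets represented by their characteristic predicates.
# Union ('U') becomes disjunction and intersection ('I') becomes conjunction
# of the characteristic functions; no simplification is performed, matching
# the required output of  show { x : x > 3 } U { x : x > 5 } .

class IntSet:
    def __init__(self, pred, text):
        self.pred = pred          # characteristic function P(x)
        self.text = text          # printable, unsimplified form of P(x)

    def union(self, other):
        return IntSet(lambda x: self.pred(x) or other.pred(x),
                      f"{{ x : ( {self.text} ) | ( {other.text} ) }}")

    def intersection(self, other):
        return IntSet(lambda x: self.pred(x) and other.pred(x),
                      f"{{ x : ( {self.text} ) & ( {other.text} ) }}")

    def contains(self, element):  # models the '@' membership operator
        return self.pred(element)

s1 = IntSet(lambda x: x > 3, "x > 3")
s2 = IntSet(lambda x: x > 5, "x > 5")
u = s1.union(s2)
# u.text is "{ x : ( x > 3 ) | ( x > 5 ) }", and 3 @ S1 evaluates to false.
```

Keeping the predicate and its text separate is what lets membership testing work lazily while show still prints the unsimplified definition; predicate simplification would operate on the text (or an AST), which is exactly why it is the bonus feature.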
Start date: 12/7/2024. Due date: 12/21/2024.
MATH GR5280, Capital Markets & Investments. Final Project.

Note: All files and information related to the final project are uploaded into the Modules starting with the "Final Project" prefix on CourseWorks.

The aim of this Final Project is to practically implement the ideas from the course, specifically from Chapters 7 and 8 of [BKM13]. Using Bloomberg, you will be given 20 years of recent historical daily total return data for ten stocks, which belong in groups to three to four different sectors (according to Yahoo!finance), one equity index (S&P 500), and a proxy for the risk-free rate (1-month Fed Funds rate). Additionally, you will be given contemporaneous ESG scores data [CFA3], also from Bloomberg, for all of your companies, with detailed explanations. In order to reduce non-Gaussian effects, you will need to aggregate the daily data to monthly observations, and based on those monthly observations you will need to calculate all proper optimization inputs for the full Markowitz Model ("MM") alongside the Index Model ("IM"). Using these optimization inputs for MM and IM, you will need to find the regions of permissible portfolios (efficient frontier, minimal risk portfolio, optimal portfolio, and minimal return portfolios frontier) for the following four cases of constraints:

1. No short positions. This optimization is designed to simulate the typical limitations existing in the U.S. mutual fund industry: a U.S. open-ended mutual fund is not allowed to have any short positions; for details see the Investment Company Act of 1940, Section 12(a)(3) (https://www.law.cornell.edu/uscode/text/15/80a-12):
   w_i ≥ 0 for all i.
2. ESG constraint added to problem 1. Having found the efficient risky portfolio {ŵ_i}, i = 1, ..., 10, solving problem 1 above, re-solve problem 1 with the following additional ESG constraint:
   Σ_{i=1..10} (E_i + S_i + G_i) w_i ≤ 0.9 × Σ_{i=1..10} (E_i + S_i + G_i) ŵ_i.
3. Regulation T leverage limit. This constraint is designed to simulate Regulation T by FINRA (https://www.finra.org/rules-guidance/key-topics/margin-accounts), which allows broker-dealers to let their customers hold positions 50% or more of which are funded by the customer's account equity:
   Σ_{i=1..10} |w_i| ≤ 2.
4. ESG constraint added to problem 3. Having found the efficient risky portfolio {ŵ_i}, i = 1, ..., 10, solving problem 3 above, re-solve problem 3 with the following additional ESG constraint:
   Σ_{i=1..10} (E_i + S_i + G_i) w_i ≤ 0.9 × Σ_{i=1..10} (E_i + S_i + G_i) ŵ_i.

You will need to numerically solve the above problems using the template "FinalProject AlexeiChekhlov Group0.xlsx" and submit your numerical solutions as such a file, with the filename adjusted to "FinalProject FirstnameLastname Group(your group#).xlsx". Please do not insert or delete any cells; keep the existing format. It is very nicely done, and the graphs will allow you to "see" your solutions. The areas of cells that you will need to fill in with your numerical solutions are as follows. The points for MM: P2:AC3, P5:AC6, P8:AC9, P11:AC12. The curves (frontiers) for MM: C33:F113, I33:L113, O33:R153. The points for IM: AI2:AV3, AI5:AV6, AI8:AV9, AI11:AV12. The curves (frontiers) for IM: AM33:AP113, AS33:AV113, AY33:BB153. The grading will be done by comparing your tabulated results to exact solutions. The calculations should be done on a Windows computer with licensed Microsoft Office installed.

Again, you will be given 20 years of daily total return data for the S&P 500 index (ticker symbol "SPX") and for ten stocks (ticker symbols in the table below), such that there are three to four sectors of stocks, with the stocks in each group belonging to one (Yahoo!finance) sector, and an instrument representing the risk-free rate, the 1-month annual Fed Funds rate (ticker symbol "FEDL01").
Note that the stocks in each group are completely different; therefore, each group will have its own results and conclusions. Below, please find the table of stock ticker symbols (tickers) for each group to work with:

Stock # | Group #1 | Group #2 | Group #3 | Group #4
1       | ADBE     | AMZN     | NVDA     | QCOM
2       | IBM      | AAPL     | CSCO     | AKAM
3       | SAP      | CTXS     | INTC     | ORCL
4       | BAC      | JPM      | GS       | MSFT
5       | C        | BRK/A    | USB      | CVX
6       | WFC      | PGR      | TD CN    | XOM
7       | TRV      | UPS      | ALL      | IMO
8       | LUV      | FDX      | PG       | KO
9       | ALK      | JBHT     | JNJ      | PEP
10      | HA       | LSTR     | CL       | MCD

Below, please find the table which shows the details for each of the stocks and which stocks belong to the same sector in each group.

Group #1
#  | Ticker | Full Name                                   | Sector (Yahoo!finance)
1  | ADBE   | Adobe Inc.                                  | Technology
2  | IBM    | International Business Machines Corporation | Technology
3  | SAP    | SAP SE                                      | Technology
4  | BAC    | Bank of America Corporation                 | Financial Services
5  | C      | Citigroup Inc.                              | Financial Services
6  | WFC    | Wells Fargo & Company                       | Financial Services
7  | TRV    | The Travelers Companies, Inc.               | Financial Services
8  | LUV    | Southwest Airlines Co.                      | Industrials
9  | ALK    | Alaska Air Group, Inc.                      | Industrials
10 | HA     | Hawaiian Holdings, Inc.                     | Industrials

Group #2
1  | AMZN   | Amazon.com, Inc.                            | Consumer Cyclical
2  | AAPL   | Apple Inc.                                  | Technology
3  | FFIV   | F5 Networks, Inc.                           | Technology
4  | JPM    | JPMorgan Chase & Co.                        | Financial Services
5  | BRK/A  | Berkshire Hathaway Inc.                     | Financial Services
6  | PGR    | The Progressive Corporation                 | Financial Services
7  | UPS    | United Parcel Service, Inc.                 | Industrials
8  | FDX    | FedEx Corporation                           | Industrials
9  | JBHT   | J.B. Hunt Transport Services, Inc.          | Industrials
10 | LSTR   | Landstar System, Inc.                       | Industrials

Group #3
1  | NVDA   | NVIDIA Corporation                          | Technology
2  | CSCO   | Cisco Systems, Inc.                         | Technology
3  | INTC   | Intel Corporation                           | Technology
4  | GS     | The Goldman Sachs Group, Inc.               | Financial Services
5  | USB    | U.S. Bancorp                                | Financial Services
6  | TD CN  | The Toronto-Dominion Bank                   | Financial Services
7  | ALL    | The Allstate Corporation                    | Financial Services
8  | PG     | The Procter & Gamble Company                | Consumer Defensive
9  | JNJ    | Johnson & Johnson                           | Healthcare
10 | CL     | Colgate-Palmolive Company                   | Consumer Defensive

Group #4
1  | QCOM   | QUALCOMM Incorporated                       | Technology
2  | AKAM   | Akamai Technologies, Inc.                   | Technology
3  | ORCL   | Oracle Corporation                          | Technology
4  | MSFT   | Microsoft Corporation                       | Technology
5  | CVX    | Chevron Corporation                         | Energy
6  | XOM    | Exxon Mobil Corporation                     | Energy
7  | IMO    | Imperial Oil Limited                        | Energy
8  | KO     | The Coca-Cola Company                       | Consumer Defensive
9  | PEP    | PepsiCo, Inc.                               | Consumer Defensive
10 | MCD    | McDonald's Corporation                      | Consumer Cyclical

Using this data and the template Excel spreadsheet, you will need to make all the necessary calculations to produce the Permissible Portfolios Region, which combines the Efficient Frontier, the Minimal Risk or Variance Frontier, and the Minimal Return Frontier for a given set of constraints (1-4 above).
The Minimal Return Frontier and the Efficient Frontier together form the Minimal Risk or Variance Frontier; it is just a matter of reformulating the optimization problem, as follows:

Minimal Risk or Variance Frontier: σ(w) → min over w, subject to r(w) = const;
Minimal Return Frontier: r(w) → min over w, subject to σ(w) = const;
Efficient Frontier: r(w) → max over w, subject to σ(w) = const.

Two unique points on the Efficient Frontier are of special interest:

Minimal Risk Portfolio: σ(w) → min over w;
Efficient Risky Portfolio (maximal Sharpe ratio): (r(w) - r_f) / σ(w) → max over w.

This Final Project is open-book, which means that you can and should use the Instructor's handouts and the corresponding chapter reading material provided by the Instructor, as well as any additional materials provided to you. The Instructor and TAs have performed all these calculations for each of the groups' portfolios and will be able to compare your numbers and specific points to theirs. If your spreadsheet calculations are done correctly, you and we should be able to match the results with sufficient accuracy.

The main tool we would like you to use to solve the optimization problems for each point on the Minimal Risk or Variance Frontier is the Excel Solver. Please try to learn how to use it on your own, if you have not done so already. The TAs will help you address any issues related to Solver during the TA sessions. To calculate the large number of points on the required frontiers, you will need to use the Excel Solver Table, which the TAs will teach you how to install and use.
Both Excel Solver and the Excel Solver Table will also be covered in lectures, with illustrations very similar to your Final Project.

For your calculations, you need to use the full available historical data range:
• start date 2/28/2003;
• end date 3/6/2023.

As mentioned above, you will need to calculate the solutions to the two optimizations covered in lectures:
• the full Markowitz Model (MM);
• the Index Model (IM).

As described in detail above, each of the optimization problems MM and IM must be implemented and solved with the following four additional optimization constraints:
1. w_i ≥ 0 for all i;
2. w_i ≥ 0 for all i, and Σ_{i=1..10} (E_i + S_i + G_i) w_i ≤ 0.9 × Σ_{i=1..10} (E_i + S_i + G_i) ŵ_i;
3. Σ_{i=1..10} |w_i| ≤ 2;
4. Σ_{i=1..10} |w_i| ≤ 2, and Σ_{i=1..10} (E_i + S_i + G_i) w_i ≤ 0.9 × Σ_{i=1..10} (E_i + S_i + G_i) ŵ_i;
where {ŵ_i}, i = 1, ..., 10, in each case corresponds to the efficient risky portfolio solution of the corresponding non-ESG-constrained problem.

As we have already mentioned, your task is to produce the following objects on the Permissible Portfolios Region in numerical form (the template spreadsheet produces the graphical form for you):
• Minimal Risk or Variance Frontier (a curve); range for portfolio returns: from -10% to 50% in steps of 0.5%;
• Global Minimal Risk or Variance Portfolio (a point);
• Maximal Sharpe Ratio or Efficient Risky Portfolio (a point);
• Maximal Return or Efficient Frontier (a curve); range for portfolio standard deviation: from 10% to 50% in steps of 0.5%;
• Capital Allocation Line or CAL (a straight line);
• Minimal Return or Inefficient Frontier (a curve); range for portfolio standard deviation: from 10% to 50% in steps of 0.5%.

The curves above must be produced in tabular form (Excel), using the template provided and preserving the formats in the template; grading will compare your tables to the exact solutions over specifically the above ranges.
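Outside Excel, the frontier objects can be approximated quickly to sanity-check Solver output. The sketch below is my own illustration of constraint case 1 (long-only, w_i ≥ 0) with made-up three-asset inputs rather than your group's Bloomberg data: it samples random long-only portfolios and records the minimum-variance and maximum-Sharpe candidates.

```python
import random

# Monte-Carlo sanity check for constraint case 1 (w_i >= 0): sample random
# long-only portfolios and track the minimum-variance and maximum-Sharpe
# candidates. All inputs below are illustrative placeholders.

MU = [0.08, 0.12, 0.05]                 # annual mean returns
COV = [[0.04, 0.01, 0.00],              # annual covariance matrix
       [0.01, 0.09, 0.02],
       [0.00, 0.02, 0.02]]
RF = 0.02                               # risk-free rate

def portfolio_stats(w):
    n = len(w)
    ret = sum(w[i] * MU[i] for i in range(n))
    var = sum(w[i] * COV[i][j] * w[j] for i in range(n) for j in range(n))
    return ret, var

def random_long_only(rng, n=3):
    raw = [rng.random() for _ in range(n)]
    total = sum(raw)
    return [x / total for x in raw]     # w_i >= 0 and sum(w) == 1

def approximate_frontier_points(trials=20000, seed=0):
    rng = random.Random(seed)
    best_var, best_sharpe = None, None
    for _ in range(trials):
        w = random_long_only(rng)
        ret, var = portfolio_stats(w)
        sharpe = (ret - RF) / var ** 0.5
        if best_var is None or var < best_var[1]:
            best_var = (w, var)         # global minimum-variance candidate
        if best_sharpe is None or sharpe > best_sharpe[1]:
            best_sharpe = (w, sharpe)   # efficient risky portfolio candidate
    return best_var, best_sharpe
```

Random sampling only brackets the true Solver optimum, but if Solver reports a minimum variance or maximum Sharpe ratio noticeably worse than such a crude search, the spreadsheet setup deserves a second look. The Reg T and ESG cases (constraints 2-4) would simply change the sampling or add feasibility filters on the weights.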
If a numerical solution cannot be found, just leave the corresponding cell empty. The points above should also be tabulated. All the tabulation should be done similarly to the example provided by the Instructor (see the file "Final Project Group0.xlsx" provided).

Do not hesitate to ask the TAs or the Lecturer any questions. You are given two weeks to complete the Final Project and to submit the work in the form of an .xlsx file through the portal. We encourage you not to delay starting the work, as the workload is meant for several days of work, not a one-night effort.

The Final Project is due on December 21st, 2024 at 7:00 PM EST.

Your spreadsheet should be named using the following convention, which is similar to the homework naming convention: "FinalProject AlexeiChekhlov Group0.xlsx". Here "0" is the number of your Final Project Group, and instead of "Alexei Chekhlov" should be your name in the format "FirstNameLastName".

Good luck!

References:
[BKM13] Z. Bodie, A. Kane, A. J. Marcus, "Investments", Thirteenth Edition, McGraw Hill, 2024.
[CFA3] "Certificate in ESG Investing Curriculum", CFA Institute, Edition 3, CFA Society of the UK, 2021.
Geotechnical Engineering 3 – Computational Modelling Assignment. Coursework brief for Computational Modelling, 2024-2025 Academic Year. Please read the following instructions carefully.

Instructions for submission: Please submit the following required files and documents; the deadline for submission is 23:59 Wednesday 22nd December 2024 via Canvas. You will be penalised if you submit late (5% per working day). Each group needs to submit three files. You are advised to attempt the exercise as soon as possible and prepare your report well before the deadline. Peer assessment is available (see below) and will be used to adjust final marks by no more than plus/minus 10%, to reflect an assessment of contributions across the group. All peer assessments are subject to final moderation by the module delivery team. If you do not submit a peer assessment, every team member will be given the same mark. The first file is a professional report of maximum 10 pages, excluding title page, contents, list of figures, list of tables, nomenclature, and references/bibliography – NO appendices are allowed (you will lose 5% per page over the limit). Submit the report file in PDF format. It is recommended to type the report using word-processing software, e.g., Microsoft Word or LaTeX. The report should be single-line spaced, font Arial, minimum font size 11, and minimum margins of 2 cm from top, bottom, left and right. On the title page of your report, you must clearly add the names of the group members, your group number, and the following phrase followed by signatures from all members: "We are aware of and understand the University's policy on plagiarism and we certify that this assignment is our own work, except where indicated by referencing, and that we have followed good academic practices". Submissions without the title page as set out will not be considered, and any subsequent delays will count towards late penalty marks. The second file of your submission is your ABAQUS files (both *.cae and *.inp).
Please note that in total there are two ABAQUS simulations (one for Part 1 and one for Part 2). Each file size cannot exceed 6 MB. Only one member of the group needs to submit the required files. Group numbers and members are allocated separately. The file names should only contain the group number, and the ABAQUS files should contain both the group number and the Part number separated by an underscore. For example, group '23' will submit five files: '23_P1.inp', '23_P1.cae', '23_P2.inp', '23_P2.cae' and '23.pdf'. Each group has a unique set of values for the parameters, so please make sure you choose the correct values; otherwise you will lose marks. If you have any questions, please contact Prof Asaad Faramarzi ([email protected]).

Allocation of marks:
• Quality of the report, including overall writing, English, presentation, referencing: 15%
• Results and discussion: 50%
• Numerical modelling: 35%

Question (Pile–Tunnel Interaction)

Introduction: Twenty-five years ago, the Mass Rapid Transit line called the North-East Line (NEL) was constructed in Singapore to improve public transportation connectivity and cater to the growing population in the north-eastern region of the country. During construction, the engineers faced a significant challenge, as the bored tunnelling activities had to be carried out unavoidably close to the foundation of an existing flyover bridge. At the time, the government expressed concern about the potential impact of the tunnelling on the bridge's foundation. As a result, numerous instruments were installed to monitor the foundation in order to observe any potential impacts. Strain gauges, settlement points, and inclinometers were installed on the piles to monitor the additional forces, settlement, and horizontal movement of the piles, respectively. During tunnelling, a volume loss of 1.38% was observed as the tunnel passed beneath the foundation. The pile head settlement was observed to be 2.43 mm after the tunnel passed through.
The water table remains constant throughout the project at 3 m below the ground surface, for both the existing and the future tunnel construction. Figure 1 illustrates the other observed impacts on the piles caused by the tunnelling activities. Recently, the Transportation Authority of Singapore has planned to construct a new metro line. This new metro line will be built close to the existing NEL. Unfortunately, due to limited construction space, the upcoming bored tunnelling will need to be carried out near the existing piled foundation of Pier No. 23 (see Plan View). This time, however, the government has strictly stated that the impact on the existing piles must be minimised. In response, engineers have proposed alternative construction approaches, including reducing the volume loss, adjusting the tunnel alignment vertically and horizontally, and reducing the tunnel diameter. As the engineer responsible for this project, you are required to conduct the following analysis and discuss the results. As a first step, it is important to validate the FE analysis. This is often done by simulating an existing scenario, which in this case can be the existing tunnel near Pier No. 20 and the data obtained from the instrumentation. As such, the following are expected to be completed:

Part 1) To validate your FE analysis, create a 2D (plane strain) model of the existing tunnel (note Figure 2 and Figure 3) and compare the results with the graphs given above in Figure 1. Discuss any assumptions you have made in your FE model, and discuss the results in the context of accuracy and reliability. The geometry of the problem, as well as the geological soil profiles and parameters, can be found in the figures and tables below. For further information, please see the Pang (2006) thesis document available on Canvas.

Part 2) Develop an FE model for the new tunnelling line (Figure 4) based on the parameters relevant to your group, and discuss the results in terms of accuracy and reliability.
Also, discuss the results in the context of the mitigation approach taken and the impacts it may have on the 'pile horizontal movement', 'induced axial force', and 'induced bending moment'.
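Before running the FE model, a quick hand check of the observed volume loss can help with the validation in Part 1. A common empirical approximation (not required by the brief) is the Gaussian greenfield settlement trough attributed to Peck; the tunnel depth, diameter, and trough-width factor K below are purely illustrative assumptions, not your group's parameters.

```python
import math

def greenfield_settlement(x, depth_z0, diam, vol_loss, K=0.5):
    """Gaussian (Peck-type) greenfield surface settlement at offset x [m]
    from the tunnel centreline. K and hence the trough width i = K * z0
    are empirical assumptions that vary with soil type."""
    i = K * depth_z0                            # trough width parameter [m]
    v_s = vol_loss * math.pi * diam ** 2 / 4    # settled volume per metre [m^2]
    s_max = v_s / (math.sqrt(2 * math.pi) * i)  # centreline settlement [m]
    return s_max * math.exp(-x ** 2 / (2 * i ** 2))

# Illustrative numbers only: 6 m diameter tunnel, axis 20 m deep,
# and the 1.38% volume loss observed during the NEL construction.
s_centre = greenfield_settlement(0.0, depth_z0=20.0, diam=6.0, vol_loss=0.0138)
```

Comparing a trough like this against the FE surface settlements is one way to discuss the accuracy and reliability of the Part 1 model.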
Econ 1150 Problem Set 3, October 2024

1. We have the population regression: wage_i = β0 + β1·education_i + β2·AFQT_i + u_i, where individual i has hourly wage wage_i, years of education education_i, and AFQT score AFQT_i. The Armed Forces Qualifying Test (AFQT) is a general aptitude test and is widely used as a measure of ability or fluid intelligence. You are trying to estimate the effect of education on hourly wage. However, suppose that in your sample regression you left out AFQT_i and estimated: Assume that E(u_i | education_i) = 0 and that β1 > 0, β2 > 0, and cov(education_i, AFQT_i) > 0.
(a) Using the Law of Iterated Expectations, show that cov(education_i, u_i) = 0.
(b) Using the covariance formula for your sample regression estimate β̂1, show that β̂1 converges to the sum of the true population parameter β1 and a bias term coming from leaving out AFQT_i.
(c) What is the direction of the bias? Will you overestimate or underestimate the relationship between wages and years of education?
(d) Explain the economic intuition behind why this bias would occur.

2. Using data from the 2001-2002 United States House of Representatives, you estimate the following regression: where aauw_i is a voting score that measures how supportive member of the House i is on women's issues (e.g., reproductive rights and gender-based violence), propgirls_i is the proportion of member i's children who are female, nchild_i is member i's total number of children, age_i is i's age, female_i ∈ {0, 1} is equal to 1 if i is female, and white_i ∈ {0, 1} is equal to 1 if i is white.
(a) If you are trying to estimate the effect of having daughters on members' voting behavior on women's issues, what is your outcome variable, which are your explanatory variables of interest, and which are your control variables?
(b) For a member with two children, what is the marginal effect of having a daughter?
(c) Interpret the constant term and the rest of the coefficients.
Are any of them not different from zero at the 5% significance level?
(d) If your variable of interest is propgirls_i, which particular control variable, if not included, will bias your estimate of your effect of interest the most? Why?
(e) How would you argue that these results identify the causal impact of having daughters on voting behavior on women's issues? What is a possible omitted variable?
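For reference, the probability limit that part 1(b) asks you to derive via the sample covariance formula is the standard omitted-variable-bias result; in the problem's notation:

```latex
\hat{\beta}_1 \;\xrightarrow{p}\; \beta_1 \;+\; \beta_2 \cdot
\frac{\operatorname{cov}(education_i,\, AFQT_i)}{\operatorname{var}(education_i)}
```

The sign of the second term, determined by the stated assumptions on β2 and the covariance, is what parts (c) and (d) ask you to reason about.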
Global Data Analytics 34:816:637:01 Spring 2024 Catalog Description: The theoretical and operational selection of policy models used to assess the impacts of regional, national, and global socio-economic and environmental policies. Course Overview: The course Global Data Analytics covers many key materials and methods needed by public informatics students. Given the target set of students for the program, the course highlights two key topics of interest internationally: (1) the measurement by industry of energy resource use as well as the production of solid waste, effluents, and air pollution and (2) changes in levels of international exchange, particularly of the trade of goods. This is done by familiarizing students with UN protocols for measuring gross domestic product (GDP) and, hence, national accounts, particularly supply and use tables for detailed sectors and commodities. Through a series of ten (10) assignments, students learn how to read, write, manipulate, and update arrays in the R language, relying on their knowledge of MS Excel in the process. They will learn how to estimate and use subnational accounts as well. They will perform economic impact analysis, summarize results properly, identify industries important to an economy, discover proximate causes of change in an economy, and estimate the relative importance of an industry in an economy to global supply chains. The policy relevance of all aspects of the course is discussed. Students present subject matter of interest to them that is connected to course material. Course Learning Objectives include: • Developing a critical understanding of a core interindustry and interregional modeling method in public data analytics. • Demonstrating an ability to interpret the modeling that they perform. • Understanding differences and similarities among key interindustry data sources as well as the analytical requirements and practical challenges encountered with them.
• Gaining facility in basic computational matrix methods using the R language, which is translatable to related languages like Matlab, Python, and Octave. Course Learning Achievements - Students will: Demonstrate their ability to appropriately select and operationalize computable models for use with global data sets by: 1. defining the system of equations; 2. retrieving, entering, and analyzing required data; 3. estimating key coefficients and parameters; 4. testing sensitivity of the results to model assumptions; and 5. interpreting their results. Assessment: Students complete ten laboratory exercises that demonstrate their understanding of the theoretical foundations and best practice applications of various simulation models. (They are graded on the best eight.) They also will have demonstrated that they can program in the R language and document what they have done. Effectively communicate the results of the models in written and oral formats. Assessment: Student skills and confidence with modeling techniques develop through the exercises. Oral and written communication is assessed from student-led discussions and presentations. Grading Strategy: • Ten (10) lab exercises 80% • Class presentation & participation 20% Presentation: Class participants will have 10-15 minutes to present the preliminary content of their paper in class using 5-10 slides. This aspect of the course will be graded upon the content of the presentation material, which will be submitted online prior to the class hour in which it is orally presented. Grading will be largely based upon the use of novel visuals and the participant’s ability to reply to questions. Ability to present that material itself will be a secondary consideration. If, instead, you prefer to produce an application rather than present, the application must be extremely well internally documented and be accompanied by a User’s Guide that is more than five (5) pages long. Required Textbooks Davies, TM. (2016) The Book of R. 
No Starch Press, San Francisco. Miller, RE and Blair, PD. (2022) Input-Output Analysis: Foundations and Extensions, 3rd ed. Cambridge University Press: NYC. (a prior edition is available as a course file on Canvas) There is an RU library site for learning R that includes links to online video resources. Rutgers also has access to certified courses for R on LinkedIn.
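The economic-impact analysis at the heart of the Miller & Blair text is the Leontief quantity model: total output x solves x = Ax + f, so x = (I − A)⁻¹f. The course works in R; the sketch below uses Python purely for illustration, with a made-up two-sector economy.

```python
import numpy as np

# Hypothetical two-sector economy (numbers are made up for illustration):
# A is the technical-coefficients matrix (input required per unit of output),
# f is the final-demand vector.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
f = np.array([100.0, 200.0])

# Leontief quantity model: x = A x + f  =>  x = (I - A)^{-1} f
L = np.linalg.inv(np.eye(len(A)) - A)   # the Leontief inverse
x = L @ f                               # total output by sector

# Column sums of L give each sector's simple output multiplier,
# one way to identify industries important to an economy.
multipliers = L.sum(axis=0)
```

The same calculation in R is a one-liner with `solve(diag(2) - A, f)`, which is the form the lab exercises build toward.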
Econ 323 Comprehensive Problem Set (CPS) Block 3. This problem set is made up of three blocks. Blocks 1 and 2 were published previously. This block corresponds to material between MT2 and MT3. The CPS is optional. All three blocks are required to complete the CPS. If a student elects to do it, the CPS score may replace one of the three midterm exams. The CPS can replace a low midterm score, or, if a student has not taken one of the exams for some reason, it can fill in for the missing test. It is fine if a student wishes to opt out of MT3 and use the CPS to fill in the gap. As described in the syllabus and in class, the CPS is intended to be incrementally more challenging than the "standard" practice problems or exam problems we do routinely. You can think of it as a "stretch" assignment. All questions will have a small novelty or two that the student needs to work out on their own. The problems should then seem harder than our "standard" work. Answer each of the following two questions. 1. (50 points) We will imagine there being two markets for computers: Home computers and Business computers. The curves in these markets are linear, but I will not tell you what those formulas are [you don't need to know them to answer the questions]. However, I will tell you that at the equilibrium, the elasticity of demand for home computers is −2.5, the elasticity of demand for business computers is −0.90, and the elasticity of supply of computers for both purposes is 1. a. (10) A per-unit tax of $200 is imposed on the suppliers of computers. How much does the buyer price increase in each market? You can and should do this without knowing anything about the pre-tax equilibrium, if you use the elasticity formulas we covered in lecture. b. (10) Suppose the untaxed market equilibrium price and quantity in the home computer market are $850 and 10 million, respectively. In the business market, the untaxed market equilibrium price and quantity are $1200 and 15 million, respectively.
What is the total revenue from the tax? c. (10) What is the deadweight loss of the $200 tax? d. (10) Comment on the relative sizes of the DWL in the two markets. Why are they different? e. (10) Imagine that you could tax the consumer and business markets separately. That is, a different tax for each market. Call these taxes t_h and t_b. What would these taxes be if you wanted to generate the same level of revenue as part b, while minimizing the DWL? [To answer this part, it will greatly help if you use the formula for the marginal deadweight loss shown briefly in class. Since the derivation of that required some calculus and I don't want you to have to derive it on your own, I'll write it down right here for you to use: 2. (50 points) As described in lecture, the government can incentivize certain types of behaviors by allowing spending on those behaviors to be deducted from income. One example of this is the Mortgage Interest Tax Deduction (MITD). During a year, homeowners can subtract any spending on the interest payments from their mortgages from their income, lowering their income and their tax bill. In this problem we will explore this deduction in some detail, and how it may impact decisions on how much to borrow to purchase a house. This problem requires a fair bit of setup. I will walk you through it. Take a look at the mortgage calculator linked below to help answer the following. https://www.bankrate.com/calculators/mortgages/amortization-calculator.aspx Suppose the value of a property is $500k (this number is not necessarily the loan amount, and will change for different parts of the problem), the interest rate is 5% (approximately the correct interest rate as of this writing), and the length of the loan (the "loan term") is 30 years. Set the loan start date for January. Leave any other values at the default settings.
In the center of the web page of the calculator, you can choose to view “Chart” or “Schedule.” The Chart is nice, but you’ll need the Schedule because that has the numbers. Throughout the problem, the marginal tax rate for the borrower is 33%. Assume the individual has $500k cash on hand, and any of this money that remains after taking the mortgage/making mortgage payments is invested at the interest rate 3%. The earnings of the investments can be taxed each year at the marginal rate. The value of the property also grows at rate 3% per year. This means that as the borrower repays the loan and starts building principal, the value of that principal goes up at the same rate as other investments. However, the increased value of the property is not taxed, as the borrower doesn’t actually receive that increased value until or if they sell the property in the future. Finally, assume that the year’s mortgage payment is paid to the bank from cash on hand at the START of the year. This turns out to be important if we want to make comparisons. Homeowners are able to deduct the amount of interest paid on their mortgage when calculating taxes. NOTE: There are 4 sets of calculations you will make before you are finished. I recommend you set up your answers in an excel spreadsheet. It will make the calculations much easier and faster. i. (10) Suppose there is no MITD, and the homeowner borrows the full value of the property ($500k). According to the mortgage calculator, for the first year: a) How large is the annual mortgage payment? How much interest has been paid on the mortgage? b) How much principal has been paid off by the borrower? c) The amount of the annual mortgage payment from part a) was paid at the beginning of the year. That reduces the cash available to invest. How much cash gets invested? What is the pre-tax value of the cash investment at the end of the year? How much tax is owed on this investment? What is the after-tax value of the investment after one year? 
d) Add up the values of all the investor’s assets at the end of year one. How has this value changed over the year? ii. (10) Suppose there is a MITD, the homeowner borrows the full value of the property. Repeat a-d from part i. iii. (10) Not surprisingly, you hopefully saw in ii that the deduction is a boon to the homebuyer. Now, suppose the buyer makes a down payment of 20%, or $100k. Assume the MITD is not available. She invests the remaining cash at 3%. Repeat a-d from part i. in this case. iv. (10) Once again, suppose the buyer makes a down payment of 20%. Assume the MITD is available. Repeat a-d. v. (10) Comment on/compare your results for the different cases.
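The calculator's first-year numbers come from the standard fixed-rate amortization formula, so if you prefer to build your spreadsheet from scratch you can reproduce them directly. A sketch, assuming monthly payments (the usual US mortgage convention, which the online calculator also appears to use):

```python
def first_year(principal, annual_rate, years):
    """Fixed-rate amortization: monthly payment from the annuity formula,
    then the first year's interest/principal split by walking 12 payments."""
    r = annual_rate / 12                       # monthly interest rate
    n = years * 12                             # total number of payments
    pay = principal * r * (1 + r) ** n / ((1 + r) ** n - 1)
    balance, interest = principal, 0.0
    for _ in range(12):                        # first twelve payments
        month_int = balance * r                # interest accrued this month
        interest += month_int
        balance -= pay - month_int             # remainder pays down principal
    return pay * 12, interest, principal - balance

# $500k loan at 5% over 30 years, as in part i
annual_pay, year1_interest, year1_principal = first_year(500_000, 0.05, 30)
```

With these three numbers in hand, the MITD cases just change which amounts reduce taxable income; the cash-investment side of parts c) and d) can be tacked on in the same spreadsheet or script.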
Lab Report 4: Litter Lab – Presentation Rubric

TITLE SLIDE (5 pnts)
☐ Contains a short, informative title
☐ Identifies authors
☐ Identifies course (ENS 201: Fundamentals of Environmental Science I)
☐ Includes academic affiliation

INTRODUCTION (15 pnts)
☐ Starts with a general introduction to the main topic. This should answer the question "Why do we care about this?" The information presented puts the study into a broader context.
☐ Second slide clearly identifies the objectives and hypotheses (at least 3) of the study.
☐ Time is 2-3 minutes (includes Title Slide).

MATERIALS AND METHODS (15 pnts)
☐ Contains all relevant information to enable the reader to repeat the procedure
☐ Contains a description of the study sites and photographs so that readers can locate them and understand how they differ
☐ Contains a description of the assessment procedure used, including the scoring system and how items were classified
☐ Routine procedures are not explained
☐ No preview is given of how the data will be organized or interpreted
☐ Time is 2-3 minutes.

RESULTS (25 pnts)
☐ No explanation is given for the results
☐ A minimum of three graphs is presented, along with at least one additional finding that is not graphed (it does not have to be graphed, but can be)
☐ Some text is included that identifies the major findings from the graphs (Results are not just a series of graphs)
☐ All plots are complete, including axis labels, error bars, and any other information needed to understand the plot
☐ Figures and/or Tables include an informative title and can be understood apart from the text
☐ Time is 2-3 minutes.

DISCUSSION (15 pnts)
☐ Results for each parameter are briefly restated and interpreted. The interpretation should also include the identification of any sources of litter or other factors that may be influencing the parameters you measured.
☐ An assessment of the overall condition of each site category is conducted.
☐ Potential mitigative measures are identified that could improve the condition of the site categories.
☐ Errors and inconsistencies are pointed out. Any sources of error are explained, and suggestions for future studies are included.
☐ Time is 2-3 minutes.

ACKNOWLEDGEMENTS (5 pnts)
☐ Thank anyone who helped with your study (group members, anyone who listened to you practice this talk).

PRESENTATION STYLE (10 pnts)
☐ Presenters speak clearly and at an appropriate volume.
☐ Presenters appear well-prepared (have rehearsed the talk) and have a professional demeanor.
☐ Vocabulary used is appropriate: not too much scientific jargon, and not too conversational.
☐ The slide background and color schemes are visually appealing but are not difficult to see/read and are not distracting.
☐ Text is limited to key points listed as text bullets.
☐ The use of animations is limited so as not to be distracting.

PEER REVIEW (10 pnts)
☐ Average score from 0-10 from the other 3 members of your group for your contribution to the presentation.
Report #3, Correlation & Regression – INSTRUCTIONS

Learning Goals:
· Conduct correlation and regression analyses with JASP, to quantify bivariate associations and predictions, respectively.
· Perform descriptive and inferential bivariate statistics.
· Illustrate your analysis with bivariate descriptive statistics and plots.
· Write your conclusions in both APA format and "newspaper" format, which is described in your text, in posted lectures, and in the posted Hypothesis Testing Guides.

Background Information: The research questions posed below pertain to the experiment studying interpersonal perception presented in class, outlined as follows:
Population: All humans
Sample: Students in PSYC51A
Study Design:
· All subjects are presented 9 IPT scenes which depict real social interactions; all subjects answer a multiple-choice question (3 options) about each social interaction (0-9 correct).
· Subjects are randomly assigned either to view an audio-visual movie of the social interactions (verbal + lots of non-verbal information) or to read a transcript (verbal + a tiny bit of non-verbal information).
· All subjects, in both presentation modes, judged scenes with 3 types of social interactions – affiliation, deception, dominance (0-3 correct per type of interaction).

Data: The data file is posted in the Moodle module "Data Analysis Reports – Instructions and Assignments" in a link entitled "IPT Data-REAL". DO NOT use "IPT Data-PRACTICE" by mistake.

Research questions: Report #2 analyzed the judgments about the three types of social interactions – dominance, affiliation, or deception – as three levels of one independent variable in a repeated-measures design. The figure on the right is from Report #2, which found that interpersonal perception accuracy differed significantly for different types of social interactions, F(1.88, 184.07) = 30.406, p
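JASP computes the correlation and regression output for you, but the quantities it reports are simple to define. A plain-Python sketch of Pearson's r and the least-squares prediction line, using hypothetical scores rather than the IPT data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation: the sum of cross-deviations scaled by the
    square root of the two sums of squared deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def regression_line(x, y):
    """Least-squares line y-hat = b0 + b1*x: slope b1 = Sxy / Sxx,
    intercept b0 chosen so the line passes through the means."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    return b1, my - b1 * mx

# Hypothetical paired scores (NOT the IPT data set)
x = [3, 5, 6, 8]
y = [4, 5, 7, 8]
r = pearson_r(x, y)
slope, intercept = regression_line(x, y)
```

These are the same r, slope, and intercept JASP reports in its Correlation and Linear Regression tables, which you will then interpret in APA and "newspaper" formats.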
Module: The Music Business. Please select one of the following prompts and write a 4,000-word essay response. The 4,000-word count does not include the bibliography; the final word count must be within +/- 5%.

Final assessment questions

1) Select an issue discussed in the module related to inequality, fairness, diversity, and/or inclusion in the music industry and imagine an alternative. This alternative can be in the form of a technology you invent (such as an app) or a novel approach to contracts, labels, management, royalties, and so on. In your essay, you should consider how your technology/approach intervenes in relation to at least two of the following themes: making music, managing music, marketing music, and monetising music; and you should consider how your technology/approach is suitable (or not) across multiple geographic locations, including discussing how it might work in one case study inside and one case study outside Europe/North America.
Tips for writing this essay: First thoroughly explain and dissect your chosen issue, citing appropriate sources and examples. Be sure to explain how this issue came to be (what is its history and context) and what its consequences are (consider real-world case studies). A thorough explication and analysis of the existing issue should comprise at least 25% but no more than 40% of the essay.

5) To what extent is digital technology transforming the music business? Consider carefully in what ways the digital turn presents new alternatives and to what extent it serves as a continuation of previous practices and structures of power. To support your answer, consider at least two of the following themes: making music, managing music, marketing music, and monetising music; and discuss at least two concrete case studies, one from inside and one from outside Europe/North America.

6) What roles do globalization and capitalism play in the development of music as 'business' and in the power asymmetries upon which this business relies?
Discuss how globalization and capitalism intersect in at least two of the following themes: making music, managing music, marketing music, monetising music; and explore the implications of this intersection using at least two concrete case studies, one inside and one outside Europe/North America.

7) In what ways do the current legal logics (e.g. contracts, copyright, and so on) dominant in the North American and European music industries reflect global (universal) vs. culturally specific values? Discuss the arguments for and against current legal logics, considering how they manifest in at least two of the following themes: making music, managing music, marketing music, monetising music. To support your claim, discuss at least two case studies, one example inside and one example outside Europe/North America.
Tips for writing this essay: Be sure to thoroughly describe and analyse the legal logics as they currently work in the North American/European music industries. This explication should comprise at least 25% but no more than 40% of the essay.
Unit 4 lab practical – Exploring wave interference. Part 1 - Prior Knowledge Question: (Do this BEFORE starting the practical.) On a trip to the lake, a boy starts to skim stones across the water. He measures how many bounces he is capable of by the pattern left in the water by the stone. 1. The ripples created by the stones are an example of waves – in the space below, describe what a wave is in your own words. 2. Using your understanding from the definition in part 1, explain how a skimming stone creates waves. Part 2 – Exploring waves: Waves come in many different forms: water waves, light waves, and sound waves are just some examples. The Wave Interference sim allows you to explore the key features and characteristics of waves. You can explore the simulation by going to the following link: https://phet.colorado.edu/sims/html/wave-interference/latest/wave-interference_en.html 1. To begin: A. Select the 'Waves' simulator. B. Make sure that water waves is the mode selected (this is the default setting when you open it). C. Make sure the 'Graph' setting is switched on. 2. Play around with the sliders for frequency and amplitude - see how each of these affects the ripples being created. 3. Exploration and experimentation 1: Use your own words and captured pictures from the simulation to show how you can produce the following: a. Waves of the longest wavelength possible. b. Waves of the shortest wavelength possible. c. Waves of the tallest wave possible. d. Describe and explain your experiments to make waves of different wavelengths and heights (including what views and tools you used and why). Support your explanation with pictures from the sim. 4. Exploration and experimentation 2: Use your own words and captured pictures from the simulation to show how you can measure the following: a. Period of the longest, shortest, and tallest wave possible. b. Speed of the longest, shortest, and tallest wave possible (recall our wave equation used to calculate the velocity of a wave). c.
Summarise your understanding of wave characteristics and behaviours by comparing the longest, shortest, and tallest waves. (Use the vocabulary words: wavelength, wave speed, amplitude, frequency). Part 3 – Waves and interference patterns: Part 3A: Sound waves: 0. Set up: a. Change the simulator to the 'Interference' sim. b. Set the type of wave as 'sound' waves. This is done by selecting the speaker on the right-hand side. c. Make sure the 'graph' setting is switched off. d. Make sure only the 'wave' view is turned on. 1. Predict: Look at the following patterns: they were created by adjusting only one setting – which setting do you think it was? 2. Exploration and experimentation 1: Test your prediction by trying to replicate the patterns seen above – was your prediction from section 1 correct? If yes, explain how you created the patterns; if no, explain which settings you used to create the pattern and how. 3. Exploration and experimentation 2: Do you think there is more than one way of replicating the patterns from above? Experiment with other settings and see if you can reproduce the patterns once again; explain your process and what did and didn't work. Part 3B: Light waves 1. Set up: a. Change your wave type to light waves. This is done by selecting the laser pointer on the right-hand side. b. Change your light frequency to whatever colour you prefer. c. Make sure that 'graph' and 'screen' are both unselected. 2. Predict: Look at the diagram below and highlight where you think the points of constructive and destructive interference are found. 3. Exploration and experimentation 2: Create a similar wave pattern using your simulator; use the light detector tool (in the top right box) to find points of constructive and destructive interference. 4. Explain: How did you recreate the pattern, and how did you use the light detector tool? Use the space below to describe your method and results. Part 4 – Diffraction and slit interference: 0. Set up: a.
Change the simulator to the 'Slits' sim. b. Change your wave type to light waves. This is done by selecting the laser pointer on the right-hand side. c. Select 'no barrier' from the menu. 1. Describe: How are the waves we are using here different to the waves generated in our previous simulation? 2. Predict: What do you think will happen to the wave when we introduce a barrier? 3. Describe: What is happening to the waves when we introduce a single slit? A: Explain the phenomenon in terms of the bending of light waves. (Hint: Recall Huygens' theory of wavelets). B: What would we expect to see if we introduce a screen at the far end of the experiment? 4. Describe and explain: If we introduce a second slit, what happens to the pattern that we see on the screen? What is creating the pattern? 5. Calculate: Using the red light and the distance-measuring tool, see if you can calculate the wavelength of the light. (Recall our relationship is as follows: (PS1 – PS2) = mλ)
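The final calculation follows directly from the path-difference relation given above: for the m-th bright fringe, the difference in the distances from the two slits to point P equals m wavelengths. A sketch with hypothetical distances of the kind you might read off the sim's measuring tool (the numbers below are made up, not taken from the simulation):

```python
def wavelength_from_paths(ps1, ps2, m):
    """Constructive interference condition: PS1 - PS2 = m * wavelength,
    so the wavelength is the path difference divided by the order m."""
    return abs(ps1 - ps2) / m

# Hypothetical measured path lengths to the first (m = 1) bright fringe
lam = wavelength_from_paths(ps1=2.62e-6, ps2=1.96e-6, m=1)
# lam is about 6.6e-7 m (660 nm), in the red part of the spectrum
```

If your measured path difference gives a wavelength outside roughly 620-750 nm for red light, it is worth re-measuring the two distances.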