Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] ACCT6003 Fundamental Analysis for Equity Investment Week 8 Assignment on financial reporting analysis

ACCT6003 Fundamental Analysis for Equity Investment Week 8 Assignment on financial reporting analysis

For this weekly assignment, you were required to demonstrate critical thinking in addressing the following two questions.

Question 1

Consider the following thought experiment. There are two competing software development start-up companies that develop identical software. One of these companies is listed on the Australian Stock Exchange and the other is listed on the New York Stock Exchange. The two companies do not manufacture anything and carry no inventory. They also have no material property, plant or equipment, so PPE is not a consideration at all. Also, the two companies sell only in cash, pay all suppliers only in cash, and buy assets in cash using annual equity injections. Assume that the two companies have completely identical business fundamentals in all operational and financing aspects. They have identical products and services, research and development operations, fixed assets, cost of capital, leverage, credit ratings, suppliers, customers, software engineers, shareholders, debtholders, and managers. The only difference between the two companies is the way in which they do their accounting.

• Company A publishes reports in Australia and follows the most conservative financial reporting strategy possible, by choosing accounting policies that would report the lowest possible revenue and assets and the highest possible expenses and liabilities.
• Company B publishes reports in the US and follows the most aggressive financial reporting strategy possible, by choosing accounting policies that would report the highest possible revenue and assets and the lowest possible expenses and liabilities.

Required:
(a) Explain the differences between the two companies described in the thought experiment with respect to their financial reporting. Also explain how you would approach the problem of comparing the performance of these two companies. [2 marks]
(b) Explain how the knowledge that a technology start-up company follows a conservative or aggressive financial reporting strategy can be used to add value to an investor's investment strategy. Present evidence to support your arguments. [1 mark]

The word limit for answering Question 1 is 300 words, excluding any references, tables or graphs.

Question 2

For the company that you have been assigned to analyse in the Final Take-Home Project, discover evidence and present an analysis that demonstrates whether your company follows a conservative or an aggressive financial reporting strategy. It is entirely up to you how to approach the analysis of this question. [3 marks]

The word limit for answering Question 2 is 300 words, excluding any references, tables or graphs.

$25.00 View

[SOLVED] COMP9315 25T1 Assignment 2 Testing Instruction

COMP9315 25T1: Assignment 2 Testing Instruction

Introduction

The following gives some ideas for testing your Assignment 2 code. It does not give you a simple scripted collection of tests. We expect you to think more about the process of developing methods for verifying the correctness of your code, not simply running it through a testing harness like run_test.py.

Warning: There are places in this system where you may observe different but still correct behaviour from your code compared to the examples below. Sometimes you should expect to see the exact output as shown. Sometimes it is acceptable to have variations on the shown output. For example, the hash value for a given set of attribute values will always be the same. Query results should contain exactly the same set of tuples as shown, although the order may be different. The numbers of buckets and pages in the stats output may vary depending on precisely how you implement splitting.

Anywhere in the code that employs random number generation can lead to different results on different machines. In the supplied code, the only place that random numbers are used is in gendata. This means that if you run gendata on two different machines with the same set of command-line parameters, you may get a different output. If you want to make sure that you're loading the same set of data each time, run gendata and store the result in a file, then use that file to populate your databases. To give a stable set of data for you to play with, there are two files produced with gendata that you can download and use:

cp /web/cs9315/25T1/assignments/ass2/tests/data0.txt Your/Test/Directory
cp /web/cs9315/25T1/assignments/ass2/tests/data1.txt Your/Test/Directory

In order to test your code more completely, you should also generate some data of your own. In particular, generate very large data files to ensure that your code continues to function properly at scale.

Task 1: Multi-attribute Hashing

After implementing multi-attribute hashing, you can test it as follows:

$ rm -f R.*
$ make clean && make
$ ./create R 5 2 "0,1:1,1:2,1:3,1:4,1"   -- info about the choice vector
$ ./insert R < data0.txt
hash(1,kaleidoscope,hieroglyph,navy,hieroglyph) = 10010011 11111010 10100101 01111101
hash(2,floodlight,fork,drill,sunglasses) = 01001000 11100111 10010111 10101110
hash(3,bridge,torch,yellow,festival) = 01011000 10101011 11101011 11101110
hash(4,chief,carrot,gasp,rainbow) = 10110011 00111011 00110010 01101111
hash(5,rope,air,crystal,treadmill) = 10011100 11001000 10010000 01001001
hash(6,solid,sandpaper,sandpaper,sword) = 11011111 11011111 11000010 01010000
hash(7,surveyor,apple,leg,vampire) = 11001101 11000101 01100111 10010001
hash(8,sword,carpet,television,post) = 11101000 01100101 10101011 11010110
hash(9,surveyor,bank,spotlight,maze) = 10000110 10000011 11110010 10010101
hash(10,woman,eraser,planet,planet) = 01111010 01010001 10010000 01011101

You should see exactly the hash values above. Unless you change the print statements at the end of the tupleHash() function, you will not see all of the attribute values.

The stats command should produce the following output:

$ ./stats R
Global Info:
#attrs:5  #pages:2  #tuples:10  d:1  sp:0
Choice vector
0,1:1,1:2,1:3,1:4,1:0,31:1,31:2,31:3,31:4,31:0,30:1,30:2,30:3,30:4,30:0,29:1,29:2,29:3,29:4,29:0,28:1,28:2,28:3,28:4,28:0,27:1,27:2,27:3,27:4,27:0,26:1,26
Bucket Info:
#  Info on pages in bucket (pageID,#tuples,freebytes,ovflow)
[0]  (d0,4,881,-1)
[1]  (d1,6,823,-1)

If it shows extra pages, then your splitting is happening too soon.
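As a rough illustration of what the choice vector in Task 1 is doing, here is a minimal Python sketch (not the assignment's C code; hash_attr is a stand-in hash, so its output will not match the values above). The idea is that entry i of the choice vector names which bit of which attribute's hash supplies bit i of the combined tuple hash; the vector printed by stats has one entry per bit of the combined hash, while the sketch only uses the entries it is given.

import zlib

def hash_attr(value: str) -> int:
    # placeholder 32-bit hash; the assignment supplies its own hash function
    return zlib.crc32(value.encode()) & 0xFFFFFFFF

def parse_choice_vector(cv: str):
    # "0,1:1,1:2,1:3,1:4,1" -> [(0,1), (1,1), (2,1), (3,1), (4,1)]
    return [tuple(int(x) for x in pair.split(",")) for pair in cv.split(":")]

def tuple_hash(attrs, choice_vector):
    hashes = [hash_attr(a) for a in attrs]
    combined = 0
    for i, (attr, bit) in enumerate(choice_vector):
        if (hashes[attr] >> bit) & 1:   # take bit `bit` of hash(attrs[attr]) ...
            combined |= 1 << i          # ... and place it at bit i of the result
    return combined

if __name__ == "__main__":
    cv = parse_choice_vector("0,1:1,1:2,1:3,1:4,1")
    t = "1,kaleidoscope,hieroglyph,navy,hieroglyph".split(",")
    print(format(tuple_hash(t, cv), "032b"))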
Task 2: Querying (Selection and Projection)

After implementing the Query ADT, you can test whether your queries work as follows:

$ rm -r R.*
$ ./create R 3 2 "0,0:1,0:2,0:0,1:1,1:2,1"   -- info about the choice vector
$ ./insert R < data1.txt      -- loads 2000 (id,v,w) tuples
                              -- id values range from 1000 .. 2999

You can then test your Query ADT by using the query command to ask queries of different types. Note that since the results come out effectively in random order (we're using hashing), we pipe the output through the sort command. (** The order is not "random", but depends on how you do your splitting.)

$ ./query '*' from R where '1042,?,?' | sort
1042,child,compact-disc

$ ./query '*' from R where '?,horoscope,?' | sort
1172,horoscope,slave
1494,horoscope,water
2534,horoscope,cycle
2538,horoscope,dress
2650,horoscope,surveyor
2997,horoscope,famine

$ ./query '*' from R where '?,?,shoes' | sort
1169,leg,shoes
1225,chair,shoes
1266,rope,shoes
1350,finger,shoes
1624,school,shoes
1770,bee,shoes
1978,woman,shoes
2550,chair,shoes
2982,bed,shoes

$ ./query '*' from R where '?,b%,d%' | sort
1949,boy,desk
2157,bird,database
2162,box,desk
2236,baby,drink
2238,bee,drill
2669,boss,double

$ ./query '*' from R where '101%,?,?' | sort
1010,solid,sandpaper
1011,sandpaper,sword
1012,surveyor,apple
1013,leg,vampire
1014,sword,carpet
1015,television,post
1016,surveyor,bank
1017,spotlight,maze
1018,woman,eraser
1019,planet,planet

$ ./query '1,2' from R where '101%,?,?' | sort
1010,solid
1011,sandpaper
1012,surveyor
1013,leg
1014,sword
1015,television
1016,surveyor
1017,spotlight
1018,woman
1019,planet

$ ./query '2,1' from R where '101%,?,?' | sort
leg,1013
planet,1019
sandpaper,1011
solid,1010
spotlight,1017
surveyor,1012
surveyor,1016
sword,1014
television,1015
woman,1018

$ ./query '2,1' from R where '%%101%,?,?' | sort
leg,1013
mosquito,1101
planet,1019
sandpaper,1011
skeleton,2101
solid,1010
spotlight,1017
surveyor,1012
surveyor,1016
sword,1014
television,1015
woman,1018

$ ./query '2' from R where '101%,s%o%r%,?' | sort
surveyor
surveyor
sword

All of the above queries produce one or more results. The following set of queries (should) produce no results:

$ ./query '*' from R where '42,?,?'
$ ./query '*' from R where '?,wombat,?'
$ ./query '*' from R where '?,?,wombat '
$ ./query '*' from R where '%,shoes,finger'
$ ./query '*' from R where '1001,chair,shoes'

If you want more test cases, it's easy to use grep on the data1.txt file to find what the results should be, e.g.

$ grep '^1042' data1.txt               -- gives same result as:  ./query '*' from R where '1042,?,?' | sort
$ grep ',shoes,' data1.txt             -- gives same result as:  ./query '*' from R where '?,shoes,?' | sort
$ grep ',shoes$' data1.txt             -- gives same result as:  ./query '*' from R where '?,?,shoes' | sort
$ grep ',pendulum,elephant' data1.txt  -- gives same result as:  ./query '*' from R where '?,pendulum,elephant'
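For the partial-match queries above, a query like '?,horoscope,?' only pins down the hash bits that come from attribute 1; the remaining bits have to be enumerated. Below is a minimal Python sketch of that idea (not the assignment's API; it ignores the split pointer, which makes some buckets use d+1 bits, and attributes given as '%' patterns would contribute no known bits, just like '?').

import zlib

def h(value: str) -> int:
    # placeholder 32-bit hash, as in the previous sketch
    return zlib.crc32(value.encode()) & 0xFFFFFFFF

def query_bits(known, choice_vector):
    """known: {attr_index: value} for attributes that are neither '?' nor a pattern.
    Returns (value, mask): mask bits are 1 where the combined hash bit is known."""
    value, mask = 0, 0
    for i, (attr, bit) in enumerate(choice_vector):
        if attr in known:
            mask |= 1 << i
            if (h(known[attr]) >> bit) & 1:
                value |= 1 << i
    return value, mask

def candidate_buckets(value, mask, depth):
    """Bucket numbers (depth low-order bits) consistent with the known bits;
    each unknown bit is tried as both 0 and 1."""
    unknown = [i for i in range(depth) if not (mask >> i) & 1]
    for combo in range(1 << len(unknown)):
        b = value & ((1 << depth) - 1)
        for j, pos in enumerate(unknown):
            if (combo >> j) & 1:
                b |= 1 << pos
        yield b

# e.g. for '?,horoscope,?' only attribute 1 is known:
cv = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]   # parsed "0,0:1,0:2,0:0,1:1,1:2,1"
v, m = query_bits({1: "horoscope"}, cv)
print(sorted(candidate_buckets(v, m, depth=2)))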
Task 3: Linear Hashing

A simple example of how splitting might work. Note that there are many equally-valid variations: insert before splitting, insert tuples from overflow pages first, etc. We assume a very simple database that can hold c = 2 tuples in each page. We denote pages by (#tuples, [list-of-tuples], overflow-page). We also show just the ID to identify tuples and use an extremely simple hash function, and we show just the lower-order 4 bits of hash values; higher-order bits are not relevant for such a small database. None of this affects the general principles being illustrated; they are equally applicable to the assignment.

ID   hash      ID   hash      ID   hash
001  ...0001   006  ...0110   011  ...1011
002  ...0010   007  ...0111   012  ...1100
003  ...0011   008  ...1000   013  ...1101
004  ...0100   009  ...1001   014  ...1110
005  ...0101   010  ...1010   015  ...1111

We assume that a split occurs whenever we try to insert and (r % 5) == 0, where r is the total number of tuples. This split frequency is different to what's specified in the assignment but, once again, it's the principles that matter, not the precise values.

Initial state of database

sp = 0, d = 1
Data Pages           Overflow Pages
[0] (0,[],-)
[1] (0,[],-)

After inserting first five tuples using 1 bit from hash value (d=1)

sp = 0, d = 1
Data Pages           Overflow Pages
[0] (2,[002,004],-)  [0] (1,[005],-)
[1] (3,[001,003],0)

While inserting 006, need to split. Split bucket [0]. Tuples to be redistributed: [002,004]. First, clear bucket [0]:

Data Pages           Overflow Pages
[0] (0,[],-)         [0] (1,[005],-)
[1] (3,[001,003],0)

Add new data page [2]. Then re-insert [002,004] and insert [006] using 2 bits of hash value. Then increment sp.

sp = 1, d = 1
Data Pages           Overflow Pages
[0] (1,[004],-)      [0] (1,[005],-)
[1] (3,[001,003],0)
[2] (2,[002,006],-)

Reminder: once sp = 1, pages [0] and [2] are indexed using 2 bits from the hash value, and page [1] is indexed using just 1 bit. I.e. if the lower-order bit of the hash is 1, then the tuple goes into bucket [1]; if the lower-order bit is 0, then we consider 2 bits to determine whether the tuple goes into page [0] or page [2] (hash bits 00 or 10).

Then insert tuples 007 .. 010. No splits.

sp = 1, d = 1
Data Pages           Overflow Pages
[0] (2,[004,008],-)  [0] (2,[005,007],1)
[1] (3,[001,003],0)  [1] (1,[009],-)
[2] (2,[002,006],2)  [2] (1,[010],-)

While inserting 011, need to split. Split bucket [1]. Tuples to be redistributed: [001,003,005,007,009]. First, clear bucket [1]:

sp = 1, d = 1
Data Pages           Overflow Pages
[0] (2,[004,008],-)  [0] (0,[],-)
[1] (0,[],-)         [1] (0,[],-)
[2] (2,[002,006],2)  [2] (1,[010],-)

Add new data page [3]. Then re-insert [001,003,005,007,009] using 2 bits of hash value. Then insert 011. And increment sp; but sp has reached a new power of two, so reset sp = 0 and increment d.

sp = 0, d = 2
Data Pages           Overflow Pages
[0] (2,[004,008],-)  [0] (1,[009],-)
[1] (2,[001,005],0)  [1] (1,[011],-)
[2] (2,[002,006],2)  [2] (1,[010],-)
[3] (2,[003,007],1)

Insert 012 .. 015. No splits.

sp = 0, d = 2
Data Pages           Overflow Pages
[0] (2,[004,008],3)  [0] (2,[009,013],-)
[1] (2,[001,005],0)  [1] (2,[011,015],-)
[2] (2,[002,006],2)  [2] (2,[010,014],-)
[3] (2,[003,007],1)  [3] (1,[012],-)

Giving examples with hundreds or thousands of tuples, using the settings for the assignment, is unlikely to be helpful, given the variety of valid ways of doing splitting. To illustrate the point, almost every correct solution will produce different stats output after inserting the same data.
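To make the split bookkeeping concrete, here is a toy Python sketch that follows the same rules as the example (hash(ID) = ID, split attempted on insert when r % 5 == 0). It is not the assignment's page/file structure: page capacity and overflow pages are not modelled, and each bucket is just a list, so it reproduces only the tuple-to-bucket distribution of the final state above, not the page layout.

def bucket_of(h, d, sp):
    b = h & ((1 << d) - 1)                 # use d low-order bits
    if b < sp:                             # bucket already split this round:
        b = h & ((1 << (d + 1)) - 1)       # use d+1 bits instead
    return b

class LinHashFile:
    def __init__(self):
        self.d, self.sp = 1, 0
        self.buckets = [[], []]
        self.r = 0                         # total number of tuples

    def insert(self, tid):
        if self.r > 0 and self.r % 5 == 0:
            self.split()
        self.buckets[bucket_of(tid, self.d, self.sp)].append(tid)
        self.r += 1

    def split(self):
        old = self.buckets[self.sp]
        self.buckets[self.sp] = []
        self.buckets.append([])            # new bucket sp + 2^d
        self.sp += 1
        if self.sp == (1 << self.d):       # finished a full round of splits
            self.sp = 0
            self.d += 1
        for tid in old:                    # redistribute using the extra bit
            self.buckets[bucket_of(tid, self.d, self.sp)].append(tid)

if __name__ == "__main__":
    f = LinHashFile()
    for tid in range(1, 16):
        f.insert(tid)
    for i, b in enumerate(f.buckets):
        print(i, sorted(b))    # matches the final bucket contents shown above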

$25.00 View

[SOLVED] System Identification Session

System Identification – Matlab Session Introduction: The aim of this session is to find a discrete time (z-operator) transfer function (TF) model using the systems identification approach which predicts dynamic behaviour of a motor. This model can be used to design a controller in the next lab session. The data necessary to find this model should be initially generated, for which a Simulink model of a motor should be run. As a precursor, task 1 aims to do the above, in a simpler system. Exercise Aims: 1.   To practice the practical use of system identification, using Matlab. 2.   To generate a suitable model of a DC motor (that can be used to design a PIP motor controller - i.e. to feed into lab 3 and the coursework). How to work: Work either individually or in pairs and keep notes for your own future reference in your notebook. Save copies of your files so that you can refer back to them and make a note of what is what so that it is useful if you need to come back to it. Make sure you record your final selected model – so that you have it for Lab 3. IMPORTANT – System Identification of the motor will be reported upon in your control coursework report – which will cover the System Identification and then the PIP Design (next lab). Files: Create a directory to work and download the files from CANVAS for this exercise. The Tasks to be undertaken Task 1 – “warm-up” - To test the methods on a known system similar to in lecture RD7. Note: You should spend max of 40 minutes on this task A Model of a “known system” is given. (open known_model.slx) 1.   Run System Identification tests by running the simulation and then estimating a large set of possible models -  ident_test_all.m will allow you to do this, have a good look through the code to see what it is doing 2.   Identify one or few suitable model structure(s) using the guidelines given in the lecture notes. 3.   Once you “select” a structure you can run ident_test.m to calculate the actual parameters for the transfer function and plot the response etc. a.   Estimate the parameters b.   Validate the model by running a different test (either change in the known_model_validation_expt.slx or use known_model.slx) Task 2 – Find the “best” TF model for a DC motor A Model of a DC motor is given which includes sensor noise. (open Motor_speed.slx). Run System Identification tests to   find TF between Volts and Speed. 1.   Select an appropriate sample time (the closed-loop bandwidth specified in the control part (Lab3) – will be at least 2Hz. In the lecture s we discussed that you'll need to sample somewhere between 30 and 100 times faster than the closed-loop bandwidth frequency. 2.   Select a suitable model structure using ident_test_all.m (This script will need to be modified slightly to allow you to do this). 3.   After correct model structure has been identified you can run ident_test.m (modify this too) to get the actual parameters and plot the response etc. a.   Estimate the parameters b.   Validate the model (Convince yourself you have the “correct” model....) – this may mean looking at 2 or 3 structures, comparing error plots etc. Make sure you have a record of your final model. You can save the A and B polynomials and the sample time dt. e.g. using the MATLAB command >> save mymodel.mat a b dt Optional Extra task: Repeat task 2 to find TF from Volts to Current – you will need to adapt the m-files to do this. Deliverables: The results will be discussed during the exercise (with support from tutors). 
The results will be part of the assessed coursework report that you will write (see separate coursework report guidance). Once you have successfully completed this Lab, the coursework will be straightforward (i.e. the system identification part of it is done, apart from generating graphs, diagrams and reporting).
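For step 1 of Task 2 (choosing the sample time), the arithmetic implied by the lab sheet is small enough to sketch. The only inputs are the 2 Hz closed-loop bandwidth and the 30-100 times-faster sampling guideline quoted above; a minimal Python sketch:

# Rough bounds on the sample time dt for the motor identification.
# Assumes a closed-loop bandwidth of 2 Hz and the "sample 30-100x faster
# than the closed-loop bandwidth" guideline from the lab sheet.
f_bw = 2.0                                   # closed-loop bandwidth, Hz
f_s_min, f_s_max = 30 * f_bw, 100 * f_bw     # sampling frequency range, Hz
dt_max, dt_min = 1 / f_s_min, 1 / f_s_max    # corresponding sample times, s
print(f"sample time between {dt_min:.4f} s and {dt_max:.4f} s")
# -> roughly 0.005 s to 0.017 s; a round value such as dt = 0.01 s sits in range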

$25.00 View

[SOLVED] ACCT 222 Department of Accounting and Information Systems

Department of Accounting and Information Systems
ACCT 222 REVISION FOR EXAM

STANDARD COSTS AND VARIANCE ANALYSIS

Do the assigned reading from Chapter 11.3-11.6 and look again at the tutorial and homework questions.

Portland Co.

The Portland Company's Ironton Plant produces pre-cast ingots for industrial use. Carlos Santiago, who was recently appointed general manager of the Ironton Plant, has just been handed the plant's income statement for October 20XX. The statement is shown below:

                                        Budgeted     Actual
Sales (5,000 ingots)                    $250,000     $250,000
Less variable expenses:
  Variable cost of goods sold *           80,000       96,390
  Variable selling expenses               20,000       20,000
  Total variable expenses                100,000      116,390
Contribution margin                      150,000      133,610
Less fixed expenses:
  Manufacturing overhead                  60,000       60,000
  Selling and administrative expenses     75,000       75,000
  Total fixed expenses                   135,000      135,000
Net income (loss)                       $ 15,000     $ (1,390)

* Contains direct materials, direct labour, and variable overhead.

Mr Santiago was shocked to see the loss for the month, particularly since sales were exactly as budgeted. He stated, "I sure hope the plant has a standard cost system in operation. If it doesn't, I won't have the slightest idea of where to start looking for the problem." The plant does use a standard cost system, with the following standard variable cost per ingot:

Mr Santiago has determined that during the month of October the plant engaged in the following activity:
• Purchased 25,000 kilograms of materials at a cost of $2.95 per kilogram.
• Used 19,800 kilograms of materials in production. (Finished goods and work in process inventories are nominal and can be ignored.)
• Worked 3,600 direct labour-hours at a cost of $8.70 per hour.
• Incurred variable overhead costs of $1.20 per hour, or a total cost of $4,320 for the month.

Required:
1. Compute the following variances for the month:
   a. Direct materials price and quantity variances.
   b. Direct labour rate and efficiency variances.
   c. Variable overhead spending and efficiency variances.
2. Summarise the variances which you computed in (1) above, by showing the net overall favourable or unfavourable variance for the month. What impact did this figure have on the company's income statement?
3. Pick out the two most significant variances which you computed in (1) above. Explain to Mr Santiago the possible causes of these variances, so he will know where to concentrate his and his subordinates' time.

RELEVANT COSTS FOR DECISION-MAKING

Part A

Heather Alburty purchased a previously owned, eight-year-old Subaru car for $8,900. Since purchasing the car, she has spent the following amounts on parts and labour:

New stereo system        $1,200
Trick paint                 400
New wide racing tyres       800
Total                    $2,400

Unfortunately, the new stereo doesn't completely drown out the sounds of a grinding transmission. Apparently, the Subaru needs a considerable amount of work to make it reliable transportation. Heather estimates that the needed repairs include the following:

Transmission overhaul    $2,000
Water pump                  400
Master cylinder work      1,100
Total                    $3,500

In a visit to a used car dealer, Heather has found a five-year-old Ford in mint condition for $9,400.
Heather has advertised and found that she can sell the Subaru for only $6,400. If she buys the Ford, she will pay cash, but she would need to sell the Subaru. Required: 1.    In trying to decide whether to restore the Subaru or buy the Ford, Heather is distressed because she already has spent $11,300 on the Subaru. The investment seems too much to give up. How would you react to her concern? 2.    Assuming that Heather would be equally happy with the Subaru or the Ford, should she buy the Ford, or should she restore the Subaru? (Source: Hansen & Mowen (7th ed.): 17-6) Part B Randy Stone, the manager of Specialty Paper Products Company, was agonizing over an offer for an order requesting 5,000 calendars. Specialty Paper Products was operating at 70 percent of its capacity and could use the extra business; unfortunately, the order’s offering price of $4.20 per calendar was below the cost to produce the calendars. The controller, Louis Barns, was opposed to taking a loss on the deal.  However, the personnel manager, Yatika Martin, argued in favour of accepting the order even though a loss would be incurred; it would avoid the problem of layoffs and would help maintain the community image of the company. The full cost to produce a calendar follows: Direct materials $1.15 Direct labour 2.00 Variable overhead 1.10 Fixed overhead 1.00 Total $5.25 Later that day, Louis and Yatika met over coffee. Louis sympathised with Yatika’s concerns and suggested that the two of them rethink the special-order decision. He offered to determine relevant costs if Yatika would list the activities to be affected by a layoff. Yatika eagerly agreed and came up with the following activities: notification costs to lay off approximately 20 employees ($25 per laid- off employee); increased costs of rehiring and retraining workers when the downturn was over ($150 per new employee). Required: 1.    Assume that the company would accept the order only if it increases total profits. Should the company accept or reject the order? Provide supporting computations. 2.    Consider the new information of activity costs associated with the layoff. Should the company accept or reject the order? Provide supporting computations. (Source: Hansen & Mowen (7th ed.): 17-11) Part C Orly Company produces two models of an industrial product that require the use of a laser-operated drilling machine. The laser-operated drilling machines owned by the firm provide a total of 12,000 hours per year. Model A-4 requires six hours of machine time, and Model M-3 requires three hours of machine time.   Model A-4 has  a  contribution margin of $24 per unit, and Model M-3 has a contribution margin of $15. Required: 1.    Calculate the optimal number of units of each model that should be produced, assuming that an unlimited number of each model can be sold. 2.    Calculate the optimal number of units of each model that should be produced, assuming that no more than 2,500 units of each model can be sold. (Source: Hansen & Mowen (7th ed.): 17-17) STRATEGY, PRICING AND REVENUE MANAGEMENT Do the assigned reading and look again at the lectures slides and the tutorial questions. FINANCIAL PERFORMANCE MEASUREMENT Do the assigned readings from Section 14.4 and Chapter 18 and look again at your lecture notes and the tutorial and homework questions. Automobile Products The manager of a division that produces add-on products for the automobile industry has just been presented the opportunity to invest in two independent projects.  
The first is an air conditioner for the back seats of vans and minivans. The second is a turbocharger. Without the investment, the division will have average assets for the coming year of $28.9 million and before-tax income of $3.179 million. The outlay required for each investment and the expected operating incomes are as follows:

                               Air Conditioner    Turbocharger
Before-tax operating income        $ 67,500         $ 89,700
Outlay                             $750,000         $690,000

Corporate headquarters will borrow up to $1.5 million for the automotive add-on division for further investments. The amount borrowed will be through unsecured bonds at a rate of 12 percent. The marginal tax rate is 25 percent.

Required:
1. Compute the Return on Investment (ROI) for each investment project.
2. Compute the budgeted divisional ROI for each of the following four alternatives:
   a. The air conditioner investment is made.
   b. The turbocharger investment is made.
   c. Both investments are made.
   d. Neither additional investment is made.
   Assuming that divisional managers are evaluated and rewarded on the basis of ROI performance, which alternative do you think the divisional manager will choose?
3. Compute the Residual Income (RI) for each investment project. Based on RI, which projects should the managers choose?
4. Suppose that the borrowing must be for the entire $1.5 million. Calculate the EVA of the two investments taken as a package. Based on EVA, are the investments profitable?

TRANSFER PRICING

A. Mano Enterprises

The Box Division of Mano Enterprises produces boxes that can be sold externally or internally to Mano's Candy Division. Sales and cost data on the most popular box are given below:

Unit selling price       $0.95
Unit variable cost       $0.60
Unit fixed cost *        $0.15
Practical capacity       500 000 units

* $75 000 / 500 000 units

During the coming year, the Box Division expects to sell 350 000 units of this box. The Candy Division currently plans to buy 150 000 units of the box on the outside market for $0.95 each. Neil Hansen, manager of the Box Division, has approached Martha Rasmussen, manager of the Candy Division, and offered to sell the 150 000 boxes for $0.94 each. Neil explained to Martha that he can avoid selling costs of $0.02 per box and that he would split the savings by offering a $0.01 discount on the usual price.

Required:
1. What is the maximum transfer price that the Candy Division would be willing to pay? What is the minimum transfer price that the Box Division would be willing to accept? Should an internal transfer take place? What would be the benefit (or loss) to the firm as a whole if the internal transfer takes place?
2. Suppose Martha knows that the Box Division has idle capacity. Do you think that she would agree to the transfer price of $0.94? Suppose she counters with an offer to pay $0.85. If you were Neil, would you be interested in this price? Explain with supporting computations.

B. Jacox Company

Jacox Company's Can Division produces a variety of cans that are used for food processing. The Nut Division of Jacox buys nuts, shells, roasts, and salts them, and places them in cans. It sells the cans of roasted nuts to various retailers. The most frequently used can is the 500g size. In the past, the Nut Division has purchased these cans from external suppliers for $0.60 each. The manager of the Nut Division has approached the manager of the Can Division and has offered to buy 200 000 500g cans each year.
The Can Division currently is producing at capacity and produces and sells 300 000 500g cans to outside customers for $0.60 each. Required: 1.    What is the minimum transfer price for the Can Division?  What is the maximum transfer price for the Nut division?  Is it important that transfers take place internally?  If transfers do take place, what should the transfer price be? 2.    Now assume that the Can division incurs selling costs of $0.04 per can that could be avoided if the cans are sold internally.  Identify the minimum transfer price for the Can Division and the maximum transfer price for the Nut Division.   Should internal transfers take place? If so, what is the benefit to the firm as a whole? 3.    Suppose you are the manager of the Can Division.  Selling costs of $0.04 per can are avoidable if the cans are sold internally. Would you accept an offer of $0.58 from the manager of the other division? How much better off (or worse off) would the Can Division be if this price is accepted? C.   Adler Industries Adler Industries is a vertically integrated firm with several divisions that operate as decentralised profit centres.  Adler's Systems Division manufactures scientific instruments and uses the products of two of Adler's other divisions.  The Board Division manufactures printed circuit boards (PCBs). One PCB model is made exclusively for the Systems Division using proprietary designs, while less complex models are sold in outside markets.  The products of the Transistor Division are sold in a well-developed competitive market; however, one transistor model is also used by the Systems Division.  The costs per unit of the products used by the Systems Division are presented below:   PCB Transistor Direct material $2.50 $0.80 Direct labour 4.50 1.00 Variable overhead 2.00 0.50 Fixed overhead 0.80 0.75 Total cost $9.80 $3.05 The Board Division sells its commercial product at full cost plus a 25 percent mark-up and believes the proprietary board made for the Systems Division would sell for $12.25 per unit on the open market.  The market price of the transistor used by the Systems Division is $3.70 per unit. Required: 1.    What is the minimum transfer price for the Transistor Division?  What is the maximum transfer price of the transistor for the Systems Division? 2.    Assume the Systems Division is able to purchase a large quantity of transistors from an outside source at $2.90 per unit.  Further assume that the Transistor Division has excess capacity.  Can the Transistor Division meet this price? 3.    The Board and systems divisions have negotiated a transfer price of $11 per printed circuit board. Discuss the impact this transfer price will have on each division. NON-FINANCIAL PERFORMANCE MEASUREMENT Do the assigned reading from Chapter 19 and look again at the lecture, tutorial and homework questions. A. Balanced scorecard measures Listed below are a number of balanced scorecard measures. a.    Number of new customers b.    Percentage of customer complaints resolved with one contact c.    Unit product cost d.    Cost per distribution channel e.     Suggestions per employee f.     Quality costs g.    Product functionality ratings (from surveys) h.    Cycle time for solving a customer problem i.     Strategic job coverage ratio j.     On-time delivery percentage k.    Percentage of revenues from new products Required: 1.    Classify each performance measure according to the following: •   perspective (e.g. 
customer or learning and growth) •   financial or nonfinancial •   subjective or objective •   external or internal, and •   lead or lag 2.    Discuss why it is sometimes difficult to classify these measures as lead or lag.  Now, pick any two measures where you have difficulty explaining whether they are lead or lag, and explain when they would be lead and when they would be lag measures. B. Lee Corporation's Balanced Scorecard Lee Corporation manufactures various types of colour laser printers in a highly automated facility with high fixed costs. The market for laser printers is competitive. The various colour laser printers on the market are comparable in terms of features and price. Lee believes that satisfying customers with products of high quality at low costs is key to achieving its target profitability. For 20XY, Lee plans to achieve higher quality and lower costs by improving yields and reducing defects in its manufacturing operations. Lee will train workers and encourage and empower them to take the necessary actions. Currently, a significant amount of Lee’s capacity is used to produce products that are defective and cannot be sold. Lee expects that higher yields will reduce the capacity that Lee needs  to  manufacture  products.   Lee  does  not  anticipate  that  improving  manufacturing  will automatically lead to lower costs because Lee has high fixed costs. To reduce fixed costs per unit, Lee could lay off employees and sell equipment, or it could use the capacity to produce and sell more of its current products or improved models of its current products. Lee’s balanced scorecard for the just completed fiscal year 20XX follows: Required: 1.    Was Lee successful in implementing its strategy in 20XX? Explain your answer. 2.    Is Lee’s balanced scorecard useful in helping the company understand why it did not reach its target market share in 20XX? If it is, explain why. If it is not, explain what other measures you might want to add under the customer perspective and why these measures are necessary. 3.    Would you have included some measure of employee satisfaction in the learning and growth perspective and new product development in the internal business process perspective? That is, do you think employee satisfaction and development of new products are critical for Lee to implement its strategy? Why or why not? Explain. 4.    Is there a cause-and-effect linkage between improvement in the measures in the internal business process perspective and the measure in the customer perspective? Why or why not? Explain. 5.    What problems, if any, do you see in Lee improving quality and significantly downsizing to eliminate unused capacity? SUSTAINABILITY & MANAGEMENT ACCOUNTING You will have been told in class and on Learn which readings are important and necessary - read them and take some summary notes.  Also read through the lecture notes and think about the issues discussed in class.
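Returning to the Automobile Products exercise above, here is a quick computational sketch in Python of the requested ROI, RI and EVA figures. Two conventions are assumed rather than stated in the question: the 12 percent borrowing rate is used as the charge for residual income, and EVA charges the after-tax cost of the $1.5 million borrowing; check both against the course materials.

# Figures taken from the Automobile Products problem above.
base_assets, base_income = 28_900_000, 3_179_000
projects = {"air conditioner": (67_500, 750_000),
            "turbocharger":    (89_700, 690_000)}
rate, tax = 0.12, 0.25

# ROI and RI per project (RI charged at the assumed 12% rate)
for name, (income, outlay) in projects.items():
    roi = income / outlay
    ri = income - rate * outlay
    print(f"{name}: ROI = {roi:.1%}, RI = {ri:,.0f}")

# Budgeted divisional ROI under the four alternatives
alts = {"a (AC only)": [projects["air conditioner"]],
        "b (TC only)": [projects["turbocharger"]],
        "c (both)":    list(projects.values()),
        "d (neither)": []}
for name, chosen in alts.items():
    inc = base_income + sum(i for i, _ in chosen)
    ast = base_assets + sum(o for _, o in chosen)
    print(f"{name}: divisional ROI = {inc / ast:.2%}")

# EVA of the two projects as a package, financed by the $1.5m borrowing
package_income = sum(i for i, _ in projects.values())
eva = package_income * (1 - tax) - (rate * (1 - tax)) * 1_500_000
print(f"package EVA = {eva:,.0f}")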

$25.00 View

[SOLVED] MARK205 Assessment 4 Research Report

MARK205 Assessment 4 Research Report 1. Introduction · This should be an updated version of your introduction, if required. · Introduce the research topic and provide background and context of the research topic. Including the organisation. This can be achieved through a situational analysis including the following: o A microenvironmental (SWOT) analysis and a macroenvironmental (PESTLE) analysis o A consumer analysis, i.e. identify who the main target market segment is for (a) that sector and (b) the client. · Notes: o This usually draws from non-academic sources of literature. o Work within the scope of your project, i.e. based on pragmatic considerations – the amount and type of information available to you & the word limitations of your report. o This overview is your formative research, usually based on secondary data and sources. · State the aims, objectives, and overall research question.  2. Literature Review and Hypotheses Development · This should be an updated version of your literature review and hypotheses, if required. · Review relevant theories, concepts, and existing research on the topic. · Identify research gaps and limitations in the literature to demonstrate the need for your research. · Note: o Review the academic literature, organising the findings into key themes. Each theme should be presented in each own section using numbered sub-headings. Each thematic section should end with identification of a gap in the knowledge base. Each research gap should lead to the development of a sub-research question. Each sub-research question should be answered by a testable hypothesis. 2.1. [Theme 1] Write about this theme. Lead into the gap statement: this leads to the first gap in the knowledge base: Gap 1: [gap statement] To address this gap, the following sub-research question has been developed: RQ1: [research question] To answer RQ1, the following hypothesis will be tested: H1: [hypothesis statement] 2.2. [Theme 2] Write about this theme. Lead into the gap statement: this leads to the first gap in the knowledge base: Gap 2: [gap statement] To address this gap, the following sub-research question has been developed: RQ2: [research question] To answer RQ2, the following hypothesis will be tested: H2: [hypothesis statement] 2.3. [Theme 3] Write about this theme. Lead into the gap statement: this leads to the first gap in the knowledge base: Gap 3: [gap statement] To address this gap, the following sub-research question has been developed: RQ3: [research question]  To answer RQ3, the following hypothesis will be tested: H3: [hypothesis statement] 3. Methodology This should be an updated version of your methodology, if required. 3.1. Research design State the research design you are adopting – exploratory or conclusive research? If conclusive, is it descriptive or causal research? 3.2. Research approach & method – based on your research design, is this a qualitative, quantitative, or mixed methods approach? Then, what method(s) that you are going to use? 3.3. Sampling This is based on your identification of the key target market segment (based on the four segmentation bases; demographic, geographic, psychographic, behavioural). This should include: · Sample criteria - what characteristics must your study participants fulfil? · Sampling frame. - how will you identify participants? · Sampling technique - probability or non-probability sampling? And what specific sampling technique within that category? 3.4. 
Procedure Explain the sampling process – how will you invite participants to do your research? 3.5. Measures Update this section to suit the class survey. Identify the scale measures for the conceptual constructs in the class survey. Show the original and adapted scale items, identifying the original source. Also acknowledge demographic questions, and any other questions you are including in the class survey. Table 1: Original and adapted scale items for [construct name] [Construct name] (reference) Original item Adapted item             Table 2: Original and adapted scale items for [construct name] [Construct name] (reference) Original item Adapted item             · Notes: o Add as many tables for the number of scale measure items you are using. o This section describes the primary research you are going to do, which is your summative research. 4. Results · Conduct the relevant statistical analysis to test each of your hypotheses. · Present the results here. 4.1. Descriptive statistics Describe your sample here. 4.2. Hypothesis 1 testing Present the results of your hypothesis testing here. State the analysis technique that you used, justifying this against your corresponding research question. Refer to the appropriate outputs. State whether the hypothesis was supported or unsupported. 4.3. Hypothesis 2 testing Present the results of your hypothesis testing here. State the analysis technique that you used, justifying this against your corresponding research question. Refer to the appropriate outputs. State whether the hypothesis was supported or unsupported. 4.4. Hypothesis 3 testing Present the results of your hypothesis testing here. State the analysis technique that you used, justifying this against your corresponding research question. Refer to the appropriate outputs. State whether the hypothesis was supported or unsupported. 5. Implications and Conclusion Summarise the key findings and their significance. Interpret and explain their implications. Relate the findings to the existing literature and theorical perspectives. Use tables, graphs, or visualizations to illustrate your results. Reflect on how the research objectives and aims have been addressed. Provide actionable recommendation[s] derived from the research findings. Highlight the contributions of the study and its implications for practice [and theory]. Discuss the limitations of the study and any potential biases.  
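Section 4 asks for a statistical test per hypothesis without fixing the technique. Purely as an illustration, here is a short Python sketch of one such test on hypothetical construct scores; the construct names, scales and simulated data are placeholders, and the real analysis must use the class survey data with a technique matched to each research question.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
brand_trust = rng.normal(5.0, 1.0, 120)                  # hypothetical 7-point scale scores
purchase_intent = 0.5 * brand_trust + rng.normal(2.0, 0.8, 120)

# Hypothetical H1: brand trust is positively related to purchase intention
r, p = stats.pearsonr(brand_trust, purchase_intent)
verdict = "supported" if p < 0.05 else "not supported"
print(f"r = {r:.2f}, p = {p:.3f} -> H1 {verdict}")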

$25.00 View

[SOLVED] Assignment 3 HTML

Assignment 3: HTML (5%)

Based on what we have learned about HTML, create:
1. 5 pages of your CV using HTML:
   a. HOME,
   b. EDUCATION,
   c. SKILL,
   d. ACHIEVEMENT &
   e. WORK EXPERIENCE
2. Create a link to all the pages
3. Create a link to UKM and your BLOG
4. Your pages must have
   a. original text,
   b. a picture and
   c. a video.

Marks will be based on fulfilling all the tasks creatively and informatively.

$25.00 View

[SOLVED] STATS 779 Professional Skills for Statisticians 2018

Department of Statistics
STATS 779: Professional Skills for Statisticians
Test: May 29, 2018, 2:00 pm–6:00 pm

INSTRUCTIONS
* Total marks = 90.
* Attempt all questions.
* Note: Some questions are open-ended and it may not be clear how extensive your answer should be. Do not write long answers to these questions. You should be able to answer any question of this type in a few paragraphs at most, or within half a page.

1. The National Identity Card (NIC) number of individuals in Sri Lanka has ten characters. Positions 1–9 are numerical and position 10 is an alpha character. The following numbering system is used to define the first five characters:
• Positions 1–2: the year of birth. For example, 81 indicates that the birth year is 1981.
• Positions 3–5: the number of the day in the year on which the person's birth date falls. A male would be assigned a number 1–366 and a female a number 501–866. For example, a male born on 5 January is represented by 005; a female born on the same day is represented by 505.
Example: The first five characters of the NIC for a male born on 5 January 1981 would be 81005; a female born on that same date would be 81505.
Note: Column C shows the number of the day in the year on which the person's birth date falls, a number between 1–366.

Write down the Excel worksheet formula(s) to be entered in:
a. cell B2 that extracts the birth year of the individual from the given NIC number. For example, the output in cell B2 should be 1999.
b. cells D2 and E2 to obtain the birth month and day, respectively.
c. cell F2 to obtain the date of birth. The output in cell F2 should follow the dd/mm/yyyy format.
d. cell G2 to obtain the gender (i.e., FEMALE vs MALE).

General Tips: You will need to use the following Excel functions.
The LEFT and MID functions are used to extract one or more characters from a string, starting from the left-hand side or the middle of the string, respectively. The syntaxes of the functions are:
LEFT(text, [num_chars])
MID(text, start_num, num_chars)
text: Required. The text string that contains the characters you want to extract from.
start_num: Required. The position of the first character you want to extract.
num_chars: Optional for the LEFT function. Specifies the number of characters you want.
The VALUE function is used to convert a text string that represents a number to a number. The syntax of the function is VALUE(text) where:
text: Required. Text enclosed in quotation marks or a reference cell containing the text you want to convert.
The MONTH and DAY functions can be used to find the birth month and day of the individual. The syntaxes of the functions are MONTH(serial) and DAY(serial) where:
serial: Required. A number in the date-time code.
CONCATENATE is used to join several text strings into one text string. The syntax of the function is CONCATENATE(text1, [text2], ...) where:
text1: Required. text1, text2, ... are 1 to 255 text strings to be joined into a single text string and can be text strings, numbers, or single-cell references.
[10 marks]

2. Amanda learned in her second year about the non-technical interpretation of the 95% confidence interval of the mean: if we compute a 95% confidence interval of the mean for each sample taken from the population, then 95% of the intervals will capture the unknown population mean. Amanda wants to visualize this as in Figure 1. You have been asked to help her with writing appropriate R code. Partial code is shown in Figure 2.
Use the given variable names to write R commands:
a. In lines 20 and 23, to compute the upper and lower confidence limits, respectively, of each sample generated. Hint: The 95% confidence interval (assuming a Gaussian distribution) is given by x̄ ± t(α/2, n−1) × s/√n, where x̄ and s are the sample mean and standard deviation, respectively, of n observations, α is the significance level, and t(α/2, n−1) is the t-critical value from the t-distribution with n − 1 degrees of freedom.
b. In line 26, to plot a blue vertical line for the population mean.
c. In line 29, to annotate the line drawn in part 2b. Hint: The mtext function is useful.
d. In lines 32–40, to draw the confidence interval for each sample. Set col = "gray" if the confidence interval captures the unknown population mean and set col = "red" otherwise.
[11 marks]

Figure 1: Non-technical explanation of the 95% confidence interval.
Figure 2: Partial R code.

3. A general system of m linear equations with n unknowns can be written in matrix notation as Ax = b, where A is an m × n matrix of coefficients, x is an n × 1 vector of unknowns and b is an m × 1 vector of constants. If the matrix A is square (i.e., m = n) and has full rank (i.e., the determinant of A is non-zero), then the system has a unique solution given by x = A⁻¹b. An incomplete R function is given in Figure 3. Fill in the appropriate R commands in lines 4, 8, 12, and 16. Hint: You can use the det function to find the determinant of matrix A.
[5 marks]

Figure 3: Partial R code.
1   # Amat: matrix of coefficients
2   # Bmat: vector of constants
3   leqDir

4. Tom and Jerry have been tasked to count the number of times the word "as" appears in a given .txt file. Tom found that there are 31 matches, but is not willing to show his regex pattern. Jerry found 72 matches by setting pattern = "[aA]s(\s|$)" in the gregexpr function. The lecturer also said that Tom's answer is correct.
a. Write R code which uses a regular expression to find the correct number of occurrences of the word "as". Assume that the contents of the .txt file have been read into a character vector called lines.

> class(xtbl)
[1] "xtable" "data.frame"
> str(xtbl)
Classes 'xtable' and 'data.frame': 5 obs. of 1 variable:
 $ x: int 34 40 15 10 1
 - attr(*, "caption")= chr "xtable example"
 - attr(*, "label")= chr "tab:xtbl"
 - attr(*, "align")= chr "r" "r"
 - attr(*, "digits")= num 0 2
 - attr(*, "display")= chr "s" "d"

What will be the effect of the following snippets of text when the .Rnw file is processed using knitr and pdfLaTeX:
a   = xtbl @
b   = xtbl @
NOTE: You may wish to examine the help pages for the package xtable before answering this question.
[6 marks]

11. The results of this year's Giro d'Italia cycle tour race are in the file GiroResults.csv, which has the form shown in Figure 6.

Figure 6: Top of GiroResults.csv

Rider names are not more than 30 characters long, and team names are not more than 50 characters long. In the column headed Time, the first entry (for Chris Froome) gives the total time taken to ride the 21 stages of the tour, in hours, minutes and seconds. The other figures in that column are the additional times that the various riders took to complete the tour. So, for example, George Bennett of New Zealand took an additional 13 minutes and 17 seconds compared to Froome; that is, his total riding time was 89 hours, 15 minutes and 56 seconds.
a. Write MySQL code to create a table called giro for this data set. Do not create an automatically incremented variable as the primary key for the data. Instead specify the rider name as the primary key.
b. Write MySQL code to read the data from GiroResults.csv into the table giro.
c. Alter the table giro by adding a TIME variable called Difference.
d. Update the column by first setting Difference to be equal to the Time column, and then update the first element of the Difference column (the entry for Froome) to take the value '00:00:00'. If this has been done correctly, then the Difference column will contain all the time differences from Froome's time.
e. Write MySQL code to produce a table showing the average time difference by team, in minutes rounded to 2 decimal places, ordered from smallest to largest.
NOTE: To carry out calculations involving times, first convert times to seconds by applying the function TIME_TO_SEC.
[10 marks]
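Question 2 above asks for R code. Purely to illustrate the repeated-sampling interpretation it visualises, here is a short Python sketch; the Normal population, sample size and number of repetitions are arbitrary choices, not part of the test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(779)
mu, sigma, n, reps = 50.0, 10.0, 30, 100
covered = 0
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    half = stats.t.ppf(0.975, n - 1) * x.std(ddof=1) / np.sqrt(n)   # t-based half-width
    covered += (x.mean() - half <= mu <= x.mean() + half)
print(f"{covered} of {reps} intervals captured the true mean")      # roughly 95 expected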

$25.00 View

[SOLVED] PHYSICS STUDENT EXPERIMENT EXAMPLE

PHYSICS STUDENT EXPERIMENT EXAMPLE

Rationale

Electromagnets are temporary magnets created by passing an electric current through a solenoid. When a charge is moving, a magnetic field is formed around it. The magnetic field created by the moving charge in a solenoid is focused in a uniform direction, giving it the properties of a magnet. An original experiment investigated the force exerted by an electric charge moving through a magnetic field on the magnets creating the field. The magnetic field of the moving charge (current) interacts with the magnetic field of the magnet, creating a force perpendicular to the direction of the current and magnetic field. It was found that the force exerted by the charge is directly proportional to the rate of charge moving through (current). Research into the phenomenon found the force could be described by F = BIL, where I is current, L is length, and B is the magnitude of the magnetic field (Cooper, 2012).

The relationship between current and the force exerted by the current on magnets led to the question of the relationship between current and the force exerted by the current on an unmagnetised ferrous material. Similarly, the magnetic field created by a current will interact with an unmagnetised ferrous material to create a force. In a piece of non-magnetised ferromagnetic material, the domains are randomly aligned; however, when an external magnetic field is applied, the magnetic domains align and the piece of metal will temporarily act as a magnet, creating a magnetic field (Nave, 2019).

Figure 1: Diagram showing the effect of a magnetic field on unmagnetised ferromagnetic material. Adapted from Nave (2019).

This magnetic field created by the ferrous material interacts with the original magnetic field of the moving charge, creating a force. The interaction of this magnetic field with the magnetic field created by the current is weaker as the magnetisation of the ferrous material is lower (Clarke, 2010). Due to this consideration, a solenoid is used to focus the magnetic field of the current, creating a discernible impact on the force exerted. Research has found that the force exerted by an electromagnet could be found by the formula

F = (μ0 n² I² A) / (2g²)

where F represents force, I is current, n is the number of loops in the solenoid, μ0 is the magnetic permeability of a vacuum, A is the area of the solenoid and g is the distance between the solenoid and the metal. As such, this experiment modifies the original experiment by redirecting it towards investigating the relationship between the current of an electromagnet and the force it exerts. If n, A and g are kept constant, then the data should show a theoretical relationship of F ∝ I².

Research question

What is the relationship between the current in a solenoid and the force it exerts on unmagnetised ferromagnetic material (iron) when solenoid density and distance to the solenoid are constant?

Original Experiment

The original experiment investigated the force exerted by a charge moving through a wire in the magnetic field of a magnet at different currents (1 A to 0.1 A in 0.3 A increments). Two trials were conducted at each current. No theoretical values were calculated as the magnitude of the magnetic field could not be found. As current increased, force exerted increased linearly.

Modifications

To collect sufficient, reliable and valid data, the methodology was:
• Redirected by
1. Measuring the force exerted by a moving charge in a solenoid on a nonmagnetic ferrous metal.
• Refined by
1. Maintaining a gap of 18 cm between the top of the solenoid and the metal.
2. Measuring the force using a scale (±0.01g or ± 0.098N). A scale allows a precise measurement, improving the reliability of the data. 3. Using an ammeter (±0.01A) to measure current and rheostat to alter current, allowing trials to be conducted at precise intervals. This configuration gives greater control and precision of the current, improving reliability. 4. Using a ferrous material of iron which has a magnetic permeability of approximately 2000. The high permeability improves the validity of the experiment as it is closer to the theoretical assumption of permeability. 5. Measuring at five different currents (0.5, 1, 1.5, 2, 2.5A) to ensure that trends, patterns and relationships could be more easily identified, improving validity of findings. 6. Conducting five trials to ensure the reliability of the data. • Extended by 1. Accounting for the distance and the properties of the solenoid in theoretical calculations. This improves validity as data could be compared. 2. Using a 700 loop, 11.3 cm2 solenoid to create a magnetic field that can exert a detectable force on the iron. A high number of loops and a greater area creates greater magnetic fields and thus a greater force which could be more easily observed and reduces uncertainty from instruments (same absolute but lower percentage uncertainty). This will, therefore, improve the reliability of the experiment. Management of risk To ensure the safety of the participant and address any ethical issues involved in the experiment, the following considerations were identified and addressed: • The electricity used in the experiment may pose a hazard. This hazard could be mitigated by preventing contact with power points and open wires with electricity on. No water is to be brought within the laboratory. • The magnetic field created by the solenoid may pose a risk to electronics. Maintain at least 1 meter between all electronics and the solenoids. Data is to be recorded on paper before transferred to digital media. • The solenoid may overheat and pose dangers to participants. To avoid overheating, the solenoid is not to be on for prolonged periods and be allowed to rest for a minute between every trial.
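For a sense of scale, the theoretical force implied by the formula as reconstructed in the rationale can be tabulated at the solenoid parameters stated in the modifications (700 loops, 11.3 cm² area, 18 cm gap). This Python sketch is illustrative only; measured values will differ, since the iron sample is not a closed magnetic circuit.

import math

# F = mu0 * n^2 * I^2 * A / (2 * g^2), as reconstructed above (an assumption)
mu0 = 4 * math.pi * 1e-7        # magnetic permeability of a vacuum, T·m/A
n, A, g = 700, 11.3e-4, 0.18    # turns, area in m^2, gap in m
for I in (0.5, 1.0, 1.5, 2.0, 2.5):
    F = mu0 * n**2 * I**2 * A / (2 * g**2)
    print(f"I = {I:.1f} A  ->  F = {F * 1000:.2f} mN")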

$25.00 View

[SOLVED] Materials Engineering MCEN90014 Haskell

Materials Engineering (MCEN90014)
TC Practice Workshops

1. The Gibbs free energy of a thermodynamic system can be used to calculate several other thermodynamic parameters. Given the Gibbs free energy function (G), derive equations for the calculation of the volume, entropy, enthalpy and internal energy of a thermodynamic system.

2. Calculate the phase diagram of the Iron-Carbon binary alloy, i.e. temperature vs composition of carbon, using Thermo-Calc. Consider a range of carbon from 0 to 1 wt% and a corresponding temperature range between 500 and 1800 K.

3. For the phase diagram calculated in Q2, add 5 wt% Cr and re-calculate and plot the new phase diagram (temperature vs concentration of carbon). Compare and contrast the results for Q2 and Q3.

4. Calculate the phase diagram for the Al-Si alloy. Consider a temperature range of 200 to 700 °C and vary the composition of silicon from 0 to 10 wt%.
Ø Identify the appropriate thermodynamic database
Ø Indicate/label the different phase regions
Ø What is the maximum composition of Si for the austenite (FCC) Al-Si alloy?

5. Calculate the Cu-Sn binary phase diagram. Consider a temperature range of 500 to 1200 °C.
Ø Identify the appropriate thermodynamic database
Ø Indicate/label the different phase regions
Ø Estimate the melting point of the Cu-38 wt% Sn alloy

6. Calculate the Cu-Ni binary phase diagram. Consider a temperature range of 1000 to 1500 °C.
Ø Identify the appropriate thermodynamic database(s)
Ø Identify the liquidus and solidus lines
Ø Indicate/label the different phase regions
Ø Calculate the phase fraction at 1232 °C for an alloy of Cu-38 wt% Ni and compare the result with the one obtained from the lever rule. Use data from TC for the lever rule.

7. Plot the phase fraction of an Fe-0.25 wt% C alloy as a function of temperature. Consider a temperature range of 800 to 1400 °C.

8. Using the Fe-C system, generate the equilibrium phase diagram for the composition range 0-6 wt% C and temperature range 300-1600 °C (new).
Ø Identify the eutectoid and eutectic points.
Ø At 0.8 wt% C and 727 °C, what phases are present?
Ø What is the phase fraction of graphite at 4.4 wt% C and 1147 °C?

9. Determine the phase compositions for an Iron-Carbon binary alloy (C = 3 wt%) at 1200 K. Also determine the enthalpy and total Gibbs energy of the alloy at 1200 K.

10. Set the composition to Fe - 18Cr - 0.8C (wt%) and calculate the equilibrium phases at 1200 °C (new).
Ø Which phases are stable at this condition?
Ø How does the phase fraction of carbide change when the temperature is decreased to 900 °C?
Ø At what temperature does the sigma phase first appear?

11. Perform a single point equilibrium calculation using Thermo-Calc for an Fe-C binary alloy at C = 2.5 wt% and T = 1500 K.
Ø Determine the stable phases at 1500 K
Ø Determine the fraction of stable phases at 1500 K

12. Fix the composition: Fe-30Ni-0.5C (wt%). Calculate equilibrium at 600 °C and 1000 °C (new).
Ø What are the stable phases at both temperatures?
Ø How does the amount of austenite vary?
Ø Which phase(s) dissolve/disappear when going from 600 °C to 1000 °C?

13. Calculate the phase diagram of the Cr-Ni binary alloy (temperature vs composition of nickel) using Thermo-Calc.
Ø Consider a composition of Ni from 0 to 100% by wt and a temperature range between 1000 and 2000 K.
Ø Designate the respective phases in each of the areas in the phase diagram.
Ø Compare the phase diagram that you find from Thermo-Calc with that from the literature (you may search the internet using the key words 'Cr-Ni binary alloy phase diagram').
Ø Calculate the fraction of phases as a function of temperature for a binary alloy containing 50 wt% Cr.
14. Calculate and provide the phase diagram of the Fe-Cr binary alloy between T = 1000 and 2000 K. Consider a Cr composition between 0 and 30 wt%.
Ø Indicate the different phases on the phase diagram
15. Use single-point equilibrium calculations for an alloy (with the composition of elements provided in Table 1) to determine the phase fraction of the face-centered cubic structure (FCC_A1) at 1600 K.
Table 1: composition of alloy
Element   Fe     Ni    C
wt%       Bal.   12    0.02
16. Use the Property model in Thermo-Calc to calculate and plot the increase in yield strength of an Fe-C alloy, as a function of carbon concentration, due to solution hardening. Consider the carbon concentration increasing from 0 to 1 wt%.
17. The WC-Co system is mainly used for cutting other materials due to its superior hardness and fracture resistance. It is produced by mixing powder particles of WC and 10 wt% Co and sintering (heat treating) them at high temperatures. The process (sintering) requires microstructural phases consisting of FCC, MC_SHP and LIQUID phases co-existing together at equilibrium. Determine the window of temperature and carbon concentration in which you can produce WC-Co cutting tools. [Hint: for this problem, you may have to use the licensed version of TC available on GPU desktops that can be accessed through myUniApps.]
18. Al-Cu alloys are commonly used in the aerospace industry due to their high strength and are precipitation-hardenable, but they suffer from casting issues. Si is often added to improve the castability. Calculate the phase diagram for the Al-Cu alloy. Consider a temperature range of 200 to 1100 °C and vary the copper composition from 0 to 60 wt%.
Ø Identify the appropriate thermodynamic database
Ø Indicate/label the different phase regions
Ø What is the maximum composition of Cu for an austenite (FCC) Al-Cu alloy?
o At what temperature does this composition exist?
Ø Using a grid-based analysis, determine the maximum composition of Si that can be added to the Al-Cu alloy to retain the austenite region for precipitation hardening.
Ø Add a “Property Model Calculator” into the tree, connecting to the previous “System Definer” and “Plot Renderer”, and determine the total yield strength based on precipitation hardening and overlay it on the grid-based analysis.
19. Compare and contrast the stable as well as metastable phase diagrams of an Fe-C system. Consider the carbon composition varying from 0 to 0.3 wt% and the temperature from 500 to 2000 °C (new).
20. Composites of silica and alumina are used in various industries, including high-temperature applications like aerospace and as a component in coatings for enhanced thermal shock resistance. Calculate the binary phase diagram of silica-alumina composites (SiO2 – Al2O3). Consider varying the mole fraction of alumina (Al2O3) from 0 to 1 with a temperature range of 1200 to 2500 K (new).
Ø Discuss and identify the relevant thermodynamic database.
Ø Discuss how to incorporate the oxides as opposed to the metallic elements.
Ø Plot the phase diagram and label the different ceramic phases.
Ø Compare the calculated phase diagram with an experimentally determined phase diagram. Find an experimentally determined phase diagram of SiO2 – Al2O3 in ‘Week 6 - Intro to phase diagrams - L1’. Discuss the differences.
Ø Hint: follow the TC tutorial >> https://www.youtube.com/watch?v=donmXmxsjvo&list=PLfv6McToaTGSzHrh3TfNF2EhoUUUqeEWd&index=13
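For question 6, the lever rule mentioned above has the standard form below (written with generic symbols; the tie-line compositions are read off the Thermo-Calc diagram at 1232 °C and are not reproduced here):

w_FCC = (C_0 − C_L) / (C_FCC − C_L)
w_Liquid = (C_FCC − C_0) / (C_FCC − C_L)

where C_0 = 38 wt% Ni is the overall alloy composition, C_L is the liquid-phase (liquidus) composition and C_FCC is the FCC-phase (solidus) composition at that temperature, all in wt% Ni. These hand-calculated fractions can then be compared with the phase fractions reported directly by Thermo-Calc.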

$25.00 View

[SOLVED] Role play/Simulation - Media Law Training Material Assessment 3

Role play/Simulation - Media Law Training Material (Assessment #3)
1. The word length for the assignment is 1300 words.
2. While preparing your training material, please pay attention to formatting and style.
3. Training materials should be engaging for the reader.
4. If you choose to produce a video or multiple videos, each minute of video is equivalent to 120 words.
5. Training material should assume that the reader has no previous training on the legal issues the training is focusing on. Please use language that is accessible to adults who do not have prior training in media law. Minimize the use of technical or legal jargon.
6. Long blocks of text will confuse and bore readers. Rather than relying on lengthy text alone, use techniques such as bullets and lists where appropriate. To reduce the burden on the reader, use section headers to divide long texts into short, digestible units.
7. Use visual aids to increase the comprehensibility of the information. For example, when you are discussing procedures, you can use flow charts instead of summarizing the procedure with long sentences. Alternatively, you can use text boxes or thought bubbles to draw attention to a key point your readers should remember.
8. For this assessment, you can use any citation style you like. If you want a style that will minimize interruptions in the text, we recommend using a style that relies on footnotes or endnotes (e.g., Chicago Manual of Style 17th edition – full note, IEEE, Springer – Humanities).
Assessment Criteria and Submission File Instructions
· The criteria sheet is available on the course Blackboard page.
· Your assignment should be uploaded to the Blackboard Assignment. The file name should be YOUR FAMILY NAME, GIVEN NAME and the word LAW, all in capitals. For example: BARUHLEMILAW.
· Your project should have a cover page containing your full name, your student number, and a title. The references should be formatted consistently in the style that you choose for this assessment.
· If you are producing a webpage, prepare your submission by printing all pages of the website to a single PDF and adding a cover page that has your full name, your student number, a title and, optionally, the link and a QR code for the live webpage. Please make sure that your website is not accessible to the general public, since the information it contains is not verified.
· If you are producing a video or multiple videos, prepare your submission for the Blackboard Assignment by producing a document containing the text (e.g., the script) of the information provided in the video. Your video will need to be embedded as an ECHOVideo project.
You are working as a training coordinator in the Human Resources department at a media company. You can choose the type of media company (e.g., an online news source, social media platform, newspaper, an AI video company). The Chief Learning Officer has given you the task of developing training materials regarding relevant laws. These training materials should serve as a resource for training new staff members in understanding and complying with these laws and maintaining ethical standards. As a pilot for this training program, you need to develop the training materials for one of the areas listed below. Please choose a domain that is relevant to the company you are working for (e.g., whistle-blower protection for a news organisation).
This may include the following; however, you may choose another relevant domain with the approval of the course coordinator:
· Data privacy
· Freedom of Information (FOI)
· Whistle-blower Protection
· Legal protection of journalists and their sources (including shield laws and contempt)
· Defamation
· Intellectual property
You can choose to develop the training material for a jurisdiction you are interested in (e.g., GDPR in the EU; defamation law in China; FOI in Australia).
Steps:
1. Pick the type of media company, the law domain, and the jurisdiction you will work on.
2. Research: Research the relevant laws in the jurisdiction you have chosen. You can use the assigned readings as a reference; however, you will also need to familiarize yourself with recent case studies, examples, or controversies related to these laws in the media industry.
3. Training Materials Development: Determine the format of your training materials. You can produce a booklet, website, infographic, or video. You can also mix the formats. For example, you can create a website with short videos. If you want to design a booklet or an infographic, you can use publishing tools like Adobe InDesign, Canva, Word, Pages, Keynote, or PowerPoint. If you want to produce an interactive web resource, you can use web content management tools such as WordPress, Wix, or Weebly. If you want to produce videos, you can use video editing or animation tools like Adobe Premiere, Final Cut Pro, or Adobe Animate. Keep in mind that this is not an essay; the training material you produce should be visually appealing.
4. Prepare Your Training: Produce the content of your training materials with the following sections:
1. Introduction: Provide an overview of the importance of the law area (e.g., privacy, free speech) for the media industry. Explain the potential consequences of non-compliance for organizations and their stakeholders. Please remember that the consequences you are describing should be specific to the law area you selected. For example, if you are focusing on copyright, the consequences you are describing should be about failing to comply with copyright laws in the jurisdiction you are focusing on.
2. Relevant Laws: Summarize the relevant laws, including their key provisions and what they require from individuals and organizations. Illustrate the application of these laws with examples or scenarios specific to the media industry. Summarize the relevant procedures that individuals and organizations should follow to comply with the law. For example, if you are describing the General Data Protection Regulation in the EU, a procedure you may need to describe would be what a company should do if a data subject requests to review the data it holds about them.
3. Best Practice Guidelines: Develop a set of best practices and ethical guidelines that media professionals should follow to ensure compliance with the relevant laws you are describing. Again, keep in mind that the best practice and ethical guidelines should be specific to the legal area you are focusing on. For example, if you are focusing on defamation laws in Australia, you need to discuss what measures a journalist can take to avoid putting themselves at risk of defaming someone.

$25.00 View

[SOLVED] COMP9315 25T1 Assignment 2 Multi-attribute Linear Hashed Files

COMP9315 25T1: Assignment 2 Multi-attribute Linear Hashed Files Aims This assignment aims to give you an understanding of • how database files are structured and accessed. • how multi-attribute hashing is implemented. • how linear hashing is implemented. The goal is to build a simple implementation of a linear-hashed file structure that uses multi-attribute hashing. Summary Deadline: Friday 20:59:59 25th April (Sydney Time). Late Penalty: 5% of the max assessment mark per-day reduction, for up to 5 days. Marks: This assignment contributes 20 marks toward your total mark for this course. Submission: Moodle > Assignment > Assignment 2> upload ass2_ zID.zip. The ass2_ zID.zip file must contain your Makefile plus all of your *.c and *.h files. Details on how to build the ass2_ zID.zip file are given below. Note: Make sure that you read this assignment specification carefully and completely before starting work on the assignment. Questions which indicate that you haven't done this will simply get the response "Please read the spec". This assignment does not require you to do anything with PostgreSQL. Introduction Linear hashed files and multi-attribute hashing are two techniques that can be used together to produce hashed files that grow as needed and which allow all attributes to contribute to the hash value of each tuple. See the course notes and lecture slides for further details on linear hashed files and multi-attribute hashing. In our context, multi-attribute linear-hashed (MALH) files are file structures that represent one relational table, and can be manipulated by three commands: ❖ Create command Creates MALH files by accepting four command line arguments: • the name of the relation • the number of attributes • the initial number of data pages (rounded up to nearest 2n) • the multi-attribute hashing choice vector This gives you storage for one relation/table, and is analogous to making an SQL data definition like: create table R ( a1 text, a2 text, ... an text ); Note that, internally, attributes are indexed 0..n-1 rather than 1..n. The following example of using create makes a table called abc with 4 attributes and 8 initial data pages: $ ./create abc 4 6 "0,0:0,1:1,0:1,1:2,0:3,0" Note that 6 will be rounded up to the nearest 2n (i.e. to 8). If we'd written 8, we would have gotten the same result. The choice vector (fourth argument above) indicates that • bit 0 from attribute 0 produces bit 0 of the MA hash value • bit 1 from attribute 0 produces bit 1 of the MA hash value • bit 0 from attribute 1 produces bit 2 of the MA hash value • bit 1 from attribute 1 produces bit 3 of the MA hash value • bit 0 from attribute 2 produces bit 4 of the MA hash value • bit 0 from attribute 3 produces bit 5 of the MA hash value The following diagram illustrates this scenario: The above choice vector only specifies 6 bits of the combined hash, but combined hashes contain 32 bits. The remaining 26 entries in the choice vector are automatically generated by cycling through the attributes and taking bits from the high-order hash bits from each of those attributes. ❖ Insert command Reads tuples, one per line, from standard input and inserts them into the relation specified on the command line. Tuples all take the form. v1,v2,..., vn. The values can be any sequence of characters except ',', '%' and '?'. The bucket where the tuple is placed is determined by the appropriate number of bits of the combined hash value. If the relation has 2d data pages, then d bits are used. 
If the specified data page is full, then the tuple is inserted into an overflow page of that data page. ❖ Query command The query command allows you to run selection and projection queries over a given relation. It supports wildcard and pattern matching, finds all tuples in either the data pages or overflow pages that match the query, as well as flexible attribute projection without distinct. The general usage is: $ ./query [-v] 'a1,a2,...' from RelName where 'v1,v2,...' • 'a1,a2,...' (or '*'): a sequence of 1-based attribute indexes used for projection, can be '*' to indicate all attributes. The minimal ‘a’ value is ‘0’. • 'v1,v2,...': a sequence of attribute values used for selection. Note that: The projection and selection strings are wrapped in quotes to prevent the shell from misinterpreting characters as wildcards or splitting the values on commas, these quotes are handled automatically by the shell. Your code does not need to perform. any extra parsing or stripping of quotes. Each value vi in the selection tuple can be: • Literal value: A specific value that must match exactly in the corresponding attribute position. (e.g., 'abc' matches 'abc', '10' matches '10') • Single question mark '?': Matches any literal value in the corresponding attribute position. (e.g., '?' matches 'abc', '?' matches '10') • Pattern string containing '%': A string that includes one or more '%', where each '%' matches zero or more characters. Enables flexible pattern-based matching. (e.g., 'ab%' matches any literal value starting with 'ab', such as 'abc', 'ab123') Some example query commands, and their interpretation are given below. $ ./query '*' from R where '?,?,?' # matches any tuple in the relation R $ ./query '3,1' from R where '10,?,?' # projects attributes 1 and 3 (in order) from tuples where the value of attribute 0 is 10 $ ./query '*' from R where '?,%ab%,?' # matches any tuple where attribute 1 contains 'ab' A MALH relation R is represented by three physical files: • R.info containing global information such as o a count of the number of attributes o the depth of main data file (d for linear hashing) o the page index of the split pointer (sp for linear hashing) o a count of the number of main data pages o the total number of tuples (in both data and overflow pages) o the choice vector (cv for multi-attribute hashing) • R.data containing data pages, where each data page contains o offset of start of free space o overflow page index (or NO_PAGE if none) o a count of the number of tuples in that page o the tuples (as comma-separated C strings) • R.ovflow containing overflow pages, which have the same structure as data pages When a MALH relation is first created, it is set to contain a 2n pages, with depth d=n and split pointer sp=0. The overflow file is initially empty. The following diagram shows an MALH file R with initial state with n=2. After 294 tuples have been inserted, the file might have the following state (depending on field value distributions, tuple sizes, etc): Pages in MALH files have the following structure: a header with three unsigned integers, strings for all of the tuple data, free space containing no tuple data. The following diagram gives an example of this: We have developed some infrastructure for you to use in implementing multi-attribute linear-hashed (MALH) files. You may use this infrastructure or replace parts of it (or all of it) with your own, but your MALH files implementation must conform. to the conventions used in our code. 
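To make the '%' and '?' matching behaviour described above concrete, here is a minimal sketch of how a single selection value might be compared against one attribute of a tuple. This is illustrative only: the function names and the recursive approach are assumptions, not the interface of the supplied select.c.

#include <stdbool.h>
#include <string.h>

/* Match a selection pattern against one attribute value.
 * Each '%' in the pattern matches zero or more characters. */
bool matchPattern(const char *pat, const char *val)
{
    if (*pat == '\0') return *val == '\0';
    if (*pat == '%') {
        for (const char *v = val; ; v++) {      /* try every split point */
            if (matchPattern(pat + 1, v)) return true;
            if (*v == '\0') break;
        }
        return false;
    }
    if (*val == '\0') return false;
    return *pat == *val && matchPattern(pat + 1, val + 1);
}

/* A selection value is either '?', a pattern containing '%', or a literal. */
bool matchValue(const char *sel, const char *attr)
{
    if (strcmp(sel, "?") == 0) return true;           /* matches anything */
    if (strchr(sel, '%') != NULL) return matchPattern(sel, attr);
    return strcmp(sel, attr) == 0;                     /* exact literal    */
}

A full tuple matches the query only if every attribute position matches its corresponding selection value in this way.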
In particular, you should PRESERVE ALL EXISTING INTERFACES to the supplied modules (e.g. Reln, Page, Selection, Projection, Tuple). DO NOT MODIFY OR DELETE any existing interfaces-you may only add new ones in exceptional cases. Ensure that your submitted ADTs work with the supplied code in the create, insert, and query commands. Setting Up You should make a working directory for this assignment and put the supplied code there. Read the supplied code to make sure that you understand all of the data types and operations used in the system. $ mkdir Your/ass2/Directory $ cd Your/ass2/Directory $ unzip /web/cs9315/25T1/assignments/ass2/ass2.zip You should see the following files in the directory: • create.c   ... a main program that creates a new MALH relation • dump.c   ... a main program that lists all tuples in an MALH relation • insert.c   ... a main program that reads tuples and insert them • query.c   ... a main program that finds tuples matching a PMR query and projects them onto specified attributes • stats.c   ... a main program that prints info about an MAH relation • gendata.c   ... a main program to generate random tuples • bits.h, bits.c   ... an ADT for bit-strings • chvec.h, chvec.c   ... an ADT for choice vectors • defs.h   … defines global constants and types • hash.h, hash.c   ... the PostgreSQL hash function • page.h, page.c   ... an ADT for data/overflow pages • select.h, select.c   ... an ADT for selection scanners (incomplete) • project.h, project.c   ... an ADT for projection operators (incomplete) • reln.h, reln.c   ... an ADT for relations (partly complete) • tuple.h, tuple.c   ... an ADT for tuples (partly complete) • util.h, util.c   ... utility functions This gives you a partial implementation of MALH files; you need to complete the code so that it provides the functionality described below. The supplied code actually produces executables that work somewhat, but are missing a working query scanner implementation (from select.c), a proper MA hash function (from tuple.c), and splitting and data file increase (from reln.c). Effectively, they give a static hash file structure with overflows. To build the executables from the supplied code, do the following: $ make gcc -Wall -Werror -g -std=c99 -c -o create.o create.c … gcc gendata.o select.o project.o page.o reln.o tuple.o util.o chvec.o hash.o bits.o /usr/lib/x86_64-linux-gnu/libm.so -o gendata This should not produce any errors on the CSE servers. Once you have the executables, you could build a sample database as follows: $ ./create R 3 4 "0,0:0,1:0,2:1,0:1,1:2,0" cv[0] is (0,0) cv[1] is (0,1) cv[2] is (0,2) ... cv[31] is (1,23) This command creates a new table called R with 3 attributes. It will be stored in files called R.info, R.data and R.ovflow. The data file initially has 4 pages (so depth d=2). The overflow file is initially empty. The lower-order 6 bits of the choice vector are given on the command line; the remaining bits are auto-generated. Given the file size (4 pages), only two of the hash bits are actually needed. 
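The way the choice vector combines per-attribute hash bits, as described above, can be summarised in a few lines of code. The sketch below uses a hypothetical ChoiceEntry struct and combineHash() helper purely for illustration; in the assignment you would work with the supplied chvec, bits and hash ADTs and the tupleHash() function instead.

#include <stdint.h>

typedef struct { int attr; int bit; } ChoiceEntry;   /* one (attribute, bit) pair */

/* Build a 32-bit multi-attribute hash: bit i of the result is taken
 * from bit cv[i].bit of the hash of attribute cv[i].attr. */
uint32_t combineHash(const uint32_t attrHash[], const ChoiceEntry cv[], int nbits)
{
    uint32_t hash = 0;
    for (int i = 0; i < nbits; i++) {
        uint32_t b = (attrHash[cv[i].attr] >> cv[i].bit) & 1u;
        hash |= b << i;
    }
    return hash;
}

For the example above, cv[0] = (0,0) means that bit 0 of attribute 0's hash becomes bit 0 of the combined hash, cv[1] = (0,1) supplies bit 1, and so on for all 32 entries.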
You could check the status of the files for table R via the stats command: $ ./stats R Global Info: #attrs:3 #pages:4 #tuples:0 d:2 sp:0 Choice vector 0,0:0,1:0,2:1,0:1,1:2,0:0,31:1,31:2,31:0,30:1,30:2,30:0,29:1,29:2,29:0,28:1,28:2,28:0,27:1,27:2,27:0,26:1,26:2,26:0,25:1,25:2,25:0,24:1,24:2,24:0,23:1,23 Bucket Info: #Info on pages in bucket (pageID,#tuples,freebytes,ovflow) [ 0] (d0,0,1012,-1) [ 1] (d1,0,1012,-1) [ 2] (d2,0,1012,-1) [ 3] (d3,0,1012,-1) Since the file is size 2d, the split pointer sp = 0. The rest of the global information should be self explanatory, as should the choice vector. The bucket info shows a quadruple for each page; since there are no overflow pages (yet), only data pages appear. The pageID value in each quad consists of the character 'd' (indicating a data file), plus the page index. Each page is 1024 bytes long, which includes a small header, plus 1012 bytes of free space for tuples. There are currently zero tuples in any of the pages. The overflow page IDs are all -1 (for NO_PAGE) to indicate that no data page has an overflow page. You can insert data into the table using the insert command. This command reads tuple from its standard input and inserts them into the named table. For example, the command below inserts a single tuple into the R MALH files: $ echo "100,abc,xyz" | ./insert R hash(100) = 00011100 00101000 10100111 11101100 The insert command prints the hash value for the tuple (based on just the first attribute), and then inserts it into the file. Since the table is currently empty, this tuple will be inserted into page 0. Why page 0? You should be able to answer this by knowing the depth and the hash value. If you then check with the stats command you will see that there is a single tuple in the files, and it's in page 0. Typing many individual tuples is tedious, so we have provided a command, gendata, which can generate tuples appropriate for a given table. It takes four comand line arguments, only two of which are compulsory: the number of tuples to generate, and the number of attributes in each tuple. a sample usage: $ ./gendata 5 3 1,triangle,pith 2,comet,signature 3,aeroplane,mum 4,dog,win 5,finger,desk This generates five tuples, each with three attributes. The first attribute is a unique ID value; the other attributes are random words. You can modify the starting ID value and the seed for the random number generator from the command line. You could use gendata to generate large numbers of tuples, and insert them as follows: $ ./gendata 250 3 101 | ./insert R hash(101) = 11110100 01100100 11010000 00110000 hash(102) = 00100101 10100110 10100001 11100100 ... hash(349) = 01101101 01100101 00011111 10100111 hash(350) = 10011011 01100101 01111001 11001000 This will insert 250 tuples into the table, with ID values starting at 101. You can check the final state of the database using the stats command. It should look something like: $ ./stats R Global Info: #attrs:3 #pages:4 #tuples:251 d:2 sp:0 Choice vector 0,0:0,1:0,2:1,0:1,1:2,0:0,31:1,31:2,31:0,30:1,30:2,30:0,29:1,29:2,29:0,28:1,28:2,28:0,27:1,27:2,27:0,26:1,26:2,26:0,25:1,25:2,25:0,24:1,24:2,24:0,23:1,23 Bucket Info: #Info on pages in bucket (pageID,#tuples,freebytes,ovflow) [ 0] (d0,56,4,0) -> (ov0,15,737,-1) [ 1] (d1,57,2,3) -> (ov3,2,981,-1) [ 2] (d2,59,1,2) -> (ov2,2,976,-1) [ 3] (d3,54,7,1) -> (ov1,6,905,-1) This shows that each data page has one overflow page, and that each data page has roughly the same number of tuples. 
The bucket starting at data page 0 has a few more tuples than the other buckets, because it has more tuples (15) in the overflow page. Note that page IDs in the overflow pages are distinguished by starting with "ov". Note also that e.g. the data page at position 3 in the data file has an overflow page at position 1 in the overflow file; this is because page 3 filled up before pages 1 and 2. One other thing to notice here is that the file has not expanded. It still has the 4 original data pages. Even if you added thousands of tuples, it would still have only 4 data pages. This is because linear hashing is not yet implemented. Implementing it is one of your tasks. You could then use the query command to search for tuples using a command like:
$ ./query '2,1' from R where '101,?,?'
This aims to find any tuple with 101 as the ID value (the first attribute), and projects the result on attributes 2 and 1 (in that order); there will be exactly one such tuple, since ID values are unique. This returns no solutions because query scanning is not yet implemented. Implementing it is another of your tasks.
Task 1: Multi-attribute Hashing
The current hash function does not use the choice vector to produce a combined hash value. It simply uses the hash value of the first attribute (the ID value) to generate a hash for the tuple. Your first task is to modify the tupleHash() function to use the relevant bits from each attribute hash value to form a composite hash. The choice vector determines the "relevant" bits. You can find more details on how a multi-attribute hash value is produced in the lecture slides and notes.
Task 2: Querying (Selection and Projection)
The selection (scan) data type is defined in select.c and select.h, and the projection data type is defined in project.c and project.h. These data types are used exclusively within query.c. Currently, both data types are incomplete. Your task is to design appropriate data structures for selection and projection, and implement the necessary operations on them. Specifically, in select.c, you are required to implement n-dimensional partial-match retrieval (n-d pmr). This includes support for pattern matching, which can be implemented directly in select.c or by calling a function defined in the .c files of other data types. In project.c, your task is to implement projection without distinct, which involves selecting and possibly reordering attributes from tuples. The functions currently provided in select.c and project.c contain rough approximations to the algorithms you will need to build; you can find more details in the lecture slides and course notes. Most of the helper functions you'll need are in other data types, but you can add any others that you find necessary.
Task 3: Linear Hashing
As noted above, the current implementation is essentially a static version of single-attribute hashing. You need to add functionality to ensure that the file expands after every c insertions, where c is the page capacity c = floor(B/R) ≈ 1024/(10*n) and n is the number of attributes. Add one page at the end of the file and distribute the tuples in the "buddy" page (at index 2^d less) between the old and new pages. Determine where each tuple goes by considering d+1 bits of the hash value. This will involve modifying the addToRelation() function, and will most likely require you to add new functions into the reln.c file (and maybe other files).
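As a reference point for Task 3, the usual linear-hashing rule for choosing a tuple's data page (as covered in the lecture notes) can be sketched as follows. This is an assumption about the intended behaviour, not code from the supplied reln.c; the helper names are made up for illustration.

#include <stdint.h>

uint32_t lowerBits(uint32_t val, int n)      /* low-order n bits of val */
{
    return val & ((1u << n) - 1u);
}

/* d = current depth, sp = split pointer.
 * Buckets below sp have already been split this round, so they are
 * addressed with d+1 bits; all other buckets use d bits. */
uint32_t pageForHash(uint32_t hash, int d, uint32_t sp)
{
    uint32_t p = lowerBits(hash, d);
    if (p < sp)
        p = lowerBits(hash, d + 1);
    return p;
}

During a split, the same idea is applied with d+1 bits to decide whether each tuple from the buddy page stays where it is or moves to the newly added page at the end of the file.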
You can simplify the standard version of linear hashing by not removing overflow pages from the overflow chain of the data page they are attached to. This may result in some data pages having multiple empty overflow pages; this is ok if they are eventually used to hold more tuples. The following diagram shows an example of what might occur during a page split: How we Test your Submission You need to submit a single zip file containing all of the code files that are needed to build the create, dump, insert, query and stats commands. Note that we will use the original versions of create.c, dump.c, insert.c, query.c, stats.c, and gendata.c for testing your code. This means that any functions you write must use the same interface as defined in the ADT *.h files. DO NOT MODIFY OR DELETE any existing interfaces in the ADTs. When you want to submit your work, make sure to follow the steps below. Failing to do so may result in your code not appearing in the correct directory during testing, and it will fail to compile: $ cd Your/ass2/Directory $ zip ass2_zID.zip Makefile bits.h chvec.h defs.h hash.h page.h select.h project.h reln.h tuple.h util.h bits.c chvec.c hash.c page.c select.c project.c reln.c tuple.c util.c Once you have generated the ass2_zID.zip file, you can submit it via Moodle. We will compile your submission for testing as follows: $ unzip YourAss2.zip $ tar xf OurMainPrograms.tar # extracts our copies of ... # create.c dump.c insert.c query.c stats.c $ make # should produce executables ... # create dump insert query stats We will then run a range of tests to check that your program meets the requirements given above. Since we are using the original create.c, etc., your code must work with them. The easiest way to ensure this is to not change these files while you're working on the assignment. Assignment Submission Submission • You need to submit a single zip file containing all of the code files that are needed to build the create, dump, insert, query and stats commands via Moodle. o Noted, we will use the original versions of create.c, dump.c, insert.c, query.c, stats.c, and gendata.c for testing your code. This means that any functions you write must use the same interface as defined in the ADT *.h files. DO NOT MODIFY OR DELETE any existing interfaces in the ADTs. For more details, please refer to the Section ‘How we Test your Submission’. • Please name your ZIP file in the following format to submit: ass2_ zID.zip (e.g., ass2_ z5000000.zip). Note: 1. If you have problems relating to your submission, please email to xingyu.tan@un sw.edu.au. 2. If there are issues with Moodle, send your assignment to the above email with the subject title “ COMP9315 Ass2 Submission”. Late Submission Penalty • 5% of the max mark (20 marks) will be deducted for each additional day. • Submissions that are more than five days late will not be marked. Plagiarism The work you submit must be your own work. Submission of work partially or completely derived from any other person or jointly written with any other person is not permitted. The penalties for such an offence may include negative marks, automatic failure of the course and possibly other academic discipline. All submissions will be checked for plagiarism. The university regards plagiarism as a form. of academic misconduct and has very strict rules. Not knowing the rules will not be considered a valid excuse when you are caught. 
• For UNSW policies, penalties, and information to help avoid plagiarism, please see: https://student.unsw.edu.au/plagiarism. • For guidelines in the online ELISE tutorials for all new UNSW students: https://subjectguides.library.unsw.edu.au/elise/plagiarism.

$25.00 View

[SOLVED] AMATH 483 / 583 Roche - Homework Set 8

AMATH 483 / 583 (Roche) - Homework Set 8
Due Friday June 6, 5pm PT
May 30, 2025
Homework 8 (80 points)
1. (+20) Fourier transforms. Evaluate the Fourier transform of the following functions by hand. Use the definitions I provided (this convention is common in physics and is also now the default used in WolframAlpha - a powerful math AI tool), as well as the definition for the Dirac delta I used, if needed. (A worked example for part (c) is sketched after this problem list.)
(a) f(x) =
(b) f(t) = sin(ω0 t), ω0 constant
(c) f(x) = e^(−a|x|) and a > 0
(d) (distribution) f(t) = δ(t)
2. (+10) Correlation. By definition, correlation is and measures how similar one signal or data function is to another. Let p() = ⟨p⟩ + δp() and q() = ⟨q⟩ + δq(), where ⟨ ⟩ and δ() denote the mean values and fluctuation functions (deviations about the mean). Two functions are defined to be uncorrelated when Evaluate the correlation of the following functions:
3. (+10) Autocorrelation. As an aside, periodic functions exhibit pronounced autocorrelations, as shifting such a function by its period puts the function directly on itself. Alternatively, random functions or noise are characterized as being uncorrelated. Evaluate the autocorrelation of the following function:
4. (+20) Fourier transform diffusion equation solve. Consider the diffusion equation, where T(x, t) describes the temperature profile of a long metal rod.
(a) Assume you know T(x, 0) and define the Fourier transform of T(x, t) to be T̂(k, t). Transform the original equation and initial conditions into k-space. Solve the resulting equation. Inverse transform the result to obtain the solution in terms of the original variables.
(b) Find the temperature in the rod given initial conditions and
5. (+20) Compare FFTW to CUFFT on HYAK. Measure and plot the performance of calculating the gradient of a 3D double complex plane wave defined on cubic lattices of dimension n³ from 16³ to n = 256³, stride n* = 2, for both the FFTW and CUDA FFT (CUFFT) implementations on HYAK. Let each n be measured n_trial times and plot the average performance for each case versus n, with n_trial ≥ 3. Submit your performance plot and C++ test code. Your plot will have 'flops' on the y-axis (or some appropriate unit of FLOPs) and the dimension of the cubic lattices (n) on the x-axis. You will need to estimate the operation count of computing the derivative using FFT on a lattice.
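As an illustration of the hand calculation expected in problem 1, part (c) works out as follows under the common unnormalised physics convention F(k) = ∫ f(x) e^(−ikx) dx (the prefactor will differ if the course's definition includes a 1/√(2π) or 1/(2π) factor, which is not reproduced in this extract):

F(k) = ∫_{−∞}^{∞} e^(−a|x|) e^(−ikx) dx
     = ∫_{0}^{∞} e^(−ax) (e^(ikx) + e^(−ikx)) dx
     = 1/(a − ik) + 1/(a + ik)
     = 2a / (a² + k²),   a > 0,

i.e. a Lorentzian in k; with a symmetric 1/√(2π) convention the result becomes √(2/π) · a/(a² + k²).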

$25.00 View

[SOLVED] PSTAT 131 Final Project

PSTAT 131 Final Project
Submission Contents
You should submit a .zip file that contains the following:
- the data set(s) of your choice, which should be suitable for a machine learning project
- an .Rmd (R Markdown) file containing your project, in the form of a written report
- the knitted .html or .pdf file containing your project
- any .R files (R scripts) containing work on your project. The degree of organization of these can vary, but they should at least have meaningful file titles, like "eda.R" or "missing_data_analyses.R", etc.
- any raw data files. Exceptions can be made. For instance, if your data files are huge in terms of megabytes, you don't have to submit them. If your data is proprietary or confidential, you don't have to submit it.
- a code book. This should take the form of a document (either .doc, .html, .pdf, or .txt) that, at minimum, identifies and defines each column in your final data set. If a variable takes on different values (for example, 1 = "single," 2 = "married," etc.), those values should be defined in the code book.
If the .zip file is too large to submit via Canvas, you may submit it to the instructor (me) personally via email, either as an attachment or via Google Drive, etc.
Report Contents
Your final project report should be written similarly to a paper, with figures, code, and results included throughout to illustrate your points and findings. Text should be included to guide the reader. I recommend reading through the example projects to get an idea of this layout, and referencing the project rubric for more information. Specifically, your report must contain:
- An introduction section: Describes the data and the research questions, and provides any background readers need to understand your project, etc.
- A conclusion section: Discusses the outcome(s) of the models you fit. Which models performed well, which performed poorly? Were you surprised by model performance? Next steps? General conclusions?
- A table of contents
- A section for exploratory data analysis: This should contain at least 3 to 5 visualizations and/or tables and their interpretation/discussion. At minimum you should create a univariate visualization of the outcome(s), a bivariate or multivariate visualization of the relationship(s) between the outcome and select predictors, etc. Part of an EDA involves asking questions about your data and exploring your data to find the answers.
- A section discussing data splitting and cross-validation: Describe your process of splitting data into training, test, and/or validation sets. Describe the process of cross-validation. Remember to write for a general audience. Act as if your project will be read by people new to machine learning.
- A section discussing model fitting: Describe the types of models you fit, their parameter values, and the results.
- Model selection and performance: A table and/or graph describing the performance of your best-fitting model on testing data. Describe your best-fitting model however you choose, and the quality of its predictions, etc.

$25.00 View

[SOLVED] CSE416 Introduction to Machine Learning

CSE416 Introduction to Machine Learning Course Information¶ Welcome to the course! STAT 416 and CSE 416 are run as a joint course. If you are registered for STAT 416, that code will show up on your transcript. and be counted toward your degree as STAT credit. If you are registered for CSE 416, that code will show up on your transcript. and be counted toward your degree as CSE credit. Teaching Staff Instructors: Dan Kowalczyk Instructor Contact: Please contact on Ed Discussion Course TAs: Course Staff TA Contact: Please contact on Ed Discussion Registration Questions: CSE Advisors ([email protected]) Other Info · Prerequisite: o Programming: CSE 123, CSE 143, CSE 160, or CSE 163. o Statistics: STAT 311, STAT 390, STAT 391, IND E 315, or Q SCI 381. · Course Website: Here! (https://courses.cs.washington.edu/courses/cse416/25sp/ or https://cs.uw.edu/416) · Textbook: None · Feedback: You can submit (anonymous) feedback for the class here. Goals¶ Machine learning (ML) touches all aspects of our lives. It informs financial decisions, policy decisions, and hiring decisions. It is used in cutting-edge scientific research. It enables us to communicate across lingual borders, to receive tailored news feeds, and to apply puppy filters to our faces. Yet, machine learning’s influence has been far from uniformly positive, with documented cases of discrimination against gender minorities, racial minorities, and low-income people, with devastating real-world consequences. It is crucial for members of our modern society to be able to understand and shape the machine learning systems that are having such a powerful impact on our lives. To that end, this course is designed to provide a thorough grounding in the methodologies, technologies, and algorithms of machine learning, as well as to provide frameworks to think about the positive and negative social impacts of machine learning systems. The topics of the course draw from classical statistics, from machine learning, from data mining, from statistical algorithms, and from science and technology studies. The course is broken up into five overarching case studies (order might change): 1. Regression 2. Classification 3. Deep Learning 4. Clustering and Similarity 5. Recommender Systems Students entering the class should have a pre-existing working knowledge of probability, statistics and algorithms, although the class has been designed to allow students with a strong numerate background to catch up and fully participate. Students should also have a pre-existing working familiarity with computer programming. Course Rigor¶ There are many places to learn about machine learning online and at the University of Washington. CSE/STAT 416 is intended for the broadest audience of students. We want to make sure everyone can leave this class with a strong foundational understanding of machine learning techniques and concepts. Our guiding philosophy for this course now and when we were originally designing it is: Everyone should be able to learn machine learning, so our job is to make tough concepts intuitive and applicable. In practice, this means: · We minimize pre-requisite knowledge as much as possible. Students enrolled in this course should not be scared of seeing something they haven’t seen before and we encourage an environment where students grow as learners. · We focus on important ideas and sometimes skip derivations or proofs to avoid getting bogged down. 
This does not mean proofs and derivations are not important (they are!), but we just don’t necessarily have the capacity to tackle them in this course. Alternate courses like CSE 446/546 or STAT 435 dive much deeper into the mathematical basis of machine learning. · We focus on the ability to apply theory to practice to best help students use these important concepts, or know when not to use them. If you are a student that wants a much deeper course in machine learning, that’s great! We recommend keeping up with the optional readings we post, or consider taking CSE 446/546 or STAT 435. Inclusion¶ All students are welcome in CSE/STAT 416 and are entitled to be treated respectfully by both classmates and the course staff. We strive to create a challenging but inclusive environment that is conducive to learning for all students. If at any time you feel that you are not experiencing an inclusive environment, or you are made to feel uncomfortable, disrespected, or excluded, please report the incident so that we may address the issue and maintain a supportive and inclusive learning environment. You may contact the course staff or the CSE academic advisors to express your concerns. Should you feel uncomfortable bringing up an issue with a staff member directly, you may also consider sending anonymous feedback or contacting the UW Office of the Ombud. Class Sessions¶ The class is primarily in-person this quarter, with some preparation work being completed before coming to class. Class time will be a mix of time for lecture and time for students to work on activities. Time in class spent on students actively participating in their learning has been shown numerous times to improve learning outcomes for students. To ensure there is time in class for these opportunities for active practice, we may also ask you to prepare to come to class each day by watching a pre-lecture video or reading a pre-lecture reading each day of class. The video or reading should take about 30 minutes to complete before class and the class session will begin where the pre-lecture content left off. Lecture attendance is not recorded, but it is expected that you attend class in order to stay on top of the course. Recordings of class will be available of the live lectures, but it is encouraged that you attend the live session if you are able so that you can 1) benefit from active participation with with your peers and 2) ask questions during class. After every lecture there will be a checkpoint, in the form. of an EdStem quiz, to help you test your retention of lecture concepts and identify areas you might need to study more. We recommend the following workflow to help you get the most out of your time with us during the quarter. For each day of class: · Complete the Pre-Class Content: On days where there is pre-class content, watch the videos and/or do the required readings for the day. Learning is a process of trial and error! Write down your though process and if the explanation provided in the video/reading reveals something you didn’t think of originally, make sure to write that down! o You should definitely be taking notes to refer to later! The videos will be broken up into smaller portions, so use the time before starting the next video to make sure your notes on the last section are complete. o Take pauses to pause and reflect on what you’re learning so far. How does this new concept relate to a past one? What doesn’t quite make sense about this yet? 
· Attend the Class Session: Come to class prepared with your notes from the pre-class content and what you found hard to understand or what you want to learn more about. We will start with a brief overview of the pre-lecture content and then dive in to the material for the day. There will be periodic time to answer questions that come up in class and time to work in groups on interactive exercises. · Reflect: Now that you got some practice with the material for the day, it’s time to reflect on what you learned and you how you felt the day went. Write down a closing section of your notes to summarize what you learned and leave notes to yourself about what you might need to study more. o In a few sentences, describe in your own words what you learned learned today. o Why did we learn this concept? o How does this concept relate to what we’ve learned previously? o What parts did you find tricky? Are there things you feel like you still need to work on more before you master the concept? · Checkpoint: Complete the checkpoint to test your retention of lecture concepts. Checkpoints are due before the next lecture. We recommend you complete it the day after lecture, so the concepts have had time to sink in. You’ll find that with a topic like machine learning, which has many interconnected concepts, having a good set of notes to work from as a knowledge base is very important. Your goal is to try to build up a mental model for: (1) which techniques or ideas are relevant to a particular situation; (2) how one idea compares/contrasts with another; and (3) recalling terms and definitions from class. Trying to write these down as you go is an important step in the learning process, even though it takes extra time while you are taking notes. We’ll come back to this idea later in the syllabus, but you’ll find a good set of notes made during the week will be quite helpful when it comes time to do the weekly Learning Reflections. Quiz Sections¶ There are also quiz sections on Thursdays that will operate to provide structured practice. Sections will differ from our M/W class sessions in that they are smaller and run by the teaching assistants (TAs). Sections will focus on tying together concepts from the various days and giving you programming experience that will help prepare you for your weekly homework assignments! Attendance in the class sessions and discussion sections are not recorded or used as part of your grade. However, it is expected that you are showing up and participating as much as you can to maximize your learning! Sections are held in-person. You’re welcome to attend any section, even if you’re not officially registered to attend that one. Required Coursework¶ There will be four categories of course work you will do in this class: Checkpoints (every lecture) Due Dates: Before the next lecture To go along with each lecture, there will be a “Checkpoint” for you to take on EdStem. Each checkpoint will consist of a few questions that help you test your understanding of concepts covered in the pre-lecture content and lecture. The questions are intended to be straightforward to answer and are provided to help you better assess your learning through the course. Each checkpoint should not require more than 20-30 minutes. Checkpoint questions come in two forms: multiple choices (selecting one choice or many choices) and numerical answers. You have unlimited attempts on these questions. Once you get a question right, you should be informed that the answer is correct along with an explanation. 
Checkpoints are graded on both completion and correctness; to earn the points for a checkpoint, you need to successfully complete all sections (called a “slide”) of the checkpoint before the due date. Solutions will be available after the deadline. Completion On EdStem, you will see a green checkmark next to each slide you complete. If you complete all of the slides in a checkpoint, you will see a green checkmark on the main page of lessons in Ed. This green checkmark on the lesson itself (not the individual slides) means you have earned an E for the entire checkpoint. See Ed for more information! Checkpoints will be due before the next class. So for a class on Wednesday, the checkpoint for that day is due the following Monday. Checkpoints can be submitted up to 7 days late after the day they were due for 50% credit. You will submit Checkpoints on EdStem in the “Lessons” tab under that day’s lesson. Homeworks (weekly) Due Dates: 10:00 pm every Thuresday, completed individually. Longer programming and conceptual assignments that will assess your mastery of the skills and concepts covered in class. Homework assignments primarily cover topics from the lectures that week (e.g., the homework released on week 4 will cover the lecture content from Mon and Wed of week 4). Each homework is generally divided into two parts: · Programming: This part involves writing Python code for specific programming problems that apply various machine learning topics learnt in class. You will write your code locally on EdStem (for most assignments) or Google Colaboratory. You will submit your code on EdStem (most assignments) or Gradescope, where there will be an autograder to ensure your code’s full accuracy. Autograder score is visible at the time of submission, and the autograded score on your last submission is final. For programming assignments that are autograded (most assignments), we primarily care about the behavior. of your program so there’s no need to worry about code quality (although goode quality code is easier to write, read, and debug when bugs arise). However, we won’t award extra points for effort if your code fails, so please make sure to debug them. Please do not hard-code your variables, as we will use different datasets for the unit tests. · · Conceptual: Conceptual questions check your understanding of the core machine learning concepts taught in the course. There will be two types of questions: (1) multiple choice, which are similar to checkpoint questions; and (2) free response, which will require you to defend your answers and show detailed work to ensure you have a thorough understanding of class concepts. The conceptual component of the homework is hosted on Gradescope, and you have unlimited attempts before the deadline. Unlike the checkpoint questions, you won’t know the autograded score until after the deadline. · In summary, all conceptual assignments must be submitted on Gradescope for credit, and unless otherwise stated programming assignments must be submitted on EdStem for credit. Learning Reflections (weekly) Deadline: 10pm every Sunday To aid your process of learning in this course, we will ask you to build up your own reference sheet for each week so that you can better look back at what you learned each week. More details on what you should turn in can be found here, but it is meant to be a low-stakes form. of staying on top of the course. 
What we ask you to turn in is rather narrow in scope in the scope of your learning, so you are encouraged to keep other notes or reference materials outside of what we ask you to turn in! You will submit learning reflections on Gradescope. Exams (two) There will be two exams in the course, with slightly different sets of logistics. · Midterm: A take-home midterm (online) · Final: An in-person final (paper) during our final exam slot (TBD) A make-up final exam will only be given in case of a serious emergency. If you miss an exam, even if you are sick or injured, you must contact the instructor BEFORE the exam (or arrange for someone to do so). No make-ups will be granted for personal reasons such a travel or conflicting schedules. No accommodations will be made for students who arrive late to exams. The only accommodations for exams that will be made are those that correspond to the University’s official accessibility guidelines, which must be reflected in your student account. Getting Help from Staff & Peers¶ There are two primary ways to get help from course staff: office hours and the EdStem discussion board. Office Hours are scheduled times where you can meet with members of the course staff synchronously to discuss course concepts, get assistance with specific parts of the assignments, discuss computer science and/or life outside of it, and work with peers and course staff in a small group setting. The EdStem discussion board is a way to asynchronously get answers to your questions. You can submit questions anonymously or privately to the course staff if you prefer. The course staff try to check the discussion board frequently during “business hours” (10 am - 9 pm) and will aim to get a response time to posts within those times of about 3 hours. Students are encouraged to respond to each others’ posts as well! Finally, you will make a lot of academic progress by making friends and interacting with other peers. We strongly encourage you to form. study groups by exchanging contact information with your peers and working together to review course concepts. Late Work¶ Homeworks Each student receives six late days for the entire quarter. You may use up to 2 late days on each homework except the last one, and each late day allows you to submit up to 24 hours late without penalty. Once a student has used up all their late days, each successive day that an assignment is late will result in a loss of 10% on that assignment. The deduction will not be immediately applied, but will be reflected in the final grades you see on Canvas by the end of the quarter. It is your responsibility to track how many late days you have used in the quarter. You do not need to contact the course staff if you want to use a late day; our tools are set up to allow you to turn in late by your choice. There is a small grace period for last-minute submission issues, but you should plan ahead to avoid depending on it. No assignment may be submitted more than 2 days (48 hours) late without permission from the course instructor. If unusual circumstances truly beyond your control prevent you from submitting an assignment, you should discuss this with the course staff as soon as possible. If you contact us well in advance of the deadline, we may be able to show more flexibility in some cases. Checkpoints You cannot use late days on checkpoints. However you can turn them in up to a week after their due date for 50% credit. Learning Reflections You cannot use late days on learning reflections. 
However you can turn them in up to a week after their due date for 50% credit.
Exams
You cannot use late days on exams. We do not accept late submissions.
Note on Extenuating Circumstances: If you have extenuating circumstances for an assignment, please contact the instructor as soon as possible to discuss accommodations. See the section below on Extenuating Circumstances.
Grades¶
Grade Breakdown¶
Your percentage grade in this course will be weighted using these categories:
Category                    Total Num    Weight
Homeworks - Programming     8            35%
Homeworks - Conceptual      8            20%
Checkpoints                 19           5%
Learning Reflections        10           10%
Exam - Midterm              1            10%
Exam - Final                1            20%
Total                                    100%
We will also drop your lowest 2 Checkpoints and 1 Learning Reflection in your final grade computation. There may be small extra credit opportunities as well, but these will not make a major impact on course grades. Extra credit can affect your grade by potentially pushing you up to the next grade point if you are very close (e.g. 3.0 to 3.1). These opportunities are meant to be fun extensions rather than required parts of the course. Our advice is to complete extra credit for your own learning or review, but it is unlikely to be an efficient use of your time if you are completing it solely to boost your grade.
Course Grades¶
A very common question students ask is: “Is this class curved?” Curving is generally seen as a process of assigning course grades so that there is a fixed, pre-determined mean or median (although there are many different things people can mean when something is curved). We do not curve in this course. Instead, we will assign course grades using a bucket system: if you earn at least the percentage specified in the left column, your course grade will be at least the grade listed on the right. These are minimum guarantees: your course grade could be higher than what this table suggests. Do note, we do not make any guarantees of the course grades within these buckets.
Percent Earned    Course Grade
100               4.0
90                3.5
80                3.0
70                2.5
60                2.0
50                0.7
Do note that these are only minimum guarantees and do not make any guarantees on how high your grade can be. Missing the requirement to get a particular grade does not make it impossible to earn that grade; we just can’t give you a promise that you will have it. In other words, it is still possible to get a 3.5 even if your percentage is less than 90%, we just can’t make you a guarantee that will happen.
Academic Honesty and Collaboration Policies¶
Philosophy¶
Learning is a collaborative process, and everyone benefits from working with others when learning new concepts and skills. In general, we encourage you to collaborate with your classmates in your learning and take advantage of each others’ experience, understanding, and perspectives. However, there is a difference between learning collaboratively and completing work for someone else. This can be a subtle but important distinction. Ultimately, the goal of the course is to ensure that every student masters the material and develops the skills to succeed in future courses, projects, and other related work. Submitting work that is not your own, or allowing another student to submit your work as their own, does not contribute toward developing mastery. In addition, this deprives you of the ability to receive feedback and support from the course staff in addressing the areas in which you are struggling. For more information, consult the Allen School policy on academic misconduct.
Permitted and Prohibited Actions
Sometimes the line between productive collaboration and academic dishonesty can be confusing. The following is a partial list of collaborative actions that are encouraged and of actions that are prohibited. This list is not intended to be exhaustive; there are many actions not included that may fall under either heading. It is here to help you understand examples of things that are and aren’t allowed. If you are ever unsure, please ask the course staff before potentially acting in a way that violates this policy.
Encouraged: The following types of collaboration are encouraged:
· Discussing the content of lessons, sections or any provided examples.
· Working collaboratively on solutions to practice problems or checkpoints.
· Posting and responding to questions on the course message board, including responding to questions from other students (without providing assessment code; see below).
· Describing, either verbally or in text, your approach to a take-home assessment at a high level and in such a way that the person receiving the description cannot reliably reproduce your exact work. Such a description should be in English or another natural human language (i.e., not code).
· Asking a member of the course staff about concepts with which you are struggling or bugs in your work.
Prohibited: The following types of collaboration are prohibited and may constitute academic misconduct:
· Looking at another person’s submission on a take-home assessment, or substantially similar code, at any point, in any form, for any reason, and for any amount of time. This restriction includes work written by classmates, family members or friends, former students, and online resources (such as GitHub or Chegg), among other sources.
· Asking a chat bot (e.g., ChatGPT) to write answers to questions for you.
· Showing or providing your submission on a take-home assessment to another student at any time, in any format, for any reason.
· Submitting work that contains code copied from another resource, even with edits or changes, except for resources explicitly provided by the course staff.
· Having another person “walk you through” work you submit, or walking another person through work they submit, such that the work produced can be entirely and reliably reconstructed from the instructions provided. (That is, submitting work that you produced simply by following instructions on what to write.) This restriction includes classmates, former students, family members or friends, paid tutors or consultants, “homework support” services (such as Chegg), etc.
If you discuss an assignment with one or more classmates, you must specify with whom you collaborated in the header comment in your submission. You may discuss with as many classmates as you like, but you must cite all of them in your work. Note that you may not collaborate in a way that is prohibited, even if you cite the collaboration.
Tip! A good rule of thumb for ensuring your collaboration is allowed is not to take written notes, photographs, or other records during your discussion, and to wait at least 30 minutes after completing the discussion before returning to your own work. You could use this time to relax, watch TV, listen to a podcast, or do work for another class. For most students, this will result in you bringing only the high-level concepts of the collaboration back to your work, and ensuring that you reconstruct the ideas on your own.
Instead of utilizing forbidden resources, we hope you will submit whatever work you have, even if it is not yet complete, so you can get feedback and revise your work later. If you are ever in doubt about whether a collaboration or resource is permitted, please contact a member of the course staff.
Amnesty
The course staff has endeavored to create an environment in which all students feel empowered and encouraged to submit their own work, regardless of the quality, and to avoid prohibited collaboration. However, despite our best efforts, students may occasionally exercise poor judgement and violate this policy. In many cases, these students come to regret this decision almost immediately. To that end, we offer the following opportunity for amnesty: If you submit work that is in violation of the academic conduct policy, you may bring the action to the instructor’s attention within 72 hours of submission and request amnesty. If you do so, you will receive a reduced grade on just that assignment, but no further action will be taken. This action will not be shared outside of the course staff and will not be part of any academic record except in the case of repeated acts or abuses of the policy. This policy is designed to allow students who have acted in a way they regret the opportunity to correct the situation and complete their work in a permitted way. It is not intended to provide forgiveness for violations that are detected by the course staff, nor to be invoked frequently. It is still in your best interest to submit whatever work you have completed so that you can receive feedback and support.
Software and Textbooks
Most of the course will be run on EdStem. This includes the checkpoints and the discussion board, and it is also where you can complete the coding portions of assignments. We will use EdStem to turn in checkpoints, and Gradescope to turn in the homeworks and learning reflections. There is no official textbook for the course yet (we are currently working on one!), so any readings or videos will be posted on the course website. One of the benefits of using EdStem is that it comes fully featured with an online programming environment. This means you do not need to install any software as long as you can work online! If you would like to set up Python for local development, we will use Python 3 installed via Anaconda for all programming assignments, and Jupyter Notebooks are the editors we will use for our programming. You may use other installations of Python or IDEs, but the course staff may not necessarily be able to help you with any set-up related questions you may have. This quarter, we will use a number of different tools in CSE/STAT 416. Reach out to the course staff if you have questions about using any of them. Please see the Course Tools page for more information about the particular tools we will use this quarter and our policies surrounding them.
Course Climate
Extenuating Circumstances: “Don’t Suffer in Silence”
We recognize that our students come from varied backgrounds and can have widely-varying circumstances. We also acknowledge that the Covid-19 pandemic is ongoing, which gives rise to challenging and unique situations. If you have any unforeseen circumstances that arise during the course, please do not hesitate to contact the instructor to discuss your situation. The sooner we are made aware, the more easily we can provide accommodations.
Typically, extenuating circumstances include work-school balance, familial responsibilities, health concerns, or anything else beyond your control that may negatively impact your performance in the class. Additionally, while some amount of “productive struggle” is healthy for learning, you should ask the course staff for help if you have been stuck on an issue for a very long time. Life happens! While our focus is providing an excellent educational environment, our course does not exist in a vacuum. Our ultimate goal as a course staff is to provide you with the ability to be successful, and we encourage you to work with us to make that happen.
Disabilities
Your experience in this class should not be affected by any disabilities that you may have. The Disability Resources for Students (DRS) office can help you establish accommodations with the course staff. DRS Instructions for Students: If you have already established accommodations with DRS, please communicate your approved accommodations to the lecturers at your earliest convenience so we can discuss your needs in this course. If you have not yet established services through DRS, but have a temporary health condition or permanent disability that requires accommodations (conditions include, but are not limited to: mental health, attention-related, learning, vision, hearing, physical, or health impacts), you are welcome to contact DRS. DRS offers resources and coordinates reasonable accommodations for students with disabilities and/or temporary health conditions. Reasonable accommodations are established through an interactive process between you, your lecturer(s) and DRS. It is the policy and practice of the University of Washington to create inclusive and accessible learning environments consistent with federal and state law.
Religious Accommodations
Washington state law requires that UW develop a policy for accommodation of student absences or significant hardship due to reasons of faith or conscience, or for organized religious activities. The UW’s policy, including more information about how to request an accommodation, is available at Religious Accommodations Policy. Accommodations must be requested within the first two weeks of this course using the Religious Accommodations Request form.

$25.00 View

[SOLVED] 161777 Practical Data Mining 2025R

161.777 Practical Data Mining Project 2025
General information
This project is assessed. Your work must be submitted by 11:59pm on 9th June 2025. There are questions on prediction and classification, as well as clustering and association rules. For the prediction and classification problems you should provide both the writeup of your methodology in the space provided in the project.Rmd file, as well as a .csv file for each exercise. In each case these should be the provided test set, with an additional column containing your predictions or classifications. Make sure that your predictions or classifications make sense, e.g. that your .csv file has the correct number of rows, and that there is a clear column with your predictions. You may include your code to do the prediction and classification in the project.Rmd provided if you wish. Alternatively, you may do your modelling work in a separate R script or markdown file. This may save re-running code whenever you Knit your project.Rmd as you write up your methodology and answer the exercises on clustering and association rules. Start by downloading the project file (right-click and choose Save File As…) here: https://www.massey.ac.nz/~jcmarsha/161777/assessment/project.Rmd Then load it into RStudio.
Exercise 1: Predicting Arctic Ice Thickness [25 marks]
This exercise is concerned with measurements of arctic ice thickness collected by scientific vessels in the Arctic Ocean. Each record corresponds to data collected over a single track (voyage between two points). The variables are:
Year: Calendar year
MinDay: First day of track (day of the year)
MaxDay: Last day of track (day of the year)
MinLat: Minimum latitude of track, in degrees
MaxLat: Maximum latitude of track, in degrees
MinLon: Minimum longitude of track, in degrees
MaxLon: Maximum longitude of track, in degrees
Length: Length of track, in km
Nsamps: Number of thickness measurements taken during track
Thickness: Mean ice thickness over track, in metres (target)
The aim is to predict the target variable Thickness using information from the other 9 variables (the predictors). The training dataset contains 10,000 records (on all variables) while the test dataset contains 1,085 records (on all variables except for Thickness). You are to produce predictions for the Thickness variable on the 1,085 records in the test data, adding it as a column to the ice.test data frame and writing it to a .csv file. You can do this with write_csv(ice.test.with.predictions, "ex1_studentid.csv") replacing studentid with your student id code. Please make sure that the column with your predictions is named .pred. This is the default if you use tidymodels. (A minimal code sketch of this workflow is shown after Exercise 4 below.) Make sure you attach your ex1_studentid.csv file to your stream submission. You will be scored based on root mean squared error on the test set, worth a total of 15 marks. A score of 0 will correspond to utilising the mean of the target on the training set as your predictions. You should also write up your methodology for answering this question in the section provided in the project.Rmd file. This writeup should include:
· What, if any, data processing you performed.
· What exploratory analysis you undertook.
· Which modelling techniques you tried, including information on predictors chosen and any tuning performed.
· EITHER: A statement that you did not use information from outside of the course, OR: A statement detailing what information from outside the course was used, where it was from, and how you learnt from that.
· A clear statement of which model you chose, and why you chose it.
Your methodology will be marked out of 10 marks.
Exercise 2: Classifying Campylobacter [25 marks]
This exercise is concerned with some Campylobacter isolates that were collected as part of the Source Assigned Case Control Study in New Zealand (SACNZ) project. There are 400 isolates of Campylobacter isolated from Beef, Sheep or Poultry sources (from faeces or meat). Each isolate was whole-genome sequenced, with 1343 ‘core’ genes identified. For each isolate and each gene, sequences were scored based on sequence similarity, projecting each gene into two-dimensional space, and a subset of 25 genes was selected. The training data thus consists of 50 numeric predictors, named as, for example, CAMP0003.V1, representing the first score of the CAMP0003 gene, and the target Source, which takes the values Beef, Sheep or Poultry for these 400 isolates. In addition, we have a test dataset with 160 additional isolates where the Source is unknown. Based on previous work, we would expect that Poultry isolates are likely to differ compared to Beef and Sheep, but distinguishing Beef and Sheep may be difficult. You are to produce classifications for the Source variable on the 160 records in the test data, adding it as a column to the campy.test data frame and writing it to a .csv file. You can do this with write_csv(campy.test.with.classifications, "ex2_studentid.csv") replacing studentid with your student id code. Please make sure your classifications are named .pred_class. This is the default if you use tidymodels. Make sure you attach your ex2_studentid.csv file to your stream submission. You will be scored based on classification rate, worth a total of 15 marks. A score of 0 will correspond to picking sources at random. You should also write up your methodology for answering this question in the section provided in the project.Rmd file. This writeup should include:
· What, if any, data processing you performed.
· What exploratory analysis you undertook.
· Which modelling techniques you tried, including information on predictors chosen and any tuning performed.
· EITHER: A statement that you did not use information from outside of the course, OR: A statement detailing what information from outside the course was used, where it was from, and how you learnt from that.
· A clear statement of which model you chose, and why you chose it.
Your methodology will be marked out of 10 marks.
Exercise 3: Clustering the New Zealand World Values Survey [32 marks]
We consider some data from the New Zealand version of the 2014 World Values Survey. The variables we consider are:
Female: 1 means yes, the person is female; 0 means male
Age: in years
StateofHealth: coded as 1 = very good, 2 = good, 3 = fair, 4 = poor
MaritalStatus: 1 = married, 2 = living together as married, 3 = divorced, 4 = separated, 5 = widowed, 6 = single
HighestEducation: coded as 1 = no secondary schooling, 2 = some secondary, 3 = finished secondary, 4 = some tertiary, 5 = degree
LifeSatisfaction: coded from 1 (least satisfied) to 10 (most satisfied)
IncomeDecile: coded from 1 (least income) to 10 (most income) decile of the population
SocialClass: coded by self-identification as 1 = upper class, 2 = upper middle class, 3 = lower middle class, 4 = working class, 5 = lower class
BelieveinGod: 1 = yes, 0 = no
FeelingofHappiness: 1 = very happy, 2 = rather happy, 3 = not very happy, 4 = not at all happy
All other answers to these questions are coded as missing, NA.
1. Do an initial exploration of the dataset:
o Show plots and summary tables examining the distributions of values of each variable, commenting on the distributions. 6 marks
o Some sorts of people may be more or less willing to do a Values survey. Comment on how this may bias the survey. 2 marks
o Which three variables have the most missing values? Suggest possible reasons for these. 2 marks
2. Prepare a new version of the dataset with all variables normalised. Comment on why this step is appropriate prior to a cluster analysis for these data. 3 marks
3. Produce a dendrogram of the sample, selecting the variables Age, Female, MaritalStatus, StateofHealth, HighestEducation and LifeSatisfaction, after normalisation. Remove all missing values. Use hierarchical clustering with complete linkage. What do you think is the best number of clusters to divide the sample into? 4 marks
4. Perform a k-means cluster analysis of the new reduced, normalised dataset, based on Euclidean distances. Choose an appropriate value of k within 1, 2, …, 10, and explain why you chose it. Include graphs of tot.withinss and average silhouette width, and silhouettes. Note there is no one perfect answer. 6 marks
5. Regardless of your answer to part 3, show a fviz_cluster graph, based on 4 clusters and complete hierarchical linkage. Does the graph show sufficient separation between the clusters? Explain. 3 marks
6. Assume four clusters, and use kmeans. Explore the differences among the clusters using boxplots. For each cluster, write a sentence or two characterizing the cluster in terms of the variables. 6 marks
Exercise 4: Association Rules [18 marks]
A vexillologist is a person who studies flags. In this exercise we will consider the colours of 211 flags of different countries. Certain colours seem to go together.
1. Make a plot showing how frequently the various colours have been used. What are the top three most popular colours? 3 marks
2. Consider only rules where at least 5 countries have used the same colour combination (i.e. set the support parameter to 0.023) and with the confidence cutoff set to confidence = 0.5. How many rules are found? 2 marks
3. Find the 5 rules with the highest support. Take the third one in this list and (i) explain the rule itself (i.e. what ‘itemset => itemset’ represents), and (ii) interpret the values of support, confidence, and lift for this rule. 6 marks
4. Find the rules for which lift is greater than 2. How many are there? Explain what these rules tell you. 3 marks
5. Suppose the island of Choiseul declares its independence. The leaders solicit ideas for a flag. Their main stipulation is that it must have black as a prominent colour. Answer the following, making sure you justify your answers from the data:
· If there is going to be at least one other colour, then on the basis of the data, which colour do you think is most likely to be chosen?
· If the leaders were tending towards a four-colour flag including black, blue and red, what would be the most likely extra colour? 4 marks
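Below are two minimal, non-authoritative code sketches related to the exercises above. They are illustrative only: the object names (ice.train, ice.test, flags) and the use of a plain linear model are assumptions rather than part of the project brief, and you are expected to choose and tune your own models.

A sketch of the Exercise 1 submission workflow (fit a model with tidymodels, add a .pred column to the test data, write the .csv):

# Assumes ice.train / ice.test data frames and the tidymodels and readr packages.
library(tidymodels)

rec  <- recipe(Thickness ~ ., data = ice.train)             # use all 9 predictors
spec <- linear_reg() %>% set_engine("lm")                   # placeholder model; tune your own
fitted_wf <- workflow() %>%
  add_recipe(rec) %>%
  add_model(spec) %>%
  fit(data = ice.train)

ice.test.with.predictions <- ice.test %>%
  bind_cols(predict(fitted_wf, new_data = ice.test))        # adds the .pred column

readr::write_csv(ice.test.with.predictions, "ex1_studentid.csv")   # replace studentid

A sketch of the Exercise 4 association-rule mining with the arules package, using the support and confidence values given above (flags is an assumed transactions object of flag colours):

library(arules)

itemFrequencyPlot(flags, topN = 10)                          # colour frequencies (part 1)

rules <- apriori(flags,
                 parameter = list(supp = 0.023, conf = 0.5)) # part 2
length(rules)                                                # number of rules found

inspect(head(sort(rules, by = "support"), 5))                # part 3: top 5 rules by support
inspect(subset(rules, lift > 2))                             # part 4: rules with lift > 2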

$25.00 View

[SOLVED] MGMT2705 Industrial relations Term 2 2025

MGMT2705 Industrial relations Term 2, 2025
Assessment 1: Critical review presentation
Due: in the week selected, slides by 5pm (AEST) Wednesday prior to class. Weight: 30%. Format: class presentation with 5 slides, 5 minutes 30 seconds maximum (plus Q&A afterwards). Submitted via Turnitin assignment.
Description of assessment task
This assessment is designed for each student to create a student-led discussion: every student will introduce and critically evaluate one article from the presentation readings listed for each week in your Tutorial Guide. In Week 1, students will choose an article and sign up to do a presentation on it; the schedules for each class are on Moodle. How you arrange your material and what you include is up to you, but your presentation should:
• summarise for other students in your class the key arguments in the article
• demonstrate how the work relates to other academic discussion on the topic (a minimum of three relevant academic sources must be used to contextualise/augment the case made in your presentation article)
• relate any key points of interest in the article
• contain one (1) question to pose to the class to encourage class debate; the presenter’s participation in debate is assessed as part of their participation mark.
• Note: 3 students will have prepared to broadly enter the debate each week. All are encouraged to be part of the debate and discussion following the presentation.
Submission instructions
• Your presentation should be uploaded to Turnitin before you present.
• Referencing: your slides should contain in-text references with page numbers where appropriate, and the final slide should list your sources, including your presentation article; a minimum of four (4) academic sources from suggested course materials is required for this task.
• Students who choose irrelevant, outdated or ‘double-dipped’ sources from other courses typically do not do well in this assessment task.
• Using academic sources to support your argument will be much more highly regarded than using media/internet sources, so let these have the most influence on your work.
• Make sure not to ‘cut and paste’ material onto your slides or into your presentation script; any plagiarism will be viewed negatively by your marker.
• Students must do a presentation to get a mark for this assessment task. Please note: using Kahoot! or other game quiz platforms is NOT encouraged.
Assessment 2: Active Participation
In your selected three (3) weeks, prepare to engage broadly with the debate surrounding the prepared readings. Weight: 20%. Materials/resources: see course & tutorial guide. Ask at least 1 question of the presenter and be able to lead the debate where needed. Note: you must attend your assigned workshops (and 80% of all workshops) to be eligible for marks. No submission required; this is an in-class activity.
Description of assessment task
We recognise that being an active learner and participant helps you to be more successful. Coming to class and engaging with the teacher and your peers is an important part of the lectures and workshop sessions. Preparation for, and active participation in, the overall course means you need to engage with your classmates each week. This will provide you with the best foundation to learn in MGMT2705, and it is what is expected in the workplace also. Make sure you take part in activities, small group discussion, short informal presentations to the class, answering questions, and class discussion.
Simply attending the Workshops without getting involved in discussion and activities is of little value either to you or your classmates, and will result in a minimal participation mark. This assessment is based on being able to engage in informed debate, which is critical in the workplace. You will nominate 3 weeks to prepare to engage deeply with the course readings. The quality of your tutorial contributions throughout the term and your overall mark will reflect the level and regularity of reading effort and insight you were able to demonstrate in tutorials. Friendly, collegial debate will contribute to a pleasant and engaging learning experience for us all.
• In your selected 3 weeks, students are expected to read a minimum of one (1) of the presentation readings and some of the short class discussion readings.
• After the presenter has completed their work, you will take an active role in the discussion with your classmates in the tutorials, demonstrating your knowledge and developing others’ understanding of each topic.
• Students should demonstrate that they have read an article by raising some new ideas/material from it. Marks will be awarded individually.
Submission instructions
In class; no other submission is required.
Supporting resources and links
In completing your workshop task, you will have to undertake some research beyond the text by using suggested readings/class materials. You should look to make the material ‘come alive’ and be relevant to your peers. Please note that Wikipedia or general websites will not be sufficient for this task.
Assessment 3: Enterprise agreement analysis
Due: August 15, 10am. Weight: 50%. Format: answer sheet, 2000 words +/- 10%.
Description of assessment task
In this assessment task, you will be asked to assess an enterprise agreement that was contentious and took a long time to finalise. As an industrial relations practitioner, manager or employee, it is vital that you understand how to read and interpret enterprise agreements, because they regulate the relationship between the parties to each agreement and everyone who falls under the scope of an agreement. That said, even quite plainly-worded clauses can be interpreted differently by the parties involved; hence, resolution of industrial disputes may require judicial determination and might turn on a mere word or phrase in an agreement. Even a comma can determine an outcome, as this article in the Guardian newspaper about a US case demonstrates. See here. As you can see from the host of media articles included on Moodle (only a small selection!), the bargaining process for a range of new EBAs is often fractious and has engendered quite a lot of public debate. How would you explain the process and eventual result in this industrial campaign? Was it most beneficial to management or workers? You be the judge. Note: the EBA for analysis will be released by week 8. Follow the directions below.
Step 1: To begin this assignment, please review the media sources provided on Moodle to get a general picture of the industrial relations environment.
Step 2: On Moodle, you will find a copy of the selected Enterprise Agreement to read. It outlines the new working conditions for employees. Read this agreement, thinking about aspects of our course that are highlighted by different clauses.
Step 3: Download the Q&A sheet that is located on Moodle. Please submit your answers on this sheet so that we can relate your answers to the relevant question. Don’t paste the questions, just the number and answers.
Look to answer the questions fully and insightfully. A minimum of six (6) academic articles is required to substantiate your answers, and all answers must be referenced. In this assessment you will apply the tools and concepts you have developed over the course to analyse an enterprise agreement. By considering benefits and trade-offs, as well as the BOOT, you will be able to interpret and implement enterprise agreements, demonstrating a solid understanding of their workings. You will be required to undertake research to support your work using available resources through the UNSW Library catalogues, databases and collections, or other appropriate sources. You can also use any materials provided by the course. Any Generative AI use must be cited and will not be considered sufficient alone.
Extra advice:
• Sometimes the answer is not obvious; you will have to think about it.
• For some questions, there is no ‘right’ answer; it will be a matter of opinion. However, some opinions have more validity than others. Your response will be more convincing if you justify your position with material and citations from relevant research.
• It’s OK to ‘google’ information if you want to do so. However, think about the quality and authority of the sites you visit. Fair Work Commission reports and fact sheets, the broadsheet newspapers, and ABC News coverage, while not entirely free of bias, are better places to start.
• Feel free to talk with your classmates about the answers, but you must fill out your sheet independently, so that you don’t inadvertently copy from each other.
• This assignment must be referenced like any other assignment, so make sure you provide in-text referencing and attach a bibliography of all the sources you cited. Please see the writing advice on Moodle if you’re not sure about referencing. To reference the EA, just put NAME EBA date, Clause X (y).

$25.00 View

[SOLVED] Benchmark and Comparison of State-of-the-Art Ontology and Vocabulary Repositories for Social Scie

Benchmark and Comparison of State-of-the-Art Ontology and Vocabulary Repositories for Social Sciences and Humanities
Abstract: The increasing adoption of the Semantic Web in the Social Sciences and Humanities (SSH) has led to the development of numerous ontology and vocabulary repositories. These repositories serve as crucial resources for structuring, sharing, and reusing domain knowledge. This paper provides a benchmark and comparative analysis of leading repositories, evaluating their scope, accessibility, interoperability, and usability. By analyzing platforms such as the Ontology Lookup Service (OLS), BioPortal, the Social Science Thesaurus, and other domain-specific repositories, we assess their relevance for SSH research. The study aims to guide students and researchers in selecting the most appropriate repository for their work. Additionally, a practical implementation proposal for a bachelor's dissertation is outlined, focusing on ontology evaluation and integration within an SSH research framework.
1. Introduction
The Semantic Web has significantly influenced knowledge management and data integration in various disciplines, including Social Sciences and Humanities (Berners-Lee et al., 2001). The use of ontologies and controlled vocabularies facilitates semantic interoperability, making repositories essential tools for researchers. However, with numerous available repositories, a comparative analysis is necessary to determine the most suitable for SSH applications (Gandon, 2018).
2. Overview of Ontology and Vocabulary Repositories
Ontology and vocabulary repositories provide structured knowledge representations that enhance data discovery and integration. The most notable repositories include:
● Ontology Lookup Service (OLS) – A service aggregating ontologies across multiple domains (Côté et al., 2006).
● BioPortal – Originally focused on biomedical ontologies but expanding to social sciences (Musen et al., 2012).
● LOV (Linked Open Vocabularies) – A repository for linked data vocabularies (Vandenbussche et al., 2017).
● Social Science Thesaurus – A specialized vocabulary for social science research (GESIS, 2020).
● BARTOC (Basic Register of Thesauri, Ontologies & Classifications) – A catalog of knowledge organization systems (Kempf et al., 2019).
3. Benchmarking Criteria
To evaluate these repositories, the following criteria are considered:
● Scope and Coverage – The breadth of subjects covered within SSH.
● Interoperability – Compatibility with linked data and Semantic Web technologies (Heath & Bizer, 2011).
● Usability – The user interface and ease of access for non-technical researchers.
● Community Support and Maintenance – Frequency of updates and community engagement.
● Integration with Research Tools – Compatibility with RDF, SPARQL, and data visualization tools.
4. Comparative Analysis
Each repository is assessed against the above criteria, highlighting strengths and weaknesses. For instance, while LOV excels in linked data integration, BioPortal offers robust ontology management tools but is less SSH-focused. The Social Science Thesaurus provides rich domain-specific terminologies but has limited interoperability features.
5. Implementation Proposal for Bachelor's Dissertation
For a final dissertation, a student could undertake one of the following projects:
1. Ontology Evaluation: Assess the completeness and usability of a specific SSH ontology using competency questions (Grüninger & Fox, 1995).
2. Integration of Ontologies: Develop a prototype integrating multiple ontologies into an SSH research framework using RDF and SPARQL.
3. Enhancement of an Existing Repository: Propose improvements to an SSH vocabulary repository in terms of structure or usability.
6. Conclusion
Selecting an appropriate ontology repository is crucial for SSH research. This study benchmarks leading repositories, offering insights into their suitability. For students, practical projects in ontology evaluation and integration provide valuable hands-on experience in Semantic Web applications.
References
● Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The Semantic Web. Scientific American.
● Côté, R. G., Jones, P., Apweiler, R., & Hermjakob, H. (2006). The Ontology Lookup Service. BMC Bioinformatics.
● Gandon, F. (2018). A Survey of the Semantic Web. Wiley-ISTE.
● Grüninger, M., & Fox, M. S. (1995). Methodology for the Design and Evaluation of Ontologies. IJCAI Workshop on Basic Ontological Issues in Knowledge Sharing.
● Heath, T., & Bizer, C. (2011). Linked Data: Evolving the Web into a Global Data Space. Morgan & Claypool.
● Kempf, A., et al. (2019). BARTOC: A Registry of Knowledge Organization Systems. International Journal on Digital Libraries.
● Musen, M. A., et al. (2012). BioPortal: Ontologies and Integrated Data Resources. Nucleic Acids Research.
● Vandenbussche, P., et al. (2017). Linked Open Vocabularies (LOV): A Gateway to Reusable Semantic Web Vocabularies. Semantic Web Journal.
● GESIS. (2020). Social Science Thesaurus. Retrieved from https://www.gesis.org/en/research/thesaurus
Literature review suggestion:
● Meijer, Kerim, KNAW Humanities Cluster, and Menzo Windhouwer. "The CLARIAH FAIR Vocabulary Registry." CLARIN Annual Conference Proceedings.
● Hartmann, Jens, Raúl Palma, and Asunción Gómez-Pérez. "Ontology repositories." Handbook on Ontologies (2009): 551-571.
● Baclawski, Kenneth, and Todd Schneider. "The open ontology repository initiative: Requirements and research challenges." Proceedings of the workshop on collaborative construction, management and linking of structured knowledge at the ISWC. 2009.
● Atamanchuk, Viktoriia, and Petro Atamanchuk. "Ontological Modeling in Humanities." International Scientific-Practical Conference "Information Technology for Education, Science and Technics". Cham: Springer Nature Switzerland, 2022.
● Veršić, Ivana Ilijašić, and Julian Ausserhofer. "Social sciences, humanities and their interoperability with the European Open Science Cloud: What is SSHOC?" Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare 72.2 (2019): 383-391.
Evaluation criteria (suggestion)
The evaluation of ontology and vocabulary repositories in SSH is based on several key criteria:
● Coverage and Completeness: The extent to which a repository covers the relevant domain concepts and relationships.
● Semantic Consistency: The logical coherence and absence of contradictions within the ontology or vocabulary.
● Usability and Accessibility: The ease of use, searchability, and availability of documentation for users.
● Interoperability: The ability to integrate with other resources and systems, often measured by adherence to standards like RDF and OWL.
● Maintainability and Sustainability: The long-term viability and update frequency of the repository.
● Domain Specificity: The degree to which the repository is tailored to the specific needs of SSH research.
● Community Engagement: The level of participation and contribution from the SSH community.
Research questions:
1. How do leading ontology repositories compare in terms of scope and coverage, interoperability, usability, community support, and integration with research tools for Social Sciences and Humanities (SSH) research?
2. How can multiple ontologies be integrated into an SSH research framework using RDF and SPARQL to enhance knowledge management and data integration?
3. What improvements can be made to the structure and usability of existing SSH vocabulary repositories to better serve the needs of researchers?
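As one illustration of how research question 2 might be prototyped, the sketch below (in R) loads a small vocabulary file and runs a SPARQL query over it. Both the package choice (the rOpenSci rdflib package) and the local Turtle file name ssh_vocab.ttl are assumptions made for illustration; they are not part of the proposal above.

library(rdflib)                                            # assumed: rOpenSci rdflib package

vocab <- rdf_parse("ssh_vocab.ttl", format = "turtle")     # hypothetical local vocabulary file

query <- '
  PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
  SELECT ?concept ?label
  WHERE { ?concept a skos:Concept ; skos:prefLabel ?label . }
'
concepts <- rdf_query(vocab, query)                        # returns a data frame of concepts and labels
head(concepts)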

$25.00 View

[SOLVED] EC371 midterm examination outline Summer 2025

EC371 midterm examination outline. Summer 2025. Department of Economics.
General information: The midterm is scheduled in class on June 3. The time of the exam is 75 minutes. Students are allowed to bring one hand-written help sheet: one sheet of paper (stapled multiple sheets are not allowed), double-sided, any size. Printed or photocopied help sheets, or sheets written by someone else, will not be allowed. No phones and no smart watches, please.
Question format: The essay questions of the exam will be administered on paper (accounting for 50% of the exam score), while the problem set(s) will be administered online (accounting for 50% of the exam score), so please bring your laptops to the exam. A help sheet will be allowed (requirements: only one sheet of any size, handwritten by students themselves only; no photocopies and no printouts). The midterm review is scheduled in class on June 2 at 4.30pm-7pm (in KCB 104, our regular classroom). My best wishes for the exam!
What to read:
1) Chapters 3, 6, 8, 9.1-9.4 in the Harris & Roach textbook.
2) Lecture notes and research papers on Blackboard.
3) Homework assignments – solutions (Blackboard, just click on the homework test link).
4) Practice tests and quiz solutions (on Blackboard).
Concepts to focus on:
Chapter 3 and its two appendices:
• Externalities, definitions and types of externalities;
• Externalities as a market failure – market failure due to an externality is a failure of the price mechanism to deal with external costs or external benefits, resulting in a net social loss: overproduction under a negative (pollution) externality and underproduction under a positive externality (e.g., ecological services provided by wetlands);
• Internalizing externalities via various policy tools (specific tax, quota, subsidy);
• Why reducing pollution levels to zero is unlikely to be economically efficient – use a graph showing a marginal benefit curve and a marginal cost curve to support your explanation (see the problem sets after chapter 3 in your textbook);
• A per-unit pollution tax (Pigovian) and a per-unit subsidy as policy tools to internalize an externality;
• The upstream tax – meaning and goal;
• The major reason why a profit tax would not be a correct tool to internalize a pollution externality;
• Be able to provide some examples of positive and negative externalities.
• Graphical representation and computation of the areas corresponding to marginal and total external costs, marginal private costs, marginal social costs, marginal private benefits, marginal social benefits; net social loss (deadweight loss), net social gain, market equilibrium (a small worked numeric example appears at the end of this outline);
• Two approaches to compute the social welfare in a graph of supply and demand (SW = TWTP - TC = CS + PS);
• Market failure as deviation from social efficiency;
• Efficiency as maximization of total net benefit (i.e., of the sum of consumer and producer surpluses).
• Holdout and free-rider effects – definitions and examples.
• Efficiency and DWL under competition versus under monopoly: with and without pollution externalities.
• The problem sets after chapter 3 in your textbook.
• The Coase theorem and its assumptions (i.e., wealth effects and transaction costs are both zero);
• Transaction costs – definition (costs of reaching an agreement and costs of negotiating);
• An example of the application of the Coasian theory in recent environmental policy making.
• Equity implications and the Coasian approach to environmental externalities.
Chapter 6.
What is non-market valuation; description of the following methods – replacement cost, contingent valuation method, travel cost method, hedonic price method. Why do we need non-market valuation; which methods are direct and which are indirect (hedonic price, contingent valuation, travel cost methods)? Three types of value, definitions and examples; willingness to pay and total economic value. Which non-market valuation method is capable of estimating use values and which non-use values? Why the study of Lumpinee park in Bangkok utilizes two valuation methods (because the combination of the methods allows one to obtain estimates of both the use and non-use values of the park).
Chapter 8
Taxonomy of pollutants (both lecture notes and textbook). How do the marginal costs of pollution control and the marginal costs of pollution damage change as pollution levels decrease – graphical representation of the equimarginal principle (i.e., balancing MB & MC to obtain an efficient outcome). Comparison of pollution policies (preference for quantity-based policies (regulatory standards & cap-and-trade systems) over price-based (taxes) policies if pollutants are toxic or hazardous (e.g., methyl mercury, which can cause serious nerve damage) in order to avoid local hotspots (locally high levels of pollution surrounding a high-emitting plant); [note: the discussion in the book is somewhat confusing – it suggests using tradable permits for methyl mercury, whereas what is needed is a strict pollution standard]; technology-based regulation policies requiring firms to use a specific type of technology). Tradable permits: advantages and drawbacks. Pollution taxes & subsidies: advantages and drawbacks. Regulatory approach (i.e., Uniform Emissions Standards): advantages and drawbacks. The impact of technological change on pollution taxes and tradable pollution permits. Problem set after the chapter (homework answers are on BB in the test format).
Chapter 9 (sections 9.1, 9.2, 9.3, 9.4 only)
The definition of natural capital (i.e., the available endowment of natural resources including land, minerals, forests, water, soil, air, fisheries, wildlife, and ecological life-support systems (such as wetlands)). Concepts related to accounting for changes in natural capital in national accounting: gross investment = adding to productive capital over time; net investment = gross investment minus depreciation (the monetary deduction in national accounting for the wearing-out of capital over time); natural capital depreciation = the monetary deduction in national accounting for the loss of natural capital, such as a reduction in the supply of timber, wildlife habitat, or mineral resources. Example of a question: Cutting down forests and converting them to timber enters the national income accounts as a positive contribution to income, equal to the value of timber … Does the existing system of national accounting in the USA account for the loss of standing forest as an economic resource? Does the existing system of national accounting in the USA account for the loss of standing forest in terms of its ecological value?
The concept of substitutability of natural and human-made capital (the ability of one resource or input to substitute for another; in particular, the ability of human-made capital to compensate for natural capital depletion); the concept of complementarity of natural and human-made capital (the property that both human-made capital and natural capital are needed for effective production/consumption). The general principle of natural capital sustainability (conserving natural capital by limiting depletion rates and investing in resource renewal). The difference between the strong concept of sustainability (which does not allow for any kind of substitution between natural and human-made capital, and therefore requires that natural capital levels be maintained) and the weak concept (which allows for some substitution between natural and human-made capital, so that natural capital depletion is justified as long as it is compensated for by increases in human-made capital).
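To make the deadweight-loss bullet in the Chapter 3 list above concrete, here is a small worked example computed in R. The linear demand and supply curves and the constant marginal external cost are hypothetical numbers chosen for illustration, not taken from the course materials; it shows the market equilibrium, the social optimum, the deadweight loss under a negative externality, and the corresponding Pigovian tax.

# Hypothetical example: demand P = 100 - Q, private marginal cost P = 20 + Q,
# constant marginal external cost MEC = 10, so marginal social cost MSC = 30 + Q.
demand <- function(q) 100 - q
mpc    <- function(q) 20 + q
mec    <- 10
msc    <- function(q) mpc(q) + mec

q_market  <- uniroot(function(q) demand(q) - mpc(q), c(0, 100))$root   # demand = MPC, so Q = 40
q_optimal <- uniroot(function(q) demand(q) - msc(q), c(0, 100))$root   # demand = MSC, so Q = 35

# With a constant MEC, the deadweight loss is the triangle between MSC and demand
# over the over-produced units, and the optimal Pigovian tax equals the MEC.
dwl <- 0.5 * (q_market - q_optimal) * mec        # 0.5 * 5 * 10 = 25
pigovian_tax <- mec                              # per-unit tax of 10

c(q_market = q_market, q_optimal = q_optimal, deadweight_loss = dwl, tax = pigovian_tax)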

$25.00 View