Assignment Chef


Assignment catalog

33,401 assignments available

[SOLVED] MIME 473 – Winter 2025 Assignment #5

MIME 473 – Winter 2025 Assignment #5. Due: 11:59pm, Feb 19.

NOTE: This assignment requires you to write LAMMPS scripts and conduct molecular statics (MS) simulations. For questions requiring you to submit LAMMPS scripts, you will ONLY GET POINTS when the LAMMPS script runs correctly and produces the numerical results you stated.

Question 1 [ET.1]: [6pts] Consult the tutorials on cohesive energy and vacancy formation. Based on the Ni example, please do the following:
(a) Similar to what's done in the tutorial, construct a fully periodic, rectangular simulation box, but with the box edges (see Figure 1) along the [110], [-112] and [1 -11] directions respectively. Conduct MS simulations to i) obtain the equilibrium cohesive energy E0 [1 pt]; ii) then create a single vacancy and obtain the vacancy formation energy [2 pts]. Please submit your LAMMPS script, named YourMcGillID_Q1a.txt.
(b) [2pts] Similar to question (a) above, but instead of removing a single atom to create a single vacancy, add an atom at an octahedral interstitial site to create a self-interstitial defect. Please conduct an MS simulation to obtain the formation energy of the self-interstitial defect. Please submit your LAMMPS script, named YourMcGillID_Q1b.txt.
(c) [1pt] Based on your vacancy and self-interstitial formation energy values (and assuming that they are not temperature dependent), if we know that at T = 800 a single-crystal Ni sample has a total of 1.5 × 10^20 vacancies, please determine the number of self-interstitial defects in this sample. NOTE: assume that both vacancies and self-interstitials are thermally activated, and that there is no interaction between individual defects. You may check Self-exercise #1 if needed.
For (a) and (b) above, please briefly explain what you did and discuss the simulation results you obtained (the explanation can weigh as much as 1.5 pts). Also please ensure that you use appropriate simulation dimensions for each direction.
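The bookkeeping behind parts (a) and (b) follows the standard definition E_f^vac = E(defect box) − (N − 1)·E0, where E0 is the per-atom cohesive energy of the perfect crystal. A minimal sketch of that arithmetic (the atom count and total energies below are made-up placeholders, not LAMMPS results; plug in the values your own MS runs produce):

```python
# Bookkeeping for a vacancy-formation-energy calculation.
# The numbers are made-up placeholders, NOT simulation output.
N = 4000                # atoms in the perfect, fully periodic box
E_perfect = -17800.0    # total energy of the perfect box (eV), hypothetical
E_vacancy = -17794.1    # total energy after removing one atom and relaxing (eV), hypothetical

E0 = E_perfect / N                  # cohesive energy per atom
E_f_vac = E_vacancy - (N - 1) * E0  # vacancy formation energy

print(f"E0      = {E0:.4f} eV/atom")
print(f"E_f^vac = {E_f_vac:.4f} eV")
```

The self-interstitial case in (b) is analogous, with N + 1 atoms in the defect box: E_f^int = E(defect box) − (N + 1)·E0.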
SPECIAL NOTE: Please ensure that you use CORRECT crystalline directions. Using wrong crystalline directions will automatically result in a -50% penalty.

Question 2 [DE.1]: [4pts] Consult the tutorials, develop a LAMMPS script, and design a simulation process to i) construct a fully periodic rectangular simulation box with its two corner points at the coordinates shown in Figure 2; ii) within this simulation box, create two atoms at the (x, y, z) positions given in Figure 2; iii) describe the interaction between them by an LJ potential (please use the parameters "pair_coeff 1 1 0.012 3.5 7.5"). Using the script you created, perform MS simulations and answer the following questions.
a) [3pts] Run for a single step to obtain the potential energy Ep of this two-atom system from your simulation. Please submit your LAMMPS script, named YourMcGillID_Q2a.txt. (Here briefly explain what you did and discuss the simulation results you obtained.)
b) [1pt] Perform a "pencil-and-paper" calculation of the potential energy (please ignore the influence of the tail function). Check whether your hand-calculation result matches the simulation result.
SPECIAL NOTE: Please ensure that you use CORRECT LJ parameters. Using wrong parameters will automatically result in a -50% penalty.

Solution: (NOTE: In your answers, please provide the necessary details, which can weigh as much as 1.5 pts.)

Self-exercise #1: Calculate the number of vacancies per cubic meter in iron at 850°C. The energy for vacancy formation is 1.08 eV. Furthermore, the density and atomic weight for Fe are 7.65 g/cm3 and 55.85 g/mol, respectively.

Self-exercise #2: Practice the following variations of Question 1 above. (i) Change the orientations of the box edges (with the second edge along [111]). How would the cohesive energy and vacancy formation energy change? (ii) Instead of putting the self-interstitial at the octahedral site, how about the tetrahedral site?
How do we determine which one is the preferred site for the self-interstitial atom? (iii) Food for thought: what about a divacancy? Creating a divacancy is similar to question 1(a) above, but instead of removing a single atom to create a single vacancy, remove two adjacent atoms. Please conduct an MS simulation to obtain the formation energy of the divacancy, defined as Ef(2v) = Etot − (N − 2)E0, where Etot denotes the energy of the divacancy-containing system, N denotes the number of atoms before creation of the divacancy, and E0 is the per-atom cohesive energy.
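Self-exercise #1 above reduces to the Arrhenius relation N_v = N exp(−Q_v / kT), with the site density N = N_A·ρ/A. A short sketch plugging in the Fe values quoted in the exercise (the constants are standard; the final count lands near 10^24 vacancies per cubic meter):

```python
import math

N_A = 6.022e23     # Avogadro's number, atoms/mol
k_B = 8.617e-5     # Boltzmann constant, eV/K

Q_v = 1.08         # vacancy formation energy, eV (from the exercise)
T = 850.0 + 273.15 # temperature, K
rho = 7.65         # density of Fe, g/cm^3
A = 55.85          # atomic weight of Fe, g/mol

# Number of atomic sites per cubic meter (1e6 cm^3 per m^3)
N = N_A * rho / A * 1e6

# Equilibrium vacancy concentration (Boltzmann factor)
N_v = N * math.exp(-Q_v / (k_B * T))

print(f"atomic sites: {N:.3e} per m^3")
print(f"vacancies:    {N_v:.3e} per m^3")
```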

$25.00

[SOLVED] CAP 4770 –

Please make a single submission for your group through the "People > Group" section in Canvas. The submission should include two PDF files: 1. A PDF of your 10-page report. 2. A PDF of your code (Jupyter Notebook converted to PDF).

Project A: Traffic

Objective:

Dataset: The NYC Taxi Fare Prediction Dataset provides over 55 million records of taxi trips in New York City from 2009 to 2015. Each record includes essential features like pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, and the target variable, fare_amount. This dataset is valuable for analyzing urban transportation patterns and building machine learning models to predict taxi fares based on spatial-temporal data. The combination of geographic coordinates and time-based features makes it particularly suited for tasks involving spatial and temporal prediction modeling.
You can access the dataset on Kaggle (https://www.kaggle.com/competitions/new-york-city-taxi-fare-prediction/data).

1. Data Preprocessing
• Load and Clean Data:
o Drop rows with missing values and remove outliers based on unrealistic fare amounts or locations (e.g., fares under $2 or pickup/drop-off points outside New York City).
o Extract features from pickup_datetime, such as hour, day, weekday, and month.
• Feature Engineering:
o Distance Calculation: Calculate the distance between pickup and drop-off locations using the Haversine formula.
o Geographical Clustering: Use K-means clustering to cluster the pickup and drop-off locations, assigning each coordinate to a region or cluster.
o Time-based Features: Engineer features like peak hours or rush hours based on time.
• Normalization:
o Normalize or scale numerical features (e.g., distance, latitude, longitude) to improve model convergence.

2. Exploratory Data Analysis (EDA)
• Univariate Analysis:
o Visualize the distribution of the target variable (fare_amount) and other key features, such as distance, pickup hour, and day of the week.
• Bivariate Analysis:
o Explore correlations between fare amount and distance, pickup time, and pickup/drop-off locations.
• Geospatial Analysis:
o Plot pickup and drop-off locations on a New York City map to observe hotspot areas for taxi rides and how they relate to fare amounts.

3. Data Visualization
• Heatmaps:
o Create heatmaps showing the concentration of pickups and drop-offs across NYC.
• Fare Distribution by Time of Day:
o Plot average fare amounts over different times of the day or days of the week.
• Distance vs. Fare Scatter Plot:
o Plot a scatter graph of trip distance against fare to identify the relationship and outliers.

4. Model Building: Neural Network Implementation
• Neural Network Architecture:
o Design a neural network with the following layers:
§ Input layer: Takes in features (distance, pickup and drop-off clusters, day of the week, etc.).
§ Hidden Layers: Include a few dense layers with ReLU activation.
§ Output Layer: Single neuron for regression output predicting fare_amount.
• Training and Validation:
o Split the data into training and validation sets.
o Use Mean Absolute Error (MAE) as the loss function to train the model.
o Implement early stopping and learning rate scheduling to optimize training.
Hint 1: Take care with robust preprocessing and tuning of the neural network to avoid overfitting on this dataset.

5. Model Evaluation and Tuning
• Use the test set to evaluate the model's performance.
• Experiment with hyperparameters (number of layers, learning rate, batch size) to optimize the model.

6. Analysis and Recommendations
1. Spatial-Temporal Analysis: Use geospatial clustering to identify high-density zones for pickups and drop-offs. This can reveal popular areas and times, such as airports, tourist locations, or business districts during rush hours.
2. Peak Demand Analysis: Analyze fare and trip volume distributions across different times of the day and week. High-fare periods could correlate with typical rush hours or weekends.
3. Fare Trends Analysis: Track average fares over different months or years to analyze whether there is seasonality in fare pricing.

Final Report: A detailed report of less than ten pages with the following outline:
1 – Introduction: explaining the problem and dataset, and briefly describing your methodology, findings and insights.
2 – Data preprocessing: explaining all steps taken for data cleaning, preprocessing, and feature engineering.
3 – Methodology: explaining the model architecture, optimization policy, and training process.
4 – Experiments: discussing the process of tuning parameters and evaluating model performance through relevant metrics and error analysis.
5 – Conclusion: summarizing your findings and understanding of the problem.

Deliverables:
1. Code base with step-by-step instructions. (.ipynb file, 70 points)
2. Final Report.
(.pdf file, 30 points)

Project B: Retail

Objective: The project's goal is to replicate a real-world situation in which you study consumer data from e-commerce to create predictive models and extract useful insights. The objectives are to find trends in consumer behaviour, maximise marketing initiatives, and guide corporate plans to improve client retention, boost revenue, and customise marketing strategies. You can make use of any open-source libraries in Python.

Dataset: The UCI Machine Learning Repository's Online Retail II dataset is a real-world dataset that includes transactional data from a UK-based online retailer. It records different facets of retail transactions, including item purchases, customer information, and sales dates. This dataset offers a comprehensive understanding of consumer buying patterns and can be used for a variety of analytics, such as product recommendation systems, sales trends, and customer segmentation. The dataset can be found here: https://archive.ics.uci.edu/dataset/502/online+retail+ii

Key columns include InvoiceNo, StockCode, Description, Quantity, InvoiceDate, UnitPrice, CustomerID, and Country. InvoiceNo is a unique identifier for each transaction, while StockCode and Description provide product-specific information. Quantity and UnitPrice offer insights into the volume and value of each transaction, allowing for revenue calculations. CustomerID enables segmentation and customer-based analyses, while Country supports geographical segmentation.

Project Components & Key Insights:

1. Data Cleaning and Exploration:
• Analyse historical data, such as customer demographics, browsing sessions, product views, cart additions, and completed sales, as part of the initial dataset analysis.
• Address missing values, eliminate duplicates, and fix irregularities.
• To determine the best times to shop, add new features like "Time of Day" and "Day of Week."
• To uncover significant aspects, use visualisation tools to investigate customer trends, distributions, and correlations.

2. Customer Segmentation
Use clustering techniques (such as K-Means or DBSCAN) to divide up your clientele based on their browsing habits, preferences, and purchasing habits.
• Personalised Marketing: Make use of segments to advise tailored promotions (e.g., offering personalised discounts or suggesting products).
• Optimised Communication Times: By examining when each consumer category is most engaged, you can ascertain the ideal times to interact with them.

3. Customer Purchase Prediction:
Using information such as time spent on the website, products viewed, and previous purchases, create a classification model to forecast whether a customer's browsing session will end in a purchase.
• Conversion-Boosting Strategies: To increase conversions, use forecasts to instantly present special offers or free shipping incentives to prospective customers.
• When to Offer a Discount to Cart Abandoners: Determine the best times and kinds of discounts to entice cart abandoners to finish their purchases.

4. Sales Forecasting:
Use attributes such as goods in a cart, average transaction size, and frequency of purchases to create regression models that predict future sales.
• Seasonal Discount Timing: To ensure steady revenue, pinpoint periods of low sales and recommend well-timed discounts or exclusive deals during off-peak hours.

5.
Product Bundling:
Using measures like support, confidence, and lift, use association rule mining (such as the Apriori algorithm) to identify products that are frequently purchased together.
• Product Bundling: Provide discounts for combined items to raise the average order size and recommend product bundles based on consumer purchasing patterns.
• Cross-Promotional Opportunities: Find related items and provide real-time, targeted marketing for customers (e.g., "Customers who bought X also bought Y").

Final Report: A detailed report of less than ten pages with the following outline:
1 – Introduction: Explain the problem and dataset, and briefly describe your methodology, findings and insights.
2 – Data preprocessing: explaining all steps taken for data cleaning, preprocessing, and feature engineering.
3 – Methodology: Explain the model architecture, optimisation policy, and training process.
4 – Experiments: Explain the logic behind using every algorithm that you have used during this project.
5 – Conclusion: Summarize your findings and understanding of the problem.

Deliverables:
1. Models for Clustering, Classification, and Regression: Including code and step-by-step instructions. (30 points)
2. Association Rules and Suggestions: Code, analysis, and recommendations for cross-promotions and product bundling. (40 points)
3. Final Report: An extensive document that includes all methods, conclusions, and practical business suggestions. (30 points)

Project C: Insurance

Objective: The Vehicle Insurance Risk Profiling and Claim Prediction System aims to build an analytical framework to predict claims and assess risk profiles for insured vehicles. Using machine learning and statistical methods, this project will help insurance providers identify high-risk segments and improve claim prediction accuracy, optimizing underwriting and claims management.
Covering the full data science lifecycle, from preprocessing to model development, the project will apply machine learning, clustering, and anomaly detection techniques to deliver insights on risk management and customer segmentation in insurance.

Dataset: Link to the dataset: https://www.kaggle.com/datasets/litvinenko630/insuranceclaims/data

Key Features:
1. Policyholder Information: This includes demographic details such as age, gender, occupation, marital status, and geographical location.
2. Claim History: Information regarding past insurance claims, including claim amounts, types of claims (e.g., medical, automobile), frequency of claims, and claim durations.
3. Policy Details: Details about the insurance policies held by the policyholders, such as coverage type, policy duration, premium amount, and deductibles.
4. Risk Factors: Variables indicating potential risk factors associated with policyholders, such as credit score, driving record (for automobile insurance), health status (for medical insurance), and property characteristics (for home insurance).

Data Preprocessing
Objective: Prepare the dataset by cleaning and engineering features to ensure quality inputs for analysis and modeling.
Task 1: Data Cleaning and Initial Processing
◦ Handle Missing Values: Identify and handle missing values in numerical and categorical features. Use mean or median imputation for numerical fields and mode imputation for categorical fields.
◦ Standardization of Engine Specifications: Normalize continuous features to ensure consistent units and scales.
◦ Encoding Categorical Variables: Use one-hot encoding or label encoding for categorical fields.
Task 2: Exploratory Data Analysis (EDA)
◦ Claims Distribution Analysis: Explore how claims (claim_status) vary across different types of vehicles and demographics.
◦ Correlation Analysis: Investigate relationships between continuous features and claim_status using correlation heatmaps.
◦ Regional Claim Analysis: Examine region_code and region_density to understand geographic patterns in claims.

Risk Segmentation
Objective: Segment customers and vehicles based on risk factors using clustering and anomaly detection techniques.
Task 1: Customer Segmentation
◦ K-means Clustering: Group customers into risk profiles based on:
▪ Vehicle specs (e.g., vehicle_age, max_power, safety score)
▪ Customer demographics (customer_age, region_code, region_density)
▪ Policy characteristics (subscription_length)
◦ Hierarchical Clustering: Create nested risk categories within the broader clusters to capture nuanced risk profiles.
Task 2: Anomaly Detection
◦ Isolation Forest: Detect unusual vehicle-claim patterns (e.g., high-power vehicles with no safety features having low claim rates).
◦ DBSCAN: Identify outlier claim behaviors in specific regions (e.g., regions with low claim frequency but high claim amounts).

Predictive Modeling
Objective: Develop classification models to predict claim_status and identify key risk factors.
Task 1: Classification Model
◦ Develop a Neural Network to model intricate relationships in high-dimensional data.
◦ Feature Importance Analysis: Identify which features contribute most significantly to predicting claim_status.
Task 2: Model Evaluation
◦ Performance Metrics: Evaluate models using:
▪ Precision-Recall Curves for insight on imbalanced data.
▪ ROC Curve and AUC Score to assess overall performance.
▪ Confusion Matrix for detailed accuracy and error analysis.

Pattern Mining
Objective: Identify associations and sequences in claim-related behaviors for improved risk assessment.
Task 1: Association Rule Mining
◦ Mining Relationships: Discover connections between vehicle features and claim frequency.
◦ Metrics: Calculate support, confidence, and lift to quantify rule strength and relevance for patterns like:
▪ High-power vehicles with low safety scores having frequent claims.
▪ Specific regions with higher claim incidences.
Task 2: Sequential Pattern Analysis
◦ Subscription Analysis: Examine how claim patterns evolve over subscription_length.
◦ High-Risk Feature Combinations: Identify combinations of features (e.g., an old vehicle with no airbags in a high-density region) that correlate with higher claims.

Final Report: A detailed report of less than ten pages with the following outline:
1. Introduction: Explain the problem and dataset, and briefly describe your methodology, findings and insights.
2. Data preprocessing: explaining all steps taken for data cleaning, preprocessing, and feature engineering.
3. Methodology: Explain the model architecture, optimisation policy, and training process.
4. Experiments: Explain the logic behind using every algorithm that you have used during this project.
5. Discussion: Cover all requested tasks on visualisations and analyses, providing insights.
6. Conclusion: Summarize your findings and understanding of the problem.

Deliverables:
1. Models for Clustering, Classification and Neural Network: Including code and step-by-step instructions. (40 points)
2. Pattern Mining: Code, analysis, and recommendations for cross-promotions and product bundling. (30 points)
3. Final Report: An extensive document that includes all methods, conclusions, and practical business suggestions. (30 points)
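The support, confidence, and lift metrics used in Project B's bundling step and Project C's pattern-mining task have direct definitions that can be computed without any library. A toy sketch over made-up transactions (the item names and baskets are invented for illustration; on the real datasets you would build the baskets from InvoiceNo groups or claim records):

```python
# Association-rule metrics on hypothetical transactions.
transactions = [
    {"mug", "candle", "bag"},
    {"mug", "candle"},
    {"mug", "bag"},
    {"candle", "bag"},
    {"mug", "candle", "bag"},
]

def support(itemset):
    """Fraction of transactions containing every item in the set."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent)."""
    return support(set(antecedent) | set(consequent)) / support(antecedent)

def lift(antecedent, consequent):
    """Confidence relative to the consequent's baseline frequency (>1 suggests a real association)."""
    return confidence(antecedent, consequent) / support(consequent)

# Rule: mug -> candle
print(support({"mug", "candle"}))       # 3/5 = 0.6
print(confidence({"mug"}, {"candle"}))  # 0.6 / 0.8 = 0.75
print(lift({"mug"}, {"candle"}))        # 0.75 / 0.8 = 0.9375
```

An Apriori implementation only adds the candidate-generation loop on top of these three functions; the metrics themselves are what the rubric asks you to report.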

$25.00

[SOLVED] ARL – Problem: Finding the optimal policy in a Markov Decision Process (MDP)

Task: As part of your mission, you will work on a fundamental problem in reinforcement learning: finding the optimal policy in a Markov Decision Process (MDP). This problem is crucial for understanding decision-making in uncertain environments, such as autonomous navigation and robotics.

Background
An MDP is a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of a decision-maker. An MDP is defined by:
States (nS): There are 16 states representing a 4×4 grid.
Actions (nA): Four actions are available in each state, corresponding to moving West, South, East, and North.
Transition Model (P): A dictionary where each state-action pair maps to a list of possible outcomes. Each outcome consists of:
- Probability: The probability of transitioning to the next state.
- Next State: The state reached after the action.
- Reward: The immediate reward received after taking the action (for simplicity here it will be determined only by the next state).
Rewards (R): Immediate rewards for transitioning from one state to another.
The transition model mdp.P is a two-level dictionary where the first key is the state and the second key is the action. The 2D grid cells are associated with indices [0, 1, 2, …, 15] from left to right and top to bottom, as in:
[[ 0  1  2  3]
 [ 4  5  6  7]
 [ 8  9 10 11]
 [12 13 14 15]]
Action indices [0, 1, 2, 3] correspond to West, South, East, and North. mdp.P[state][action] is a list of tuples (probability, next state, reward).
For example:
- P[0][0] = [(0.25, 0, np.int64(0))] (state 0 is the initial state)
For state 5, which corresponds to a hole in the ice, all actions return the same immediate reward; the differences come from the value table (so you need to figure out how to formulate the value table):
P[5][0] = [(0.25, 4, np.int64(0))]
P[5][1] = [(0.25, 9, np.int64(0))]
P[5][2] = [(0.25, 6, np.int64(0))]
P[5][3] = [(0.25, 1, np.int64(0))]
For state 6, some actions lead to a bad state and others lead to a normal state:
P[6][0] = [(0.25, 4, np.int64(-100))]
P[6][1] = [(0.25, 9, np.int64(0))]
P[6][2] = [(0.25, 6, np.int64(-100))]
P[6][3] = [(0.25, 1, np.int64(0))]
Here is an illustrative image of the Frozen Lake environment. In this environment:
- S is the starting point.
- F represents frozen surface where you can safely walk.
- H represents holes, which are bad grids giving a negative reward. However, you can still move normally after falling into a hole.
- G is the goal.
You can start from any grid, even a hole; the term 'hole' just indicates that it's a bad grid with a negative reward. Ending up in a hole means receiving a negative reward, but you can continue moving normally from there.

Problem Statement
You are provided with a simulated environment representing a grid world (Frozen Lake) and an MDP defined by its transition probabilities and rewards. Your task is to implement the OptimalPath function to find the optimal policy that maximizes the expected reward. The goal is to use value iteration, a dynamic programming algorithm, to compute the optimal value function and the corresponding policy.

Environment: Frozen Lake
Overview
The Frozen Lake environment is a classic problem in reinforcement learning where an agent must navigate a grid representing a frozen lake to reach a goal while avoiding holes. The grid is defined as follows:
Grid Size: The environment is an (n x m) grid.
States: Each cell in the grid represents a state, labeled from 0 to (n x m − 1) in row-major order (left to right, top to bottom).
Actions: There are 4 possible actions the agent can take:
0: West (left)
1: South (down)
2: East (right)
3: North (up)

Reward Structure
In this environment, the rewards are defined to encourage the agent to reach the goal while avoiding holes:
- Goal State: Reaching the goal yields a reward of 100.
- Hole States: Falling into a hole yields a reward of -100.
- Other States: All other states yield a reward of 0.

Notes: Your initial position is undefined and you have no prior knowledge about direction, your position, or the position of the target. You are required to return two arrays, both of shape (n x m):
1. The expected value (the expected reward to be gained under some policy).
2. The policy being followed.
Your code must converge after a sufficient number of iterations.
HINT: The problem is greedy within a single iteration but dynamic through the iterations. We are interested only in the final value and are not discounting future rewards.

Instructions
Just implement the step function in the step.py file.

Code Structure and Instructions
This project implements a custom environment using OpenAI's Gymnasium and solves it using Markov Decision Processes (MDPs). The environment simulates a Frozen Lake where the goal is to retrieve a frisbee while avoiding holes in the ice, starting from any grid.

Installation
To get started, clone the repository and install the necessary dependencies.
Prerequisites
- Python 3.8 or higher
- pip package manager
Steps
1. Create a virtual environment (in the Coding_Assignment_02 directory):
python -m venv venv
2. Activate the virtual environment:
On Windows: venv\Scripts\activate
On macOS and Linux: source venv/bin/activate
3. Install the dependencies:
pip install -r requirements.txt

Required: You can only edit the function step in the step.py file. Complete your logic and submit your answer.
Example Output

Iteration | max|V-Vprev| | # chg actions | V[0]
----------+--------------+---------------+--------
        0 |     25.00000 | N/A           | 0.00000
        1 |      6.25000 | 3             | 0.00000
        2 |      1.56250 | 2             | 0.00000

This output shows the progress of the value iteration algorithm in solving the MDP for the Frozen Lake environment. We will use this for testing.

MDP class
mdp: MDP
The MDP class is a representation of a Markov Decision Process (MDP). Here's a breakdown of its components:
- P: A dictionary representing the state transition and reward probabilities. It is structured as a nested dictionary:
  - The first key is the state (an integer).
  - The second key is the action (an integer).
  - The value is a list of tuples, each representing a possible outcome of taking that action in that state:
    - Probability: The probability of this transition.
    - Next state: The resulting state after taking the action.
    - Reward: The reward received after taking the action.
- nS: The number of states in the MDP. For the Frozen Lake environment, this is 16.
- nA: The number of actions available in each state. For the Frozen Lake environment, this is 4 (representing West, South, East, North).

Resources to understand the problem
- MDP wiki
- Frozen Lake as MDP
(Note that our definition is slightly different from the definition you will read here, but it won't make any difference when you come to implement your solution, so relax and solve it as you understand it from the previous resources.)
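The value-iteration loop the assignment asks for can be sketched on a toy MDP. This is not the assignment's solution: the 2-state transition model and the discount factor below are invented for illustration (the assignment uses the 16-state Frozen Lake model in mdp.P and no discounting), but the Bellman backup and greedy policy extraction are the same shape:

```python
# Minimal value iteration on a made-up 2-state, 2-action MDP.
# P[state][action] = list of (probability, next_state, reward), mirroring mdp.P.
P = {
    0: {0: [(1.0, 0, 0)], 1: [(0.8, 1, 10), (0.2, 0, 0)]},
    1: {0: [(1.0, 0, 0)], 1: [(1.0, 1, 10)]},
}
nS, nA = 2, 2
gamma = 0.9  # discount factor; shown for generality (the assignment uses no discounting)

V = [0.0] * nS
for _ in range(200):
    V_new = []
    for s in range(nS):
        # Bellman optimality backup: best expected one-step return from s
        q = [sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in range(nA)]
        V_new.append(max(q))
    if max(abs(a - b) for a, b in zip(V_new, V)) < 1e-9:
        V = V_new
        break
    V = V_new

# Greedy policy extraction from the converged value table
policy = [
    max(range(nA), key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]))
    for s in range(nS)
]
print(V, policy)
```

This matches the hint in the text: each sweep is a greedy max over actions, but the value table carries information forward across sweeps, which is the dynamic-programming part.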

$25.00

[SOLVED] ARL – Coding assignment 1 (stock growth prediction and visualization)

Problem Description: Suppose you have two stock values, Stock A and Stock B. Stock A and Stock B start at values a and b respectively. It is guaranteed that the initial value of Stock A is less than or equal to the initial value of Stock B. Stock A grows rapidly and its value triples after every year, while Stock B's value doubles after every year. After how many full years will the value of Stock A become strictly larger than the value of Stock B? In addition to calculating the number of years, you are required to visualize the growth of the stocks each year using Matplotlib.

Input: The only line of the input contains two integers a and b (1 ≤ a ≤ b ≤ 10), the initial value of Stock A and the initial value of Stock B respectively.

Output: Print one integer, denoting the number of full years after which the value of Stock A will become strictly larger than the value of Stock B.

Visualization: Develop a function named plot_horizontal_bars that creates an animated horizontal bar plot with two bars named "A" and "B" representing Stock A and Stock B. The function should update the values of these bars over a specified number of iterations n, which represents the number of years, and display the current year count on the side of the plot as shown in the examples. The plotting stops when the current year reaches the right value, then saves the animation as a GIF file. Note: Each bar should be depicted with a different color.

Examples:
Input: 4 7
Output: 2
Input: 4 9
Output: 3
Input: 1 1
Output: 1

Notes: The provided example visualizations are for illustrative purposes. When implementing your solution, ensure you create similar plots based on the input values.

Requirements: pip install matplotlib

Instructions: Implement your solution in the sol.py file. Don't modify the main function in the sol.py file.
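The year count itself reduces to a short loop, independent of the required plot_horizontal_bars animation (the function name below is illustrative, not part of the assignment's required interface):

```python
def years_until_a_exceeds_b(a: int, b: int) -> int:
    """Count full years until Stock A (tripling yearly) strictly exceeds Stock B (doubling yearly)."""
    years = 0
    while a <= b:   # strict inequality is required, so keep going while A <= B
        a *= 3
        b *= 2
        years += 1
    return years

print(years_until_a_exceeds_b(4, 7))  # 2
print(years_until_a_exceeds_b(4, 9))  # 3
print(years_until_a_exceeds_b(1, 1))  # 1
```

Note the third example: equal starting values still require one full year, because "strictly larger" means a tie (as after year 2 in the 4 9 case, when both reach 36) does not stop the loop.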

$25.00

[SOLVED] ISTA 331 Assignment 8: Penalized linear models

1. Introduction
In HW 4, we fit linear and nonlinear models by minimizing the mean square error:

MSE = (1/N) Σ_i (y_i − ŷ_i)²

This is a measure of how well the model fits the data. By fixing a function that depends on some parameters, ŷ = f(x; β), we can find the parameter vector β for which the RSS is smallest. We saw in HW 7 that this may be accomplished by gradient descent even when the model doesn't allow us to solve the optimization problem directly.
Note: you might conceptualize this as minimizing RMSE instead. The two give equivalent answers, but RSS is easier computationally because you don't need to take the square root. Below, though, as we modify the cost function, we'll specifically want RSS, not RMSE.

1.1. Improving models by shrinking parameters. It turns out we can often improve our models by making the entries of the parameter vector β smaller (closer to 0). This works in two ways:
• Bias-variance tradeoff. We discussed in a lecture video the idea of bias-variance tradeoff. Ordinary least squares is an unbiased estimate. Preferring smaller coefficients introduces some bias but also reduces variance, and we can come out ahead in the tradeoff.
• Interpretability. In some cases we may have a very large number of predictors, but only some of them are useful in making predictions. By removing some predictors, we may slightly reduce the variance of our model and also make it easier to understand.

1.2. Bias-variance tradeoffs. Penalized regression models are all attempts to exploit the bias-variance tradeoff and improve the accuracy of future predictions. Error in future predictions is called generalization error and can, broadly, be broken down into three parts:
• irreducible error: error due to the inherent random variability in the target variable. In general, there's nothing we can do about this; the best possible model may still have some irreducible error.
• bias: systematic error due to a model failing to capture a feature or relationship in the data.
This is generally a result of incorrect assumptions, using a model type that isn't well suited to the task, or using a model that doesn't have enough flexibility to describe the data; in short, it is underfitting the training data. Bias can often be reduced by increasing the number of parameters and thereby allowing the model to fit the training data more closely.
• variance: random error due to a model's sensitivity to the training data. A highly flexible model (one with a lot of parameters) may produce very different fits for different samples of training data. If this happens, the model is fitting the random variation in the training data rather than the true relationship; in short, it is overfitting the training data. Variance may be reduced by using a simpler model.
More complex models often reduce bias but increase variance. This is known as the bias-variance tradeoff. It is a provable fact that if there is truly a linear relationship between a set of predictors x and a target variable y, that is,

y = β0 + β1 x1 + … + βm xm + noise

then ordinary least squares (OLS) gives unbiased estimates of the parameters β, and moreover, OLS has the lowest variance among all possible unbiased models (this fact is called the Gauss-Markov theorem). However, the surprising fact is that the best possible unbiased model is not necessarily the best possible model overall. Because of the bias-variance tradeoff, it may be the case that introducing a modest amount of bias can improve generalization error. This is especially true when the training data is limited, which is when the variance of OLS will be highest.

1.3. Ridge regression. Ridge regression is the most "classical" of the common penalized linear models. In this model, we replace the usual cost function for ordinary least squares with

L_ridge(β) = MSE + α Σ_m β_m²

where α is a tuning parameter. (In some literature this α is denoted λ. I'm using α here because the sklearn implementations call it alpha.)

1.4. Lasso.
The lasso method is similar to ridge regression, but uses the cost function

L_lasso(β) = MSE + α Σₘ |βₘ|

This is a similar basic idea: penalizing the total size of the coefficients. Using the sum of absolute values instead of the sum of squares of the coefficients leads to two major differences:

• Bad news: it's harder to solve the optimization problem. In ridge regression, if the X variables are transformed correctly, there is an algebraic solution for the best coefficient vector. This is not true for the lasso; there is no exact solution. Thus it must be solved using gradient descent or another iterative optimization method.

• Good news: the lasso can do something that ridge regression can't. In ridge regression, no matter how big α is, the model will never set any coefficients to 0. On the other hand, the lasso method can result in some coefficients βᵢ being set to 0. Why is this good? It means that the lasso can perform the task of variable selection: it can suggest which predictors should be dropped from the model entirely. When we have a large number of predictors and it's not obvious which ones are useful, the lasso can help us decide which to use and which to drop.

1.5. Elastic net regularization. Elastic net regularization is an attempt to combine the ridge and lasso methods. The cost function for the elastic net is

L_net(β) = t·L_lasso(β) + (1 − t)·L_ridge(β)

where 0 ≤ t ≤ 1 is a tuning parameter that controls the mixture of the two cost functions. When t = 0, this is just a ridge; when t = 1, it is just a lasso. Note that we still have the tuning parameter α "hidden" in this cost function, so there are two parameters to tune.

¹This fact is called the Gauss-Markov theorem.

1.6. Models in sklearn. Each of the above models is present in sklearn.linear_model, under the classes LinearRegression (OLS), Ridge, Lasso, and ElasticNet. Documentation for these can be found at

• https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
• https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
• https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html
• https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html

The important parts are that each exposes fit and predict methods that we can use for model fitting and for making predictions, and that the three regularized models take a parameter alpha (passed at model initialization). ElasticNet also takes a value for the parameter t under the name l1_ratio.

2. Instructions

2.1. Code and submission. Create a Python module called penalized.py and upload it to the assignment folder on D2L. Do the following imports:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error

Then implement the functions below. Your main function must run when I type python penalized.py at the terminal.

2.2. Documentation. Your script must contain a header docstring containing your name, ISTA 331, the date, and a brief description of the module. Each function must contain a docstring. Each function docstring should include a description of the function's purpose; the name, type, and purpose of each parameter; and the type and meaning of the function's return value.

2.3. Grading. Your submission will be human-graded. I'll run the script with python penalized.py and look at the output produced, and then read the code. Make sure that the script runs! I will deduct points if I have to chase down syntax errors to get output from your code, or add a call to main, etc. Code should be clear and concise, but you will only lose style points if your code is a real mess. Include inline comments to explain tricky lines and summarize sections of code.

2.4.
Collaboration. Collaboration is not permitted on this assignment. However, you may use your own notes; documentation for sklearn, numpy, and pandas; and any resources available on the D2L site (including videos, worksheets, Jupyter notebooks, etc.).

3. Function specifications

• get_frames: This function takes no arguments, loads the Boston housing data set from Boston Housing.csv, and splits it into training and testing X and y data frames/series. Set the random seed using np.random.seed(95) and then use np.random.choice to select 100 rows from the data frame (without replacement!) to be the training set. The remainder of the data frame will be the testing set. The target variable is the median housing price, MEDV, so drop this from your training and testing frames to produce the training and testing X; and take train['MEDV'] and test['MEDV'] to be your training and testing y. Return the training X, training y, testing X, and testing y.

• best_ridge: This function takes a training X, a training y, and an array or list of alpha values to test. Use 5-fold cross-validation, using cross_val_score with the scoring neg_mean_squared_error (see HW6 if you need a refresher on this), to find which value of alpha performs best for a ridge regression. You can initialize a ridge regression model using Ridge(alpha = xxxx). Return the best value of alpha out of the ones considered.

• best_lasso: Same as the previous function, but create a lasso model using Lasso(alpha = xxxx).

• best_net: Same as the previous functions, but create an elastic net model using ElasticNet(alpha = xxxx). The tuning parameter which I called t above is called l1_ratio here, but you can leave it at the default value of 0.5.

• main: Load the data frames and split them into training and testing data. Use best_ridge, best_lasso, and best_net to find good values of alpha. It's up to you to decide what values of alpha are reasonable to try, and this may require a bit of experimentation.
Then initialize one instance of each model type, using the alpha values you found. Once the models have been trained, evaluate them all by calling predict with your testing X. Calculate the RMSE of the predictions relative to the testing y values. Print your results in the following format:

Summary:

Ordinary Least Squares
    RMSE: xxxx

Ridge Regression
    Best alpha value: xxxx
    RMSE: xxxx

Lasso
    Best alpha value: xxxx
    RMSE: xxxx

Elastic Net
    Best alpha value: xxxx
    RMSE: xxxx

The best test accuracy was achieved by: [model name]

where xxxx is replaced by the appropriate numerical value and [model name] is the model which gave the smallest RMSE on the testing set. (This is human-graded, so don't worry about getting the formatting perfect.) A small part of your score will come from finding good parameter values for each model and determining which model appears to perform best in this scenario.
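The alpha search and RMSE summary described above can be sketched as follows. This is a minimal illustration on synthetic data; best_alpha and rmse are hypothetical helper names, not the graded functions:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score

def best_alpha(model_class, X, y, alphas):
    """Return the alpha whose mean 5-fold neg-MSE score is highest."""
    scores = [cross_val_score(model_class(alpha=a), X, y, cv=5,
                              scoring='neg_mean_squared_error').mean()
              for a in alphas]
    return alphas[int(np.argmax(scores))]

def rmse(y_true, y_pred):
    # RMSE is the square root of the MSE reported by sklearn.
    return np.sqrt(mean_squared_error(y_true, y_pred))

# Synthetic stand-in for the Boston housing split.
rng = np.random.default_rng(95)
X = rng.normal(size=(100, 5))
y = X[:, 0] + rng.normal(scale=0.5, size=100)

alpha = best_alpha(Ridge, X, y, [0.01, 0.1, 1.0, 10.0])
model = Ridge(alpha=alpha).fit(X, y)
print('Best alpha value:', alpha)
print('RMSE:', round(rmse(y, model.predict(X)), 4))
```

The same loop works unchanged for Lasso and ElasticNet, since all three take alpha at initialization.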


[SOLVED] Ista 331 hw 7: linear classifiers

1.1. Introduction. In this homework, we explore three classification methods based on linear models. The first is logistic regression, which is a modification of the standard least-squares linear model. The others are two types of support vector machine. All of these can be trained by stochastic gradient descent. (We didn't need iterative methods for ordinary least squares, but the logit transformation in logistic regression changes the optimal solution, and the result is no longer something we can calculate directly with linear algebra.)

Our application here is the famous MNIST handwritten digit data set, one of the classic benchmark problems in machine learning. The data is a collection of 70,000 images of handwritten numerical digits, 0-9, collected from US Postal Service scans. The objective is to train an algorithm that is able to correctly identify which digit an image represents. Here are a few examples of images from the data set:

1.2. Instructions. Create a module named hw7.py. Start your module with the following imports, then code up the functions as specified below, and upload your module to the D2L Hw7 assignments folder.

from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix
import numpy as np
import matplotlib.pyplot as plt
import scipy.io as sio

1.3. Testing. Download hw7_test.py, mnist-original.mat, and the auxiliary testing files and put them in the same folder as your hw7.py module. Each of the 6 functions, not including main and plot_probability_matrices, is worth 16% of your correctness score. main and plot_probability_matrices together are worth 20% of your grade. You can examine the test module in a text editor to understand better what your code should do. The test module should be considered part of the spec.

1.4. Documentation.
Your module must contain a header docstring containing your name, your section leader's name, the date, ISTA 331 Hw7, and a brief summary of the module. Each function must contain a docstring. Each docstring should include a description of the function's purpose; the name, type, and purpose of each parameter; and the type and meaning of the function's return value.

1.5. Grading. Your module will be graded on correctness, documentation, and coding style. Code should be clear and concise. You will only lose style points if your code is a real mess. Include inline comments to explain tricky lines and summarize sections of code.

1.6. Collaboration. Collaboration is allowed. You are responsible for your learning. Depending too much on others will hurt you on the tests. "Helping" others too much harms them in reality. Cite any sources/collaborators in your header docstring. Leaving this out is dishonest.

1.7. Resources.

• https://www.openml.org/search?type=data
• https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
• https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
• https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html

2. Function specifications

• get_data: this function takes no arguments and returns two arrays: X, a 70,000 × 784 2D array, and y, a 1D array with 70,000 elements. To see how to get the data, open a shell and type in these commands:

import scipy.io as sio
mnist = sio.loadmat('mnist-original.mat')

Of course your current working directory will have to contain the file. There are other ways that you might figure out to load the data, but bear in mind that if your code won't run because a server is down, you get 0 points. Explore the mnist variable to find what you need to return as your X and y. To get the shapes of the two arrays right you might have to do a little work on them before returning them.
• get_train_and_test_sets: this function takes X and y as created by the previous function. The first 60,000 instances of each are for training, the rest for testing. The function breaks both into separate training and testing X's and y's. Use np.random.permutation to get a shuffled sequence of indices and then use these indices to shuffle the training X and y. It then returns the training X, the testing X, the training y, and the testing y, in that order.

• train_to_data: this function takes a training X, a training y, and a string containing the name of the model. If the model name is 'SGD', make an SGDClassifier with max_iter = 200 and a tolerance of 0.001 (remember, the default SGDClassifier is a linear SVM). If the model name is 'SVM', make an SVC with kernel = 'poly'. Otherwise, make a LogisticRegression with multi_class = 'multinomial' and solver = 'lbfgs'. Then fit the model to the training data. All of these classifiers expose a fit method that behaves like that of DecisionTreeClassifier, etc.

– For the SGDClassifier, use the first 10,000 elements of the training set.
– For the LogisticRegression, use the entire training set.
– For the SVC, use the first 10,000 elements of the training set.

This is just to save time. In real life you'd use all 60,000 elements of the training set, but SVC especially can be a bit slow fitting to larger data sets, and it performs pretty well with only 10,000 examples. This makes the shuffling process really important, though, because if the training set is ordered then this sample of 10,000 will only contain 0's and 1's! Fit the model to the data and return it. (Note: the LogisticRegression will give a warning about the solver not converging. Don't worry about it.)

• get_confusion_matrix: this function takes a model, an X, and a y. Use the model's predict method to obtain predictions for this X and make a confusion matrix out of the y vector and your predictions.
Return the matrix.

• probability_matrix: this function takes a confusion matrix and returns a probability matrix. Do not modify the original confusion matrix. Remember, row i of the confusion matrix contains the predictions for all of the digits that were actually i. Make and return a new matrix of floats where each number in position [i, j] is the estimated conditional probability that j was predicted given that i was the label (correct value). Round your probabilities to 3 decimal places (for prettier printing later).

• plot_probability_matrices: this function takes three probability matrices and produces a plot that looks like this when displayed (do not call plt.show in this function in your submitted version; you'll call plt.show in your main function instead). You will need to use plt.subplots to create the layout and axes.matshow to display the matrices. This plot is worth 20 pts. It will show up when you run the test, but the test doesn't grade it. The test calls your main, which must work right for the plot to show up.

• main: get your data, split it into training and testing sets, and train an SGDClassifier, a LogisticRegression model, and an SVC using your train_to_data function. Get confusion matrices for each of them, then probability matrices. Make your matrices plot. Put these lines in your code, which will print your matrices in a nice format:

for mod in (('Linear SVM:', probability_matrix(sgd_cmat)),
            ('Logistic Regression:', probability_matrix(soft_cmat)),
            ('Polynomial SVM:', probability_matrix(svm_cmat))):
    print(*mod, sep = '\n')

Here, of course, sgd_cmat, etc., are the confusion matrices from each of the three models. Rename variables as needed. Finally, call plt.show.

Once you have written these functions, you have completed the graded form of the assignment. However, to get a feel for the creatures you have created, I encourage you to try the following function, plot_example.
This function takes three or four arguments: an X, a y, a list of three models (intended to be one of each class), and an index with a default value of None. It displays an image from the MNIST data set, along with its correct label and the three predictions. If you run this a few dozen times (try it in a loop in a Jupyter notebook) you can see what the real-life data looks like, and some of the patterns on which the models fail. Sometimes they (especially the linear SVM) make weird errors, sometimes they are bad at recognizing rare forms (e.g. crossed 7's), and sometimes the digits are just kind of messed up and hard to ID even for a human!

def plot_example(X, y, model_list, index = None):
    if index is None:
        index = np.random.choice(np.arange(X.shape[0]), 1)
    example = X[index, :]
    prediction_list = [model.predict(example) for model in model_list]
    plt.matshow(example.reshape(28, 28), cmap = plt.cm.gray)
    plt.title('Correct label: ' + str(y[index]) +
              '\nLinear SVM Prediction: ' + str(prediction_list[0]) +
              '\nLogistic Regression Prediction: ' + str(prediction_list[1]) +
              '\nPolynomial SVM Prediction: ' + str(prediction_list[2]))
    plt.show()
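The row-normalization behind probability_matrix can be sketched on a toy 2×2 confusion matrix. This is a sketch of the idea, not necessarily the graded implementation:

```python
import numpy as np

def probability_matrix(cmat):
    """P[i, j] = estimated P(predicted j | true label i), rounded to 3 places."""
    cmat = np.asarray(cmat, dtype=float)               # makes a copy, so the original is untouched
    return np.round(cmat / cmat.sum(axis=1, keepdims=True), 3)

cmat = np.array([[8, 2],    # true 0: predicted 0 eight times, predicted 1 twice
                 [1, 9]])   # true 1: predicted 0 once, predicted 1 nine times
print(probability_matrix(cmat))   # each row sums to (approximately) 1
```

A good classifier puts most of each row's mass on the diagonal, which is why axes.matshow of these matrices is bright along the diagonal.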


[SOLVED] Ista 331 hw 6: trees for classification and regression

In this assignment, you'll train and evaluate some decision trees and random forests for classification and regression tasks.

2.1. Code and submission. Create a Python module called hw6.py and upload it to the assignment folder on D2L. You will need the following import statements:

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix

Then implement the 9 functions described in the specifications below.

2.2. Documentation. Your script must contain a header docstring containing your name, ISTA 331, the date, and a brief description of the module. Each function must contain a docstring. Each function docstring should include a description of the function's purpose; the name, type, and purpose of each parameter; and the type and meaning of the function's return value.

2.3. Testing and required files. You'll need the test_hw6.py test script, and the following data files: training.csv, testing.csv, bikes.csv.

2.4. Grading. Your submission will be graded by running it through the test script, examination of the plots, and examination of your code. Code should be clear and concise, but you will only lose style points if your code is a real mess. Include inline comments to explain tricky lines and summarize sections of code. The test script grades the returned values of all functions below except for plot_confusion_matrix (which doesn't return anything). These are, collectively, worth 80% of your grade. The remaining 20% is based on the plots.

2.5. Collaboration. Collaboration is allowed. You are responsible for your learning. Depending too much on others will hurt you on the tests. "Helping" others too much harms them in reality. Cite any sources/collaborators in your header docstring. Leaving this out is dishonest.

3.1. Classification trees.
• get_classification_frames: this function takes no arguments. It creates DataFrames from the training.csv and testing.csv files. Use only the first 10 columns. The data frame should look like this: This data is satellite-based spectroscopy data, attempting to classify four types of land in Japan. The classes are 's' (sugi forest), 'h' (hinoki forest), 'd' (mixed deciduous forest), and 'o' (non-forest land).¹

• get_X_and_y: this function takes a frame and returns a DataFrame of predictors and a vector (Series) of class labels.

• make_and_test_tree: this function takes 5 arguments: a training X, a training y, a testing X, a testing y, and a maximum depth. Initialize and fit a DecisionTreeClassifier on the training data with the given maximum depth. Return a confusion matrix measuring the accuracy of the model on the testing data.

• plot_confusion_matrix: this function takes the same 5 arguments as the previous function. Get a confusion matrix from the previous function and display it using plt.matshow. Pass the parameter cmap = plt.cm.gray. Don't call plt.show() in this function; you'll call it later in main. The resulting confusion matrix plot will look something like this:

¹Sugi and hinoki are types of trees. Not our kind of trees.

3.2. Regression trees.

• get_regression_frame: this function takes no arguments and creates a DataFrame from the bikes.csv file. The data frame should look like this: This is data from a bike sharing program in London. Our target variable is casual, the daily number of non-registered riders.

• get_regression_X_and_y: this function takes the frame created by get_regression_frame and splits it into training and testing X and y. Use np.random.choice to select a random sample of 15000 instances to be the training set. Take the rest to be the testing set. For X, include the predictors ['season', 'holiday', 'workingday', 'weathersit', 'temp', 'atemp', 'hum', 'windspeed']; for y, use the target variable 'casual'.
Return training X, testing X, training y, testing y, in that order.

• make_depth_plot: this function takes four arguments: an X, a y, a maximum depth n, and a keyword representing the model type, either 'tree' or 'forest'. For each integer i between 1 and n inclusive, initialize a model with max_depth = i. Make a DecisionTreeRegressor if the keyword is 'tree', or a RandomForestRegressor if the keyword is 'forest'. Use cross_val_score with cv = 5 (five-fold cross-validation) and scoring = 'neg_mean_squared_error' to evaluate the model on the training data. This uses negative MSE to evaluate the model (negative to make it a 'score' instead of a 'cost'). Note the mean and standard deviation of the 5 scores. When you have evaluated models for all i from 1 to n, plot the average score against i. Don't call plt.show here; you'll call it later in main. Put error bars on the plot using the standard error of the scores (standard deviation divided by the square root of the number of scores). A plot should look something like this: Return the value of i that produced the best score.

• compare_regressors: This function takes a training X, a training y, a testing X, a testing y, and a list containing a DecisionTreeRegressor and a RandomForestRegressor (already fit to the training data). For each model, compute its MSE on the training set. Then compute its R² using the formula

R² = 1 − MSE / Var(y)

Var means variance, which you can calculate with np.var. Next, calculate the model's RMSE on the testing set. Print the results in the following format:

-----------------------------------
Model type: DecisionTreeRegressor
Depth: xxxxx
R^2: xxxxx
Testing RMSE: xxxxx
-----------------------------------
Model type: RandomForestRegressor
Depth: xxxxx
R^2: xxxxx
Testing RMSE: xxxxx

where the x's are replaced by the actual values. Round the floats to 4 decimal places.

3.3. Main function.

• main: get your frames, split them into training and testing data. Plot confusion matrices for the classification task with max depth of 1, then max depth of 5.
Call plt.show() after each call to plot_confusion_matrix. Make the depth plot for the regression data for each type of model (tree and forest), testing depths from 1 to 15. Call plt.show() after each call to make_depth_plot. Finally, create and fit a DecisionTreeRegressor and a RandomForestRegressor on the training data using the best values of max_depth found by the depth plots. Call compare_regressors to compare the two models.
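The R² formula used in compare_regressors can be sketched directly in numpy (illustrative values only; r_squared is a hypothetical helper name):

```python
import numpy as np

def r_squared(y_true, y_pred):
    # R^2 = 1 - MSE / Var(y)
    mse = np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
    return 1 - mse / np.var(y_true)

y = np.array([10.0, 20.0, 30.0, 40.0])
print(r_squared(y, y))                      # perfect predictions give R^2 = 1.0
print(r_squared(y, np.full(4, y.mean())))   # always predicting the mean gives R^2 = 0.0
```

Intuitively, R² compares the model's MSE against the variance of y, i.e. against the MSE of the trivial predict-the-mean model.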


[SOLVED] Ista 331 hw2 a problematic spam filter

Introduction. The first AI application that impacted everyone's daily life was the spam filter. In this assignment, you will use Naïve Bayes to implement a simple spam filter, a technique still in use in many filters: https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering. Ours is overly simple, resulting in some serious drawbacks, but the basics are there.

Instructions. Download get_data.py, put it in your hw2 folder, and run it. Create a module named hw2.py. Below is the spec for two classes containing 10 methods and a main function. Implement them and upload your module to the appropriate D2L Assignments folder.

Testing. Download hw2_test.py and the auxiliary files and put them in the same folder as your hw2.py module. Run it from the command line to see your current correctness score. Each of the 11 methods/functions is worth 9% of your correctness score. You can examine the test module in a text editor to understand better what your code should do. The test module is part of the spec. The test file we will use to grade your program will be different and may uncover failings in your work not evident upon testing with the provided file. Add any necessary tests to make sure your code works in all cases.

Documentation. Your module must contain a header docstring containing your name, your section leader's name, the date, ISTA 331 Hw2, and a brief summary of the module. Each function must contain a docstring. Each function docstring should include a description of the function's purpose; the name, type, and purpose of each parameter; and the type and meaning of the function's return value.

Grading. Your module will be graded on correctness, documentation, and coding style. Code should be clear and concise. You will only lose style points if your code is a real mess. Include inline comments to explain tricky lines and summarize sections of code.

Collaboration. Collaboration is allowed. You are responsible for your learning.
Depending too much on others will hurt you on the tests. "Helping" others too much harms them in reality. Cite any sources/collaborators in your header docstring. Leaving this out is dishonest.

Resources.

https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html
https://scikit-learn.org/stable/modules/feature_extraction.html
https://stackoverflow.com/questions/24647400/what-is-the-best-stemming-method-in-python
https://en.wikipedia.org/wiki/Stemming
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html
https://en.wikipedia.org/wiki/Confusion_matrix
https://docs.python.org/3/library/collections.html

Necessary import statements:

from sklearn.feature_extraction import stop_words
from nltk.stem import SnowballStemmer
from sklearn.metrics import confusion_matrix

Also import math, string, random, and os.

Class, method, and function specifications.

class LabeledData:

Look at the test file in an editor. Implement the functions in this order:

• parse_line
• parse_message
• __init__

The point of LabeledData objects is to hold an N x 1 data matrix X of emails (as a list of strings) and a labels vector (also a list) y, also of length N, containing 0's where the corresponding emails are ham and 1's where they are spam.

__init__: This instance method takes two strings with default arguments of 'data/2002/easy_ham' and 'data/2002/spam', respectively. The first one is a relative path to a corpus of ham emails; the second is a relative path to a corpus of spam emails. The default corpora will be our training data. It also takes a data matrix and a labels vector with default arguments of None. Name their parameters X and y, respectively. The instance variables in the objects created by this method will also be named X and y.

• If the parameter X is None, set the instance variable X to a list of parsed emails from the ham directory followed by the spams.
In other words, you will have to traverse the filenames in those directories, pass each filename to parse_message (below), and append the parsed email to the list you are building. y should be a list of the same length as X containing 0's in the same positions as the ham emails in X and 1's corresponding to the spams.

• If the parameter X is not None, then instead assign the instance variables to the corresponding parameters.

parse_message: This instance method takes the name of a file containing an email with the following format:

– A line starting with From that has no colon.
– A bunch of header lines that start with a word followed by a colon, then more text.
– The subject line, which we want to grab, which starts with Subject:
– More header lines.
– A blank line.
– The rest of the email, from which we want to extract the text, except for header lines from previous emails, which have the format: whitespace, a word ending in a colon, more text.

Example:

From [email protected] Thu Aug 22 12:36:23 2002
Return-Path:
Delivered-To: [email protected]
Cc: [email protected]
Subject: Re: New Sequences Window
In-Reply-To: [email protected]
Date: Thu, 22 Aug 2002 18:26:25 +0700
Date: Wed, 21 Aug 2002 10:54:46 -0500
Message-ID: [email protected]

You can see as many examples as you like in your data folders. You will return a string containing a cleaned version of the email. Use the following steps:

• Open the email file with the following keyword arguments: errors = 'ignore', encoding = 'ascii'. This will strip all non-ASCII characters (for test repeatability, because special character handling can vary by OS and Python version).

• Ignore all of the header except for the subject line. (Hint: you can look for the first blank line to locate the end of the header.)

• Concatenate the tokens from the subject line to the string you're building, separated by spaces, except for the Re:'s (make sure you skip re: no matter what case it is).
• For each of the rest of the lines, pass the line to the static method parse_line and concatenate a space and the returned value onto the string you're building, unless the string is empty. If the string is empty and the returned value is not, set the string to the returned value. Call the static method thusly: LabeledData.parse_line(line).

parse_line: This static method takes a line from an email and returns the stripped line, unless it's a header line, in which case it returns the empty string. To make a static method, put @staticmethod on the line before the method signature and do not include the self parameter. This is a way of including a function definition within a class that is not tied directly to specific instances.

class NaiveBayesClassifier:

__init__: This instance method takes a LabeledData object, a pseudocount with a default value of 0.5, and a hyperparameter called max_words that limits the number of tokens used in classifying an email, with default value 50. To initialize your NaiveBayesClassifier:

• Assign the LabeledData object to an instance variable called labeled_data and the token limit to an instance variable called max_words.

• Assign a SnowballStemmer('english') object to an instance variable called stemmer.

• Count and store the number of spams and hams.

• Store a word count dictionary, obtained by calling the count_words instance method, in an instance variable called word_probs.

• Finally, process the word counts into estimated probabilities. Loop over the dictionary, replacing all of the frequencies with estimated probabilities according to the following formula (k is the pseudocount):

(freq + k) / (N + 2k)

tokenize: This instance method takes a string representing an email and returns a set representing a vector of lowercased, stemmed tokens, excluding any stop words. Make the token vectors case-insensitive by lowercasing the email message. Replace all occurrences of "n't" with the empty string (the stemmer doesn't handle these well).
Replace every character in string.punctuation with the empty string. Replace all digits with the empty string. Finally, split the resulting string, pass the individual words to the instance's stemmer, and add the returned values to a set. Create and return a set containing all of the stemmed tokens except those that are in stop_words.ENGLISH_STOP_WORDS.

count_words: This instance method returns a word count dictionary that maps tokens to 2-element lists. The first element is the frequency of spam emails that contain the token, the second the frequency of ham emails. Go through every email in the LabeledData object, tokenize it, and then loop through the resulting set of tokens, incrementing your dictionary appropriately for every token in the message. Return the dictionary.

get_tokens: When classifying a message, we won't use every token in the message, but rather just a randomly chosen subset. This instance method takes a token vector representing an email and returns a random sample of the tokens in it. You can use random.sample to select a subset. The sample should be of size the minimum of max_words and the length of the message. Select your sample from a sorted list of the keys (sorting isn't important for performance, it's just for test repeatability here).

spam_probability: This instance method takes a string representing an unlabeled, unclassified email and returns an estimated probability that it is spam. Tokenize the message. Return 1 if none of the tokens are keys in word_probs (what does this mean we are assuming about a message when it contains no tokens we've seen before?). Initialize variables to hold the logs of our estimated spam and ham probabilities. Get a random sample of the tokens. For each of the tokens in the sample, if the token is in the word_probs dictionary, update the log probability variables (this is where we're doing the joint conditional probability part of our Naïve Bayes algorithm). Turn the log probabilities into probabilities.
Return:

    (spam probability × 0.5) / (spam probability × 0.5 + ham probability × 0.5)

What are we assuming about the odds that a new email is spam/ham? What is strange about this formula the way it is written (code it up this way anyway so that your result matches mine)? Why do we have max_words? What happens as the number of tokens we use to calculate the probability gets bigger and bigger?

classify: This instance method takes a string representing an email and returns True if our classifier estimates (using spam_probability) that the probability it is spam is at least 50%, False otherwise.

predict: This instance method takes an N x 1 data matrix (list) of email messages and returns a list of Booleans representing our predictions as to the spamminess of each message. (Just call classify on every member of the list.)

main: Make a training dataset by calling LabeledData with no arguments. Make a testing dataset by calling LabeledData with the arguments 'data/2003/easy_ham' and 'data/2003/spam'. Make a classifier with the training data, setting max_words to 25. Use the data matrix in the test data to get a predictions list. Create a confusion matrix using the test data labels and the predictions. Print it. Your confusion matrix should resemble the following: Make sure you remove that line before you turn in your code. Print your test's accuracy score in the following format:

Include this code at the bottom of your module:

if __name__ == "__main__":
    main()

Run your code with several different values of max_words. What happens as max_words starts to get pretty big? Why?
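The spam_probability/classify logic described above can be sketched roughly as follows. This is an illustration only, not the assignment solution: the word_probs layout ([P(token|spam), P(token|ham)] lists) and the equal 0.5 priors follow the spec, but the tokenizer and random-sampling steps are omitted for brevity:

```python
import math

def spam_probability(tokens, word_probs):
    """Rough sketch: tokens is a set of stemmed tokens; word_probs maps
    token -> [P(token | spam), P(token | ham)]."""
    known = [t for t in tokens if t in word_probs]
    if not known:
        return 1  # no familiar tokens: per the spec, call it spam
    # Sum logs instead of multiplying many small probabilities directly
    log_spam = log_ham = 0.0
    for t in known:
        p_spam, p_ham = word_probs[t]
        log_spam += math.log(p_spam)
        log_ham += math.log(p_ham)
    spam_prob, ham_prob = math.exp(log_spam), math.exp(log_ham)
    # Equal 0.5 priors for spam and ham, as in the formula above
    return spam_prob * 0.5 / (spam_prob * 0.5 + ham_prob * 0.5)

def classify(tokens, word_probs):
    return spam_probability(tokens, word_probs) >= 0.5
```

For example, with word_probs = {'cash': [0.8, 0.1]}, a message whose only known token is 'cash' scores 0.8 / (0.8 + 0.1) ≈ 0.889 and is classified as spam.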


[SOLVED] Ista 331 hw 5: text mining with cosine similarity

1. Introduction

Cosine similarity is a way of measuring the similarity between two vectors by looking at the angle between them. Vectors that point in the same direction have an angle close to 0 between them; vectors that point in opposite directions have an angle close to π radians (or 180°), and vectors that are perpendicular have an angle near π/2 or 90°. The measure of similarity is called cosine similarity because instead of measuring the angle directly, we find the cosine of the angle. This can be conveniently calculated using a dot product. Remember that for vectors u, v, the dot product is defined by the formula

    u · v = Σ_{i=0}^{m−1} u_i v_i

where m is the dimension (number of elements in the vectors). It can be proved that the dot product can also be expressed in a geometric form:

    u · v = ‖u‖ ‖v‖ cos θ

where θ is the angle between the vectors.

Application: related documents. Our application is going to be to measure how similar documents are to one another. This can be used in practice to categorize documents by subject or style, or even attempt to determine who wrote a particular text.¹

1.1. Application to text analysis. We'll use cosine similarity along with the bag-of-words approach that we used in the naïve Bayes spam filter in HW2 to measure the similarity between pairs of documents. For a simplified version, consider the two sentences:

'I like rich chocolate cake more than I like apple pie.'
'My dog likes peanut butter more than he likes apple pie.'

Intuitively, these sentences are similar in terms of the language they use, their topic, etc. We can measure this by converting each sentence into a vector counting how many times each word appears:

word   | apple butter chocolate cake dog he I like likes …
count1 |   1     0       1        1   0   0 2   2    0   …
count2 |   1     1       0        0   1   1 0   0    2   …

This is a "bag of words" because the only thing we consider is how many times each word appears in the sentence.
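The claims about angles above are easy to check numerically. A small sketch (plain Python; the vectors are made-up examples, not from the assignment data):

```python
import math

def dot(u, v):
    # u . v = sum over i of u_i * v_i
    return sum(a * b for a, b in zip(u, v))

def cosine_similarity(u, v):
    # Rearranging u . v = ||u|| ||v|| cos(theta) for cos(theta)
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

print(cosine_similarity([1, 0], [2, 0]))   # same direction -> 1.0
print(cosine_similarity([1, 0], [0, 3]))   # perpendicular -> 0.0
print(cosine_similarity([1, 0], [-1, 0]))  # opposite -> -1.0
```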
¹ Since writing style has to do with much more than just word choice, usually cosine similarity is just a part of a larger strategy in authorship analysis.

To improve this analysis, we can drop very common words (like I, my, he, etc.) because they aren't very informative, and treat words such as 'like' and 'likes' as equivalent. By the end, the vectors might look something like this:

word   | apple butter chocolate cake dog like peanut pie rich
count1 |   1     0       1        1   0   2     0     1   1
count2 |   1     1       0        0   1   2     1     1   0

Then, we could calculate the dot product of these vectors to be 6, and the magnitude of each vector to be 3. Applying the formula above,

    u · v = 6 = 3 × 3 × cos θ

so that cos θ = 2/3. So we would say that the cosine similarity of these two sentences is 2/3. Notice we never actually calculate the angle; this is partly because it doesn't contain any additional information, and partly because a scale running from −1 to 1 is more intuitive than a scale running from 0 to π.²

This is the rough idea; there are a couple of refinements we want to use when implementing this in practice.

1.2. Stop words and stemming. Above, we stripped out some common words and also collapsed two similar words, 'like' and 'likes', to the same category. Common words we want to ignore are called stop words in natural language processing. We'll use a standard list of them from sklearn.feature_extraction. Include the following import:

from sklearn.feature_extraction import stop_words

Then, the list of words we want to ignore is found in stop_words.ENGLISH_STOP_WORDS. The next step is stemming, which refers to chopping off suffixes so that different forms of the same word get counted together. In the example above, this corresponds to counting 'likes' as the same word as 'like'. The Python package nltk has a few canned solutions for stemming; the one we'll use is called SnowballStemmer.
The way we use it is to initialize an instance like

stemmer = SnowballStemmer("english")

after which we can call the stem method and have it return the reduced word:

In [3]: stemmer.stem("likes")
Out[3]: 'like'

In natural language processing, the results of stemming are often called tokens to distinguish them from the unprocessed words.

1.3. TF-IDF. The last adjustment we need to make to the calculation above is to weight each word by a quantity called inverse document frequency (IDF). This gives extra weight to uncommon words, because otherwise more common words would dominate the calculation (even though uncommon words are often more distinctive, and so more informative). The formula for the IDF of a token t is

    IDF(t) = 1 + log2( total # of documents / # of docs containing t )

In the instructions below you'll be asked to use a slightly different formula.

² You might notice that in our application, the scale actually runs from 0 to 1; none of these vectors have negative entries, so the dot product is always ≥ 0. But in some situations, negative similarity may be possible.

The TF part of TF-IDF is term frequency, which is just the number of times a word appears in a document – i.e., just the same count we described above. The metric we'll use in the end is just the product of these two numbers, TF × IDF.

2. Instructions

Create a module named hw5.py. There are six text files on D2L; three are books from a popular science fiction series, and the other three are books from a popular fantasy series. We are going to create a matrix of the similarities of these documents to see if this technique can show which documents belong together. Code up the 9 functions as specified below, and upload your module to the D2L HW4 assignments folder.
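As a quick sanity check of the IDF formula from section 1.3 (pure Python; the document counts here are invented for illustration):

```python
import math

def idf(total_docs, docs_with_token):
    # IDF(t) = 1 + log2(total # of documents / # of docs containing t)
    return 1 + math.log2(total_docs / docs_with_token)

# A token that appears in every document gets the minimum weight:
print(idf(8, 8))  # 1.0
# A token that appears in only 2 of 8 documents is weighted more heavily:
print(idf(8, 2))  # 3.0
```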
You will need the following imports (don't copy and paste these – copying from PDFs can cause formatting issues):

import string
import pandas as pd
import numpy as np
from sklearn.feature_extraction import stop_words
from nltk.stem import SnowballStemmer

2.1. Testing. Download hw5_test.py and the auxiliary testing files and put them in the same folder as your hw5.py module.

2.2. Documentation. Your module must contain a header docstring containing your name, your section leader's name, the date, ISTA 331 Hw5, and a brief summary of the module. Each function must contain a docstring. Each docstring should include a description of the function's purpose; the name, type, and purpose of each parameter; and the type and meaning of the function's return value.

2.3. Grading. Your module will be graded on correctness, documentation, and coding style. Code should be clear and concise. You will only lose style points if your code is a real mess. Include inline comments to explain tricky lines and summarize sections of code.

2.4. Collaboration. Collaboration is allowed. You are responsible for your learning. Depending too much on others will hurt you on the tests. "Helping" others too much harms them in reality. Cite any sources/collaborators in your header docstring. Leaving this out is dishonest.

2.5. Resources.
• https://textminingonline.com/dive-into-nltk-part-iv-stemming-and-lemmatization
• https://stackoverflow.com/questions/10554052/what-are-the-major-differences-and-benefits-of-porter-and-lancaster-stemming-alg
• https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.cosine_similarity.html
• https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics.pairwise
• https://pypi.python.org/pypi/editdistance
• https://docs.python.org/3/library/stdtypes.html#mapping-types-dict
• https://docs.python.org/3/library/stdtypes.html#string-methods
3. Function specifications

Define the following functions:

• dot_product: takes two word vectors in the form of dictionaries and returns their dot product. (We only have to consider the words that exist in both dictionaries – why?)
• magnitude: takes a single word vector in the form of a dictionary and returns its magnitude.
• cosine_similarity: takes two word vectors in the form of dictionaries and returns their cosine similarity (use the previous two functions).
• get_text: takes the filename of a text file and returns a single string containing the cleaned-up contents of the file. Here's what we want to do to clean it:
  – remove every occurrence of the string "n't";
  – remove every occurrence of a character from string.punctuation;
  – make everything lowercase; and,
  – get rid of all characters representing digits.
• vectorize: takes a filename, a stopwords list, and a stemmer, and returns a dictionary representing a wordcount vector mapping cleaned words from the file to wordcounts. In other words, you should get the text from the file, break it into a list of tokens, pass each token through the stemmer, and, if it is not a stop word, add it to or increment its count in the vector. Make sure that the empty string is not a key in the vector.
• get_doc_freqs: takes a list of wordcount vectors (i.e., dictionaries) and returns a dictionary that maps each key from all of the vectors to the number of vectors that word appears in.
• tfidf: takes a list of wordcount vectors (i.e., dictionaries) and replaces the word counts with TF-IDF measurements (see above). Use the formula

    IDF(t) = 1 + log2( scale × total # of documents / # of documents containing t )

The scale parameter is not standard; it is included here to compensate for a low number of documents. Use scale = 1 if there are 100 or more documents; otherwise, scale = 100 / (# of documents).
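A minimal sketch of the first three functions, using the dictionary representation described above (treat this as an illustration, not a reference solution):

```python
import math

def dot_product(u, v):
    # A word missing from either dictionary contributes 0 to the sum,
    # so we only need the words present in both.
    return sum(u[word] * v[word] for word in u if word in v)

def magnitude(u):
    return math.sqrt(sum(count ** 2 for count in u.values()))

def cosine_similarity(u, v):
    return dot_product(u, v) / (magnitude(u) * magnitude(v))
```

On the two-sentence example from section 1.1 this reproduces the hand calculation: dot product 6, magnitudes 3 and 3, similarity 2/3.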
Then, define the following function to test our similarity measure and compare our example documents:

• get_similarity_matrix: takes a list of filenames, a stopwords list, and a stemmer, and returns a DataFrame containing the matrix of document similarities. Your DataFrame should use the filenames for both its index and its column names, and the [doc_a, doc_b] entry of the matrix should be the cosine similarity between the TF-IDF vectors of documents doc_a and doc_b.

Note: you can solve this by looping over the full row and column indices and calculating the similarity for each pair of indices, but this is somewhat inefficient. We know in advance that:
– the matrix is symmetric (i.e., similarity(A, B) = similarity(B, A));
– the diagonal entries are all 1, since the cosine similarity of any vector with itself is 1.

This means that you can calculate the upper triangle of the matrix (A_{i,j} where j > i) and then copy those values to the lower triangle, and fill the diagonal with 1s. Doing this skips unnecessary calculations and makes your function run about twice as fast, which will save you testing time because the documents we're using are pretty big.

Finally, define a main function that uses get_similarity_matrix to compute the similarity matrix for the six sample documents, 00001.txt, 00002.txt, etc. These six documents contain the text of three books each from two popular science fiction/fantasy series. Based only on the similarity matrix, can you group the books into their respective series?
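The upper-triangle optimization described above can be sketched as follows (an illustration under simplified assumptions: the word vectors are passed in directly rather than built from files, and cosine_similarity is a local helper rather than the graded function):

```python
import math
import pandas as pd

def cosine_similarity(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    mag = lambda d: math.sqrt(sum(c * c for c in d.values()))
    return dot / (mag(u) * mag(v))

def similarity_matrix(vectors, names):
    """Build a symmetric DataFrame of pairwise cosine similarities."""
    df = pd.DataFrame(0.0, index=names, columns=names)
    n = len(vectors)
    for i in range(n):
        df.iloc[i, i] = 1.0              # a vector vs. itself
        for j in range(i + 1, n):        # upper triangle only
            sim = cosine_similarity(vectors[i], vectors[j])
            df.iloc[i, j] = sim
            df.iloc[j, i] = sim          # mirror to the lower triangle
    return df
```

Each off-diagonal similarity is computed once and written twice, which is where the roughly 2x speedup comes from.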


[SOLVED] Web222 assignment 5 overview this assignment is designed to have you practice building more complex html and css layouts.

 This assignment is designed to have you practice building more complex HTML and CSS layouts.  We will continue to iterate on your previous Assignment 4 web store’s static and dynamic web content. You are asked to update the design of your fictional online store.  This time, instead of displaying your products in an HTML table, you will create visual product “cards” that show a picture, name, description, and price. You must do all the work for this assignment on your own.  You may consult your notes, use the web for inspiration, but you should not copy code directly from other sites, or other students.  If you need help, ask your professor. Cards Cards on the web, much like trading or playing cards, are rectangular areas that allow you to visually present a lot of related data.  We often see them used in online stores, social media, and anywhere that we want to mix images, titles, and text in a rectangle.  Here are some real-world examples from Rothy’s, Amazon, and airbnb:   There are lots of resources you can use to learn more about creating a card, for example: Update Your Store to Use Cards Modify your solution to Assignment 4 in order to replace the HTML table with rows of cards.  To do this, you should follow these steps:   Use CSS classes on your card’s elements in order to apply colours, fonts, margins, padding, borders, etc. until you have something that you like.  Here’s an example, which uses rounded corners, a subtle shadow, different font sizes, and a large photo at the top.        Make sure you optimize the images so they are not too big to download (i.e., don’t use a 5000×6000 image in a card that uses 400×200). You can use https://squoosh.app/ for images that you download.  Or you can also use a trick with https://unsplash.com/ images to resize them automatically via the URL.  For example, the bike above is https://unsplash.com/photos/tG36rvCeqng.  
Here's the full-sized image https://images.unsplash.com/photo-1485965120184-e220f721d03e (it's 3.8M in size, and 4440×2960). We can reduce that image by adding some parameters to the image URL: ?auto=format&fit=crop&w=750&q=80 to crop and resize it to 750 pixels wide, and reduce the quality a bit to 80%, like this: https://images.unsplash.com/photo-1485965120184-e220f721d03e?auto=format&fit=crop&w=750&q=80 See https://unsplash.com/documentation#dynamically-resizable-images for more details.

function createProductCard(product) {
  // Create a <div> to hold the card
  const card = document.createElement('div');
  // Add the .card class to the card
  card.classList.add("card");

  // Create a product image, use the .card-image class
  const productImage = document.createElement('img');
  productImage.src = product.imageUrl;
  productImage.classList.add("card-image");
  card.appendChild(productImage);

  // ... rest of your card building code here

  // Return the card's element to the caller
  return card;
}

Use your copy of the website starter project in the assignment-4 ZIP file. Install all dependencies by running the following command in the root of the assignment (e.g., in the same directory as package.json):

npm install

Your code should all be placed in the src/ directory. You can start a local web server to test your code in a browser by running the following command:

npm start

This will start a server on http://localhost:8080, which you can open in your web browser. To stop the server, use CTRL + C.

When you are finished, run the following command to create your submission ZIP file:

npm run prepare-submission

This will generate submission.zip, which you can hand in on Blackboard.


[SOLVED] Web222 assignment 4 overview this assignment is designed to have you practice working with html and the dom

This assignment is designed to have you practice working with HTML and the DOM in order to create both static and dynamic web content. You are asked to prototype a fictional online store. Your store will sell several different product categories, and many products in those categories. Because a store's products and categories will change frequently, we often separate our data from its UI representation. This allows us to quickly make changes and have the store's web site always use the most current inventory information.

NOTE: in a real e-commerce web store, our data would be stored in a database. We will simulate working with a database by using JavaScript Objects and Arrays. You need to decide on the following details for your store:

Each category needs two things:

Each product needs the following things:

Your category and product data will go in `src/categories.js` and `src/products.js` respectively. See these files for technical details about how to code your data. Take some time now to enter all of your store's data. Your store's HTML file is located in `src/index.html`. A brief HTML skeleton has been created, and you are asked to fill in the rest using your information above. Some of your site will be static (i.e., coded in HTML directly in index.html) and never change. Other parts of the site will be dynamic (i.e., created using DOM API calls at run-time) and will update in response to various events and user actions.

Here is a basic wireframe of what your site needs to include, and which parts are static or dynamic. NOTE: don't worry too much about how it looks. Focus on the structure and functionality. (Wireframe: Store Name, Store Slogan/Description, a Category1 / Category2 / Category3 navigation area, and a "Category1 Name" product listing area.)

All of your store's dynamic content will be written in JavaScript in the `src/app.js` file.
Here is a list of the tasks you need to complete:

In your solution, you will likely require all of the following:

You are encouraged to use what you learned in the first 3 assignments and write proper functions and semantic HTML. Use the website starter project in the assignment ZIP file. Install all dependencies by running the following command in the root of the assignment (e.g., in the same directory as package.json):

npm install

Your code should all be placed in the src/ directory. You can start a local web server to test your code in a browser by running the following command:

npm start

This will start a server on http://localhost:8080, which you can open in your web browser. To stop the server, use CTRL + C.

When you are finished, run the following command to create your submission ZIP file:

npm run prepare-submission

This will generate submission.zip, which you can hand in on Blackboard.


[SOLVED] Web222 assignment 3 objective: practice writing html markup, using media elements, writing for the web, and using open archives.

Objective: Practice writing HTML Markup, Using Media Elements, Writing for the Web, and Using Open Archives.  You are asked to create a small wildlife educational website.   You will pick a species of animal, bird, insect, fish, etc. and research this species online.  You will then create a multimedia website that uses resources about your chosen species (e.g., images, audio, and video) from open web archives. The web is full of both proprietary and open-licensed resources.  The former cannot be reused by you: you can’t take an image or logo from someone else’s site and use it on your own.  This is a copyright violation.  However, there are also many open resources that you can copy and reuse.  Learning how to find and use these correctly is important when building your own web content.  Step 1. Choose a Species Pick a species to research from those listed in iNaturalist, see: https://www.inaturalist.org/observations?place_id=any&view=species It can be a plant, animal, insect, bird, etc.  Ideally you should choose a species that lives near you, but you are free to also choose something else that you find interesting.  You must work on your own species (i.e., you can’t partner with other students in the course).  Given the number of natural species in the world, it would be surprising if two students chose the same one. Step 2. Research on iNaturalist.org Research your chosen species using iNaturalist’s website.  For example, if you were interested in the Red-bellied Woodpecker, you would begin with the following page: https://www.inaturalist.org/taxa/18205-Melanerpes-carolinus Learn as much as you can about the species.  Take notes to help you with the creation of your website.  You may NOT copy the text word-for-word, only use it as background material.    Step 3. Research on 3 Other Platforms Conduct a similar search for other sources of information about your chosen species.  Find 3 other web resources to use in your research.  
Try to find reputable sources of information. Take notes as you do your research on these other sites and keep track of all the sites/URLs you use. You will need to properly cite these in your about.html page (see below).

Step 4. Write a Research Summary

Write a 750-to-1000-word summary of your research. Your goal is to educate a non-scientific audience about your chosen species. Give them an overview and summary. You should define any terms you use and help your reader understand the concepts you discuss. You may NOT copy/paste any text; all words must be your own.

Step 5. Convert to Markup

Convert your text to HTML5 markup. Make use of any and all appropriate HTML elements (https://developer.mozilla.org/en-US/docs/Web/HTML/Element). For example, if you use lists or acronyms, quotes or technical terms, dates or definitions, etc., you should make use of the associated HTML5 elements. You will be graded on the appropriate use of HTML5 elements: you can't mark everything up with a single generic element. In your final markup, you should try to use most of the following HTML5 semantic elements (see https://developer.mozilla.org/en-US/docs/Web/HTML/Element):

You will be marked on your knowledge and use of these elements, and how well you have used them to mark up your text. You may NOT submit a series of plain text paragraphs with no other elements. Spend some time choosing and implementing your markup.

Step 6. Add Media

Find supporting media resources to help educate the reader on your topic. Media helps tell a story and is one of the secret powers that the web has over other print media.
Here's an example web page from the Globe and Mail newspaper that uses a mix of text and media well: https://www.theglobeandmail.com/canada/article-the-last-lighthouse-keeper-why-a-nova-scotian-couple-refused-to-leave/

In this site you see all of the following HTML5 and media being used:

Your site doesn't need to be this elaborate, but hopefully you get some ideas to help guide your use of text and media. You can use any open-licensed media resource that allows reuse, but you may not use copyrighted materials. How do you know if something is copyrighted? Everything is copyrighted! Unless you are told you can reuse something that you find, assume that you can't. Open-licensed materials will be marked as such.

Here are some links to help you find open-licensed media:

You are asked to include the following open-licensed resources on your page:

Use appropriate HTML to include these resources in your site along with the text you have written. You may link to external URLs where applicable (i.e., you don't have to download and use resources if they are publicly hosted). Make sure you do the following:

Step 7. Add a Basic Stylesheet

This assignment is not about the page's style (fonts, colours, etc.). We will focus on style when we look at CSS later in the course. However, you are encouraged to use one of the various "class-less" CSS stylesheets described here: https://css-tricks.com/no-class-css-frameworks/

These stylesheets can be included in the <head> of your document, for example:

Try experimenting with some of these stylesheets to find one that makes your page look good to you. Use the website starter project in the assignment ZIP file. Install all dependencies by running the following command in the root of the assignment (e.g., in the same directory as package.json):

npm install

Your code should all be placed in the src/ directory.
You will find 3 HTML files there now, which should be updated by you as follows:

NOTE: you are welcome to create other pages if you need them. Just remember to link all of your pages together. You can start a local web server to test your code in a browser by running the following command:

npm start

This will start a server on http://localhost:3000, which you can open in your web browser. To stop the server, use CTRL + C.

When you are finished, run the following command to create your submission ZIP file:

npm run prepare-submission

This will generate submission.zip, which you can hand in on Blackboard.


[SOLVED] Oop345 workshop #4: factory assembly line the purpose of this workshop is to put your c++ object oriented skills to practice by developing a simulation

The purpose of this workshop is to put your C++ Object Oriented skills to practice by developing a simulation of an assembly line with any number of stations. A line with 3 stations is illustrated in the figure below.

![Assembly Line](assemblyline.jpg)

The assembly line in your solution consists of a set of workstations, each of which holds a set of stock items specific to the station. A line manager moves customer orders along the line, filling the orders at each station as requested. Each customer order consists of a list of items that need to be filled. Each station processes a queue of orders by filling the next order in the queue if that order requests the station's item and that item is in stock. The line manager keeps moving the customer orders from station to station until all orders have been processed. Any station that has used all the items in stock cannot fill any more orders. At the end of the line, orders are either completed or incomplete due to a lack of inventory at one or more stations. The simulator lists the completed orders and those that are incomplete once the line manager has finished processing all orders.

The workshop is divided into 3 testers to help guide you through implementation, debugging and execution. Each tester will focus on different aspects/classes of your solution. For full credit, you must have three submissions (one with each tester); however, it is possible to get partial marks by submitting using only some testers.

This application is more complex than previous workshops and **will put your debugging skills to full use**.

| Submission using tester | Max (%) |
| ----------------------- | ------- |
| #1                      | 20%     |
| #2                      | 30%     |
| #3                      | 50%     |

## Submission Policy

The workshop should contain ***only work done by you this term*** or provided by your professor.
Work done in another term (by you or somebody else), or work done by somebody else and not **clearly identified/cited**, is considered plagiarism, in violation of the Academic Integrity Policy.

Every file that you submit must contain (as a comment) at the top **your name**, **your Seneca email**, **your Seneca Student ID** and the **date** when you completed the work.

- If the file contains only your work, or work provided to you by your professor, add the following message as a comment at the top of the file:

  > I declare that this submission is the result of my own work and I only copied the code that my professor provided to complete my workshops and assignments. This submitted piece of work has not been shared with any other student or 3rd party content provider.

- If the file contains work that is not yours (you found it online or somebody provided it to you), **write exactly which parts of the assignment are given to you as help, who gave it to you, or which source you received it from.** By doing this you will only lose the mark for the parts you got help for, and the person helping you will be clear of any wrongdoing.

All of your source code, including externally linked variables, should be in the `seneca` namespace.
Use class declarations in header files wherever appropriate.

## Compiling and Testing Your Program

All your code should be compiled using this command on `matrix`:

```bash
/usr/local/gcc/10.2.0/bin/g++ -Wall -std=c++17 -g -o ws file1.cpp file2.cpp …
```

- `-Wall`: the compiler will report all warnings
- `-std=c++17`: the code will be compiled using the C++17 standard
- `-g`: the executable file will contain debugging symbols, allowing *valgrind* to create better reports
- `-o ws`: the compiled application will be named `ws`

After compiling and testing your code, run your program as follows to check for possible memory leaks (assuming your executable name is `ws`):

```bash
valgrind --show-error-list=yes --leak-check=full --show-leak-kinds=all --track-origins=yes ws
```

- `--show-error-list=yes`: show the list of detected errors
- `--leak-check=full`: check for all types of memory problems
- `--show-leak-kinds=all`: show all types of memory leaks identified (enabled by the previous flag)
- `--track-origins=yes`: tracks the origin of uninitialized values (`g++` must use the `-g` flag for compilation, so the information displayed here is meaningful)

To check the output, use a program that can compare text files. Search online for such a program for your platform, or use `diff`, available on `matrix`.

# Factory Assembly Line

## Tester Modules

To test your application, you are provided with 3 different tester modules (do not modify any of them):

- `tester_1`: will focus on the functionality offered by the `Utilities` and `Station` modules.
- `tester_2`: will focus on the functionality offered by the `CustomerOrder` module, and make use of the functionality offered by the previous modules.
- `tester_3`: will focus on the functionality offered by the `Workstation` and `LineManager` modules, and make use of the functionality offered by the previous modules.
**Passing the tests performed by `tester_3` will require very strong debugging skills** and a deep understanding of how the assembly line works; discuss at the lab with your professor.

For each tester, the expected sample output is provided. Look in each file for the command line necessary to start the application and the expected output.

## `Utilities` Module

The `Utilities` module supports the parsing of input files, which contain information used to set up and configure the assembly line. Parsing string data from input files into tokens is performed uniformly for all objects within the simulation system. The `Utilities` type provides the basic functionality required for all objects in the system.

The `Utilities` class has the following structure:

### Instance Variable

- `m_widthField` — specifies the length of the token extracted; used for display purposes; default value is `1`.

### Class Variable

- `m_delimiter` — separates the tokens in any given `std::string` object. All `Utilities` objects in the system **share the same delimiter**.

### Member Functions

- `void setFieldWidth(size_t newWidth)` — sets the field width of the current object to the value of parameter `newWidth`
- `size_t getFieldWidth() const` — returns the field width of the current object
- `std::string extractToken(const std::string& str, size_t& next_pos, bool& more)` — extracts a token from the string `str` referred to by the first parameter. This function:
  - uses the delimiter to extract the next token from `str` starting at position `next_pos`. If successful, it returns a copy of the extracted token (without spaces at the beginning/end), updates `next_pos` with the position of the next token, and sets `more` to `true` (`false` otherwise).
  - reports an exception if a delimiter is found at `next_pos`.
  - updates the current object's `m_widthField` data member if its current value is less than the size of the token extracted.
**Note:** in this application, `str` represents a single line that has been read from an input file.

### Class Functions

- `static void setDelimiter(char newDelimiter)` — sets the delimiter for this class to the character received
- `static char getDelimiter()` — returns the delimiter for this class

## `Station` Module

The `Station` module manages information about a station on the assembly line, which holds a specific item and fills customer orders.

The `Station` class has the following structure:

### Instance Variables

- the id of the station (integer)
- the name of the item handled by the station (string)
- the description of the station (string)
- the next serial number to be assigned to an item at this station (non-negative integer)
- the number of items currently in stock (non-negative integer)

### Class Variables

- `m_widthField` — the maximum number of characters required to print to the screen the *item name* for any object of type `Station`. Initial value is 0.
- `id_generator` — a variable used to generate IDs for new instances of type `Station`. Every time a new instance is created, the current value of the `id_generator` is stored in that instance, and `id_generator` is incremented. Initial value is 0.

### Public Functions

- custom 1-argument constructor
  - upon instantiation, a `Station` object receives a reference to an unmodifiable `std::string`. This string contains a single record (one line) that has been retrieved from the input file specified by the user.
  - this constructor uses a `Utilities` object (defined locally) to extract each token from the record and populates the `Station` object accordingly.
  - this constructor assumes that the string contains 4 fields separated by the delimiter, in the following order:
    - name of the item
    - starting serial number
    - quantity in stock
    - description
  - the token delimiter is a single character, specified by the client and previously stored into the `Utilities` class of objects.
  - this constructor extracts *name*, *starting serial number*, and *quantity* from the string first
  - before extracting *description*, it updates `Station::m_widthField` to the maximum of `Station::m_widthField` and `Utilities::m_widthField`.
    - **Note:** the `display(…)` member function uses this field width to align the output across all the records retrieved from the file.
- `const std::string& getItemName() const` – returns the name of the current `Station` object
- `size_t getNextSerialNumber()` – returns the next serial number to be used on the assembly line and increments `m_serialNumber`
- `size_t getQuantity() const` – returns the remaining quantity of items in the `Station` object
- `void updateQuantity()` – subtracts 1 from the available quantity; should not drop below 0.
- `void display(std::ostream& os, bool full) const` — inserts information about the current object into stream `os`.
  - if the second parameter is `false`, this function inserts only the ID, name, and serial number in the format: `ID | NAME | SERIAL | `
  - if the second parameter is `true`, this function inserts the information in the following format: `ID | NAME | SERIAL | QUANTITY | DESCRIPTION`
  - the `ID` field uses 3 characters, the `NAME` field uses `m_widthField` characters, the `QUANTITY` field uses 4 characters, the `SERIAL` field uses 6 characters; the `DESCRIPTION` has no formatting options (see the sample output for other formatting options)
  - this function terminates the printed message with an endline

## `CustomerOrder` Module

The `CustomerOrder` module contains all the functionality for processing customer orders as they move from `Station` to `Station` along the assembly line.
The `Station` where a given order currently rests fills a request for one item of that station, if there is any such request.

A `CustomerOrder` object manages a single order on the assembly line and contains the following information:

The `CustomerOrder` class has the following structure:

### Item Definition

```cpp
struct Item
{
    std::string m_itemName;
    size_t m_serialNumber{0};
    bool m_isFilled{false};

    Item(const std::string& src) : m_itemName(src) {}
};
```

### Instance Variables

- `std::string m_name` – the name of the customer (e.g., John, Sara, etc.)
- `std::string m_product` – the name of the product being assembled (e.g., Desktop, Laptop, etc.)
- `size_t m_cntItem` – a count of the number of items in the customer's order
- `Item** m_lstItem` – a dynamically allocated array of pointers. Each element of the array points to a dynamically allocated object of type `Item` (see above). **This is the resource** that your class must manage.

### Class Variable

- `static size_t m_widthField` – the maximum width of a field, used for display purposes

### Member Functions

- default constructor
- a custom 1-argument constructor that takes a reference to an unmodifiable string. This constructor uses a local `Utilities` object to extract the tokens from the string and populate the current instance. The fields in the string are (separated by a delimiter):
  - Customer Name
  - Order Name
  - the list of items making up the order (at least one item)

  After finishing extraction, this constructor updates `CustomerOrder::m_widthField` if the current value is smaller than the value stored in `Utilities::m_widthField`.
- a `CustomerOrder` object should not allow copy operations. The copy constructor should throw an exception if called and the copy `operator=` should be deleted.
- a move constructor. This constructor should "promise" that it doesn't throw exceptions. Use the `noexcept` keyword in the declaration and the definition.
- a move assignment operator.
This operator should "promise" that it doesn't throw exceptions. Use the `noexcept` keyword in the declaration and the definition.
- a destructor
- `bool isOrderFilled() const` – returns `true` if all the items in the order have been filled; `false` otherwise
- `bool isItemFilled(const std::string& itemName) const` – returns `true` if all items specified by `itemName` have been filled. If the item doesn't exist in the order, this query returns `true`.
- `void fillItem(Station& station, std::ostream& os)` – this modifier fills **one** item in the current order that the `Station` specified in the first parameter handles.
  - if the order doesn't contain the item handled, this function does nothing
  - if the order contains items handled, and the `Station`'s inventory contains at least one item, this function fills the order with one single item. It subtracts 1 from the inventory and updates `Item::m_serialNumber` and `Item::m_isFilled`. It also prints the message `    Filled NAME, PRODUCT [ITEM_NAME]`.
  - if the order contains items handled but unfilled, and the inventory is empty, this function prints the message `    Unable to fill NAME, PRODUCT [ITEM_NAME]`.
  - all messages printed are terminated by an endline
- `void display(std::ostream& os) const` – this query displays the state of the current object in the format (see the sample output for details)

  ```
  CUSTOMER_NAME – PRODUCT
  [SERIAL] ITEM_NAME – STATUS
  [SERIAL] ITEM_NAME – STATUS
  …
  ```

  - `SERIAL` – a field of width 6
  - `ITEM_NAME` – a field of size `m_widthField`
  - `STATUS` is either `FILLED` or `TO BE FILLED`
  - you must use IO manipulators to format this output.

## `Workstation` Module

The `LineManager` module first configures the assembly line and then moves `CustomerOrders` along it (from start to finish). The `LineManager` object configures the `Workstation` objects identified by the user, and moves orders along the line one step at a time.
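Looking back at the `CustomerOrder` move operations specified above, the sketch below shows one possible shape of a move-only class that owns a dynamically allocated array of `Item` pointers. It is only an illustration under the member names listed in the spec (`m_lstItem`, `m_cntItem`, etc.); the `itemCount()` helper is hypothetical and added purely so the sketch can be exercised.

```cpp
#include <cstddef>
#include <string>
#include <utility>

// Item definition as given in the spec.
struct Item
{
    std::string m_itemName;
    size_t m_serialNumber{0};
    bool m_isFilled{false};

    Item(const std::string& src) : m_itemName(src) {}
};

// Hypothetical minimal CustomerOrder showing the move-only design the spec
// asks for: copies are forbidden, moves transfer ownership of the array.
class CustomerOrder
{
    std::string m_name{};
    std::string m_product{};
    size_t m_cntItem{0};
    Item** m_lstItem{nullptr};   // the managed resource
public:
    CustomerOrder() = default;
    // Spec: copy ctor throws, copy assignment is deleted.
    CustomerOrder(const CustomerOrder&) { throw std::string("copying not allowed"); }
    CustomerOrder& operator=(const CustomerOrder&) = delete;
    // Moves must be noexcept in both declaration and definition.
    CustomerOrder(CustomerOrder&& src) noexcept { *this = std::move(src); }
    CustomerOrder& operator=(CustomerOrder&& src) noexcept
    {
        if (this != &src)
        {
            for (size_t i = 0; i < m_cntItem; ++i) delete m_lstItem[i];
            delete[] m_lstItem;
            m_name = std::move(src.m_name);
            m_product = std::move(src.m_product);
            m_cntItem = src.m_cntItem;
            m_lstItem = src.m_lstItem;
            src.m_lstItem = nullptr;   // leave source safe to destroy
            src.m_cntItem = 0;
        }
        return *this;
    }
    ~CustomerOrder()
    {
        for (size_t i = 0; i < m_cntItem; ++i) delete m_lstItem[i];
        delete[] m_lstItem;
    }
    size_t itemCount() const { return m_cntItem; }  // illustration helper only
};
```

Note how the move assignment first releases the current resource, then steals the source's pointers and resets the source; the move constructor can then delegate to it safely because the members start out null.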
A `Workstation` is a `Station` that the `LineManager` has activated on the user's request. At each step, every `Workstation` fills one item in a `CustomerOrder`, if possible. The manager moves orders from station to station. Once an order has reached the end of the line, it is either complete or incomplete. An order is incomplete if one or more stations had an insufficient number of items in stock to cover that order's requests.

The `Workstation` module consists of three double-ended queues of `CustomerOrder` and the `Workstation` class. The queues (global variables) hold the orders at either end of the assembly line:

- `g_pending` holds the orders to be placed onto the assembly line at the first station.
- `g_completed` holds the orders that have been removed from the last station and have been completely filled.
- `g_incomplete` holds the orders that have been removed from the last station and could not be filled completely.

Each queue is accessible outside this module's translation unit.

The `Workstation` class defines the structure of an active station on the assembly line and contains all the functionality for filling customer orders with station items. Each `Workstation` is-a-kind-of `Station`. A `Workstation` object manages order processing for a single `Item` on the assembly line. Since a `Workstation` object represents a single location on the assembly line for filling customer orders with items, the object cannot be copied or moved. Make sure that this capability is deleted in your definition of the `Workstation` class.

The `Workstation` class includes the following additional information:

### Instance Variables

- `m_orders` – a double-ended queue with `CustomerOrders` entering the back and exiting the front.
These are orders that have been placed on this station to receive service (or have already received service).
- `m_pNextStation` – a pointer to the next `Workstation` on the assembly line.

### Member Functions

- a custom 1-argument constructor — receives a reference to an unmodifiable `std::string` and passes it to the `Station` base class.
- `void fill(std::ostream& os)` – this modifier fills the order at the front of the queue if there are `CustomerOrders` in the queue; otherwise, does nothing.
- `bool attemptToMoveOrder()` – attempts to move the order at the front of the queue to the next station in the assembly line:
  - if the order requires no more service at this station or cannot be filled (not enough inventory), move it to the next station; otherwise do nothing
    - if there is no next station in the assembly line, then the order is moved into the `g_completed` or `g_incomplete` queue
  - if an order has been moved, return `true`; `false` otherwise.
- `void setNextStation(Workstation* station)` – this modifier stores the address of the referenced `Workstation` object in `m_pNextStation`. The parameter defaults to `nullptr`.
- `Workstation* getNextStation() const` – this query returns the address of the next `Workstation`
- `void display(std::ostream& os) const` – this query inserts the name of the `Item` for which the current object is responsible into stream `os` following the format: `ITEM_NAME --> NEXT_ITEM_NAME`
  - if the current object is the last `Workstation` in the assembly line, this query inserts: `ITEM_NAME --> End of Line`.
  - in either case, the message is terminated with an endline
- `Workstation& operator+=(CustomerOrder&& newOrder)` – moves the `CustomerOrder` referenced in parameter `newOrder` to the back of the queue.

## `LineManager` Module

The `LineManager` class manages an assembly line of active stations and contains the following information:

***Instance Variables***

- `std::vector<Workstation*> m_activeLine` – the collection of workstations for the current assembly line.
- `size_t m_cntCustomerOrder` – the total number of `CustomerOrder` objects
- `Workstation* m_firstStation` – points to the first active station on the current line

***Member Functions***

- `LineManager(const std::string& file, const std::vector<Workstation*>& stations)` – this constructor receives the name of the file that identifies the active stations on the assembly line (example: `AssemblyLine.txt`) and the collection of workstations available for configuring the assembly line. The file contains the linkage between workstation pairs. The format of each record in the file is `WORKSTATION|NEXT_WORKSTATION`. The records themselves are not in any particular order. This function stores the workstations in the order received from the file in the `m_activeLine` instance variable. It loads the contents of the file, stores the address of the next workstation in each element of the collection, identifies the first station in the assembly line, and stores its address in the `m_firstStation` attribute. This function also updates the attribute that holds the total number of orders in the `g_pending` queue. If something goes wrong, this constructor reports an error.
**Note**: to receive full marks, use STL algorithms throughout this function, except for iterating through the file records (one `while` loop); marks will be deducted if you use any of `for`, `while` or `do-while` loops except for iterating through the file records.
- `void reorderStations()` – this modifier reorders the workstations present in the instance variable `m_activeLine` (loaded by the constructor) and stores the reordered collection in the same instance variable. The elements in the reordered collection start with the first station, proceed to the next, and so forth until the end of the line.
- `bool run(std::ostream& os)` – this modifier performs **one** iteration of operations on all of the workstations in the current assembly line by doing the following:
  - keeps track of the current iteration number (use a local variable)
  - inserts into stream `os` the iteration number (how many times this function has been called by the client) in the format `Line Manager Iteration: COUNT`
  - moves the order at the front of the `g_pending` queue to the `m_firstStation` and removes it from the queue. This function moves only one order to the line on a single iteration.
  - for each station on the line, executes one fill operation
  - for each station on the line, attempts to move an order down the line
  - returns `true` if all customer orders have been filled or cannot be filled; otherwise returns `false`.
- `void display(std::ostream& os) const` — this query displays all active stations on the assembly line in their current order

## Submission

Create a **text** file named `reflect.txt`.
Add any comments you wish to make.

For `tester_1`, upload to matrix the files:

- `Utilities.h`
- `Utilities.cpp`
- `Station.h`
- `Station.cpp`
- `reflect.txt`

From a command prompt, execute the following command:

```bash
~profname.proflastname/submit 345_w4_tester_1
```

For `tester_2`, upload to matrix the files:

- `Utilities.h`
- `Utilities.cpp`
- `Station.h`
- `Station.cpp`
- `CustomerOrder.h`
- `CustomerOrder.cpp`
- `reflect.txt`

From a command prompt, execute the following command:

```bash
~profname.proflastname/submit 345_w4_tester_2
```

For `tester_3`, upload to matrix the files:

- `Utilities.h`
- `Utilities.cpp`
- `Station.h`
- `Station.cpp`
- `CustomerOrder.h`
- `CustomerOrder.cpp`
- `Workstation.h`
- `Workstation.cpp`
- `LineManager.h`
- `LineManager.cpp`
- `reflect.txt`

From a command prompt, execute the following command:

```bash
~profname.proflastname/submit 345_w4_tester_3
```

> [!WARNING]
> Please note that a successful submission does not guarantee full credit for this workshop. If the professor is not satisfied with your implementation, your professor may ask you to resubmit. Resubmissions will attract a penalty.
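Before submitting, it can help to sanity-check the trickiest routine in this workshop, `Utilities::extractToken`, against the rules described in the `Utilities` section. The sketch below is a hedged, stand-alone interpretation under the member names listed there; the exact exception type and the trimming details are assumptions, not the official solution.

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

// Stand-alone sketch of the extraction rules from the `Utilities` section:
// split `str` on the shared delimiter, trim surrounding spaces, report via
// `more` whether tokens remain, and throw if a delimiter sits at `next_pos`.
class Utilities
{
    size_t m_widthField{1};
    static char m_delimiter;   // shared by all Utilities objects
public:
    static void setDelimiter(char d) { m_delimiter = d; }
    static char getDelimiter() { return m_delimiter; }
    void setFieldWidth(size_t w) { m_widthField = w; }
    size_t getFieldWidth() const { return m_widthField; }

    std::string extractToken(const std::string& str, size_t& next_pos, bool& more)
    {
        // A delimiter exactly at next_pos means an empty token: report it.
        if (next_pos < str.size() && str[next_pos] == m_delimiter)
            throw std::runtime_error("Delimiter found at next_pos");

        size_t end = str.find(m_delimiter, next_pos);
        std::string token = str.substr(next_pos,
            end == std::string::npos ? std::string::npos : end - next_pos);

        // Trim leading/trailing spaces.
        size_t first = token.find_first_not_of(' ');
        size_t last  = token.find_last_not_of(' ');
        token = (first == std::string::npos) ? "" : token.substr(first, last - first + 1);

        if (end == std::string::npos) { more = false; next_pos = str.size(); }
        else                          { more = true;  next_pos = end + 1;    }

        // Grow the field width if this token is the widest seen so far.
        if (token.size() > m_widthField) m_widthField = token.size();
        return token;
    }
};
char Utilities::m_delimiter = '|';
```

For example, extracting from `"  CPU | 123"` with the default `'|'` delimiter yields `"CPU"` (with `more == true`), then `"123"` (with `more == false`), and leaves the field width at 3.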


[SOLVED] Oop345 workshop #3: multimedia management application in this workshop, you will create an application to manage movies and books.

In this workshop, you will create an application to manage movies and books. The information about different media items will be loaded in objects from files, and the object will be managed by collection types using STL collections and algorithms.## Submission PolicyThe workshop should contain ***only work done by you this term*** or provided by your professor. Work done in another term (by you or somebody else), or work done by somebody else and not **clearly identified/cited** is considered plagiarism, in violation of the Academic Integrity Policy.Every file that you submit must contain (as a comment) at the top **your name**, **your Seneca email**, **Seneca Student ID** and the **date** when you completed the work.– If the file contains only your work, or work provided to you by your professor, add the following message as a comment at the top of the file:> I declare that this submission is the result of my own work and I only copied the code that my professor provided to complete my workshops and assignments. This submitted piece of work has not been shared with any other student or 3rd party content provider.– If the file contains work that is not yours (you found it online or somebody provided it to you), **write exactly which parts of the assignment are given to you as help, who gave it to you, or which source you received it from.** By doing this you will only lose the mark for the parts you got help for, and the person helping you will be clear of any wrong doing. 
## Compiling and Testing Your Program

All your code should be compiled using this command on `matrix`:

```bash
/usr/local/gcc/10.2.0/bin/g++ -Wall -std=c++17 -g -o ws file1.cpp file2.cpp …
```

- `-Wall`: compiler will report all warnings
- `-std=c++17`: the code will be compiled using the C++17 standard
- `-g`: the executable file will contain debugging symbols, allowing *valgrind* to create better reports
- `-o ws`: the compiled application will be named `ws`

After compiling and testing your code, run your program as follows to check for possible memory leaks (assuming your executable name is `ws`):

```bash
valgrind --show-error-list=yes --leak-check=full --show-leak-kinds=all --track-origins=yes ws
```

- `--show-error-list=yes`: show the list of detected errors
- `--leak-check=full`: check for all types of memory problems
- `--show-leak-kinds=all`: show all types of memory leaks identified (enabled by the previous flag)
- `--track-origins=yes`: tracks the origin of uninitialized values (`g++` must use the `-g` flag for compilation, so the information displayed here is meaningful)

To check the output, use a program that can compare text files. Search online for such a program for your platform, or use `diff` available on `matrix`.

## Multimedia Management System

In this application you code classes to load from files information about books, movies, and TV shows, create objects with the loaded information, and manage those objects using STL containers and algorithms. If something goes wrong during the execution, you will report it to clients using exceptions.

Put all the global variables, global functions/operator overloads, and types inside the `seneca` namespace and include the necessary guards in each header file.

### `settings` Module

The `settings` module will contain functionality regarding configuration of the application.
Design and code a structure named `Settings`; in the header, *declare* a global variable of this type named `g_settings` and define it in the implementation file.

For simplicity reasons, this type will contain only public data members and no member functions.

#### Public Members

- `m_maxSummaryWidth` – an integer in 2 bytes that will store the maximum width of text when printing the summary of a media item. By default, the width is 80.
- `m_tableView` – a Boolean attribute; when `true`, print to screen the information about the media items formatted as a table. By default, this attribute is `false`.

### `mediaItem` Module (supplied)

This module contains information about a generic multimedia item (a book, movie, or TV show).

**Do not modify this module!** Study the code supplied and make sure you understand it.

### `book` Module

Design and code a class named `Book` derived from `MediaItem` that can store the following information (for each attribute, choose any type that you think is appropriate–you must be able to justify the decisions you have made):

- `m_author`: the author of the book
- **title** (inherited)
- `m_country`: the country of publication
- **the year of publication** (inherited)
- `m_price`: the price of the book
- **the summary** (inherited): a short description of the book

#### Private Members

- add any constructors that are necessary for your design

This class will not offer any public constructors.

#### Public Members

- `void display(std::ostream& out) const override`: override this function to print the information about a single book. Use the following implementation:

```cpp
void display(std::ostream& out) const {
    if (g_settings.m_tableView) {
        out
```


[SOLVED] Oop345 workshop #2: rpg game in this workshop, you will create an application that implements a rudimentary structure

In this workshop, you will create an application that implements a rudimentary structure of an RPG game. The game will feature multiple classes of characters, each one capable of using weapons and special abilities in their quest to defeat their enemies. Each player will be part of a team and can join a guild for extra bonuses.

## Compiling and Testing Your Program

All your code should be compiled using this command on `matrix`:

```bash
/usr/local/gcc/10.2.0/bin/g++ -Wall -std=c++17 -g -o ws file1.cpp file2.cpp …
```

- `-Wall`: compiler will report all warnings
- `-std=c++17`: the code will be compiled using the C++17 standard
- `-g`: the executable file will contain debugging symbols, allowing *valgrind* to create better reports
- `-o ws`: the compiled application will be named `ws`

After compiling and testing your code, run your program as follows to check for possible memory leaks (assuming your executable name is `ws`):

```bash
valgrind --show-error-list=yes --leak-check=full --show-leak-kinds=all --track-origins=yes ws
```

- `--show-error-list=yes`: show the list of detected errors
- `--leak-check=full`: check for all types of memory problems
- `--show-leak-kinds=all`: show all types of memory leaks identified (enabled by the previous flag)
- `--track-origins=yes`: tracks the origin of uninitialized values (`g++` must use the `-g` flag for compilation, so the information displayed here is meaningful)

To check the output, use a program that can compare text files. Search online for such a program for your platform, or use `diff` available on `matrix`.

## Dictionary

This application will implement three different classes of characters that the player can choose at the beginning of the game and will simulate the fight between two players.

Put all the global variables, global functions/operator overloads, and types inside the `seneca` namespace.

### `tester_1` Module (supplied)

**Do not modify this module!** Study the code supplied and make sure you understand it.
### `abilities` Module (supplied)

This module contains the logic of some special abilities that a character can have in the game. For simplicity reasons, the logic is kept trivial.

**Do not modify this module!** Study the code supplied and make sure you understand it.

### `weapons` Module (supplied)

This module contains the logic of the weapons that a character can use in battle. For simplicity reasons, the logic is kept trivial.

**Do not modify this module!** Study the code supplied and make sure you understand it.

### `health` Module (supplied)

This module contains the logic related to handling the health of a character when taking damage. For simplicity reasons, the logic is kept trivial.

A character can have health stored as a single number, and healing or taking damage will be implemented as arithmetic operations; or it can have more complex behaviour:

- *infinite health*, where the health is never changed regardless of damage taken.
- *super health*, where the damage is halved.

A programmer could expand on this and implement more complex behaviour in future versions of the game.

**Do not modify this module!** Study the code supplied and make sure you understand it.

### `character` Module (supplied)

This module contains the interface that every character must implement. This class sits at the top of the *character* hierarchy. Look in the `character.h` file to see a description of each function available.

![The Character Hierarchy](characters.png)

**Do not modify this module!** Study the code supplied and make sure you understand it.

### `characterTpl` Module

Implement a templated class named `CharacterTpl`, derived from `Character` (supplied). The `CharacterTpl` will add health and health manipulation functions to a character. The health can have a fundamental numerical type (i.e., `int`, `double`, etc.)
for simple health operations, or a custom type (e.g., `seneca::InfinitHealth`, `seneca::SuperHealth`) allowing a programmer to implement complex behaviour in handling damage or recovery.

Template parameters:

- `T`: the type of the object storing the health.

#### Private Members

- `m_healthMax`: an integer representing the maximum health this character can have
- `m_health`: an object of type `T` representing the current health of the character. When this value gets to 0, the character dies.

#### Public Members

- a custom constructor that receives the name of the character and the maximum health; initializes the current instance with the values of the parameters and sets the current health to maximum.
- `void takeDamage(int dmg) override`: reduces the current health by the value of the parameter. **In this design, it is assumed that type `T` supports the `-=` operation.** After taking damage, if the character died, print:

```txt
[NAME] has been defeated!
```

If the character is still alive, print:

```txt
[NAME] took [DAMAGE] damage, [HEALTH] health remaining.
```

- `int getHealth() const override`: returns the current health. **In this design, it is assumed that `T` supports conversion to `int` using `static_cast`.**
- `int getHealthMax() const override`: returns the current maximum health.
- `void setHealth(int health) override`: sets the current health to the value received as parameter. **In this design, it is assumed that `T` has the assignment-from-`int` operator overloaded.**
- `void setHealthMax(int health) override`: sets the maximum health and current health to the value of the parameter.

### `barbarian` Module

Implement a templated class named `Barbarian`, derived from `CharacterTpl`. The `Barbarian` class is a concrete class implementing barbarian-specific logic.

Template parameters:

- `T`: the type of the object storing the health. This type is passed to the base class.
- `Ability_t`: the type implementing the special abilities that this barbarian has (e.g., `Fireball`, `Healing`, etc.).
- `Weapon_t`: the type implementing the weapons the barbarian will handle (e.g., `Sword`, `Bow`, etc.).

#### Private Members

- `m_baseDefense`: a number representing the basic defense of this character
- `m_baseAttack`: a number representing the basic attack power of this character
- `m_ability`: an object of type `Ability_t` representing the special ability of this character.
- `m_weapon`: an array of two objects of type `Weapon_t`, representing the two weapons the character can use in battle.

#### Public Members

- `Barbarian(const char* name, int healthMax, int baseAttack, int baseDefense, Weapon_t primaryWeapon, Weapon_t secondaryWeapon)`: initializes a new object to the values received as parameters.
- `int getAttackAmnt() const`: returns the damage that the character can do in an attack, using the formula:

```math
BASE\_ATTACK + \frac{WEAPON\_1\_DAMAGE}{2} + \frac{WEAPON\_2\_DAMAGE}{2}
```

In this design, it is assumed that the `Weapon_t` template type supports a conversion-to-`double` operator that will return the damage the weapon can do; this operator can be used with `static_cast`.

- `int getDefenseAmnt() const`: returns the base defense value.
- `Character* clone() const`: dynamically creates a copy of the current instance and returns its address to the client.
- `void attack(Character* enemy)`: attacks the enemy received as parameter and inflicts damage to it.
  - print:

    ```txt
    [NAME] is attacking [ENEMY_NAME].
    ```

  - use the special ability to activate any beneficial effects on self. **In this design, it is assumed that the type `Ability_t` has a member function named `useAbility(Character*)`** that will activate the special ability; call this function on the `m_ability` member and pass the address of the current instance as a parameter.
  - retrieve the damage this character can do using the function `getAttackAmnt`.
  - enhance the damage dealt with any effects that the special ability could apply.
**In this design, it is assumed that `Ability_t` has a member function named `transformDamageDealt(int&)`** that will enhance the damage this character can do; call this function on the `m_ability` member and pass the damage retrieved earlier.
  - print:

    ```txt
    Barbarian deals [ENHANCED_DAMAGE] melee damage!
    ```

  - apply the damage to the enemy, by calling the `takeDamage()` function on the parameter.
- `void takeDamage(int dmg)`: some other character inflicts damage to the current barbarian in the amount specified as parameter. This function will modify the damage received using the defense capabilities and the special ability, before calling the base class member to update the health.
  - print:

    ```txt
    [NAME] is attacked for [DAMAGE] damage.
    Barbarian has a defense of [DEFENSE]. Reducing damage received.
    ```

  - the barbarian is able to block some of the damage: subtract the defense amount from the parameter. The new value cannot be less than 0.
  - use the special ability to further reduce the damage taken. **In this design, it is assumed that `Ability_t` has a member function named `transformDamageReceived(int&)`** that could block more damage; call this function on the `m_ability` member and pass the damage calculated earlier.
  - call `takeDamage()` from the base class and pass the calculated damage to update the health after taking damage.

### `archer` Module

Implement a templated class named `Archer`, derived from `CharacterTpl` (all archers have super health). The `Archer` class is a concrete class implementing archer-specific logic.
Archers do not have any special ability.

Template parameters:

- `Weapon_t`: the type implementing the weapon the archer will handle (e.g., `Crossbow`, `Bow`, etc.).

#### Private Members

- `m_baseDefense`: a number representing the basic defense of this character
- `m_baseAttack`: a number representing the basic attack power of this character
- `m_weapon`: an object of type `Weapon_t` representing the weapon the character can use in battle.

#### Public Members

- `Archer(const char* name, int healthMax, int baseAttack, int baseDefense, Weapon_t weapon)`: initializes a new object to the values received as parameters.
- `int getAttackAmnt() const`: returns the damage that the character can do in an attack, using the formula:

```math
1.3 \times BASE\_ATTACK
```

In this implementation the weapon is ignored.

- `int getDefenseAmnt() const`: returns the defense of this archer, using the formula:

```math
1.2 \times BASE\_DEFENSE
```

- `Character* clone() const`: dynamically creates a copy of the current instance and returns its address to the client.
- `void attack(Character* enemy)`: attacks the enemy received as parameter and inflicts damage to it.
  - print:

    ```txt
    [NAME] is attacking [ENEMY_NAME].
    ```

  - retrieve the damage this character can do using the function `getAttackAmnt`.
  - print:

    ```txt
    Archer deals [ENHANCED_DAMAGE] ranged damage!
    ```

  - apply the damage to the enemy, by calling the `takeDamage()` function on the parameter.
- `void takeDamage(int dmg)`: some other character inflicts damage to the current archer in the amount specified as parameter. This function will modify the damage received using the defense capabilities, before calling the base class member to update the health.
  - print:

    ```txt
    [NAME] is attacked for [DAMAGE] damage.
    Archer has a defense of [DEFENSE]. Reducing damage received.
    ```

  - the archer is able to block some of the damage: subtract the defense amount from the parameter. The new value cannot be less than 0.
- call `takeDamage()` from the base class and pass the calculated damage to update the health after taking damage.

### `rogue` Module

Implement a templated class named `Rogue`, derived from `CharacterTpl`. The `Rogue` class is a concrete class implementing rogue-specific logic. A rogue will always use a dagger as a weapon, and two special abilities.

Template parameters:

- `T`: the type of the object storing the health. This type is passed to the base class.
- `FirstAbility_t`: the type implementing the first special ability that this rogue has (e.g., `Fireball`, `Healing`, etc.).
- `SecondAbility_t`: the type implementing the second special ability that this rogue has (e.g., `Fireball`, `Healing`, etc.).

#### Private Members

- `m_baseDefense`: a number representing the basic defense of this character
- `m_baseAttack`: a number representing the basic attack power of this character
- `m_firstAbility`: an object of type `FirstAbility_t` representing the first special ability of this character.
- `m_secondAbility`: an object of type `SecondAbility_t` representing the second special ability of this character.
- `m_weapon`: an object of type `seneca::Dagger`, representing the weapon the character can use in battle.

#### Public Members

- `Rogue(const char* name, int healthMax, int baseAttack, int baseDefense)`: initializes a new object to the values received as parameters.
- `int getAttackAmnt() const`: returns the damage that the character can do in an attack, using the formula:

```math
BASE\_ATTACK + 2 \times WEAPON\_DAMAGE
```

Use the conversion-to-`double` operator from class `seneca::Dagger` to find out how much damage the dagger can do; this operator can be used with `static_cast`.

- `int getDefenseAmnt() const`: returns the base defense value.
- `Character* clone() const`: dynamically creates a copy of the current instance and returns its address to the client.
- `void attack(Character* enemy)`: attacks the enemy received as parameter and inflicts damage to it.
  - print:

    ```txt
    [NAME] is attacking [ENEMY_NAME].
    ```

  - use the first special ability to activate any beneficial effects on self. **In this design, it is assumed that the type `FirstAbility_t` has a member function named `useAbility(Character*)`** that will activate the special ability; call this function on the `m_firstAbility` member and pass the address of the current instance as a parameter.
  - use the second special ability to activate any beneficial effects on self. **In this design, it is assumed that the type `SecondAbility_t` has a member function named `useAbility(Character*)`** that will activate the special ability; call this function on the `m_secondAbility` member and pass the address of the current instance as a parameter.
  - retrieve the damage this character can do using the function `getAttackAmnt`.
  - enhance the damage dealt with any effects that the first special ability could apply. **In this design, it is assumed that `FirstAbility_t` has a member function named `transformDamageDealt(int&)`** that will enhance the damage this character can do; call this function on the `m_firstAbility` member and pass the damage retrieved earlier.
  - enhance the damage dealt with any effects that the second special ability could apply. **In this design, it is assumed that `SecondAbility_t` has a member function named `transformDamageDealt(int&)`** that will enhance the damage this character can do; call this function on the `m_secondAbility` member and pass the damage calculated earlier.
  - print:

    ```txt
    Rogue deals [ENHANCED_DAMAGE] melee damage!
    ```

  - apply the damage to the enemy by calling the `takeDamage()` function on the parameter.
- `void takeDamage(int dmg)`: some other character inflicts damage to the current rogue in the amount specified as parameter. This function will modify the damage received using the defense capabilities and the special abilities, before calling the base class member to update the health:
  - print:

    ```txt
    [NAME] is attacked for [DAMAGE] damage.
    Rogue has a defense of [DEFENSE]. Reducing damage received.
    ```

  - the rogue is able to block some of the damage: subtract the defense amount from the parameter. The new value cannot be less than 0.
  - use the first special ability to further reduce the damage taken. **In this design, it is assumed that `FirstAbility_t` has a member function named `transformDamageReceived(int&)`** that could block more damage; call this function on the `m_firstAbility` member and pass the damage calculated earlier.
  - use the second special ability to further reduce the damage taken. **In this design, it is assumed that `SecondAbility_t` has a member function named `transformDamageReceived(int&)`** that could block more damage; call this function on the `m_secondAbility` member and pass the damage calculated earlier.
  - call `takeDamage()` from the base class and pass the calculated damage to update the health after taking damage.

### `team` Module

Design and code a class named `Team` that manages a dynamically allocated collection of characters *in the form of an array*. Because `Character` is abstract and cannot be instantiated, this class should work with an array of **pointers** to `Character`. At minimum, this class should store the address of the array and a string with the name of this team; add any other private members and any public special operations that your design requires.

The `Team` is in a **composition** relation with `Character`.

#### Public Members

- default constructor
- `Team(const char* name)`: creates a team with the name specified as parameter and no members.
- rule of 5
- `void addMember(const Character* c)`: adds the character received as parameter to the team ONLY IF the team doesn't have a character with the same name. Resize the array if necessary. Use the `Character::clone()` function to make a copy of the parameter.
- `void removeMember(const std::string& c)`: searches the team for a character with the name received as parameter and removes it from the team.
- `Character* operator[](size_t idx) const`: returns the character at the index specified as parameter, or null if the index is out of bounds.
- `void showMembers() const`: prints to screen the content of the current object in the format:

  ```txt
  [Team] TEAM_NAME
  1: FIRST_CHARACTER
  2: SECOND_CHARACTER
  3: THIRD_CHARACTER
  ...
  ```

  Use the `operator<<` overload to insert each character into the stream.

> Please note that a successful submission does not guarantee full credit for this workshop. If the professor is not satisfied with your implementation, your professor may ask you to resubmit. Resubmissions will attract a penalty.
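As a rough illustration of the `Archer` template described above, here is a minimal sketch. The `Character` and `Bow` stand-ins below are simplified assumptions for this example: the real workshop derives from `CharacterTpl`, tracks health through a template type, and requires `clone()`, all of which are omitted here.

```cpp
#include <iostream>
#include <string>

// Simplified stand-in for the course's Character base (assumption for this sketch).
struct Character {
    std::string m_name;
    int m_health;
    Character(const char* name, int healthMax) : m_name(name), m_health(healthMax) {}
    virtual ~Character() = default;
    virtual int getAttackAmnt() const = 0;
    virtual void attack(Character* enemy) = 0;
    virtual void takeDamage(int dmg) {          // update health, never below zero
        m_health -= dmg;
        if (m_health < 0) m_health = 0;
    }
};

struct Bow { };  // a weapon type; the Archer ignores it when computing damage

template <typename Weapon_t>
class Archer : public Character {
    int m_baseDefense;
    int m_baseAttack;
    Weapon_t m_weapon;
public:
    Archer(const char* name, int healthMax, int baseAttack, int baseDefense, Weapon_t weapon)
        : Character(name, healthMax), m_baseDefense(baseDefense),
          m_baseAttack(baseAttack), m_weapon(weapon) {}
    // 1.3 x BASE_ATTACK; the weapon is ignored in this implementation
    int getAttackAmnt() const override { return static_cast<int>(1.3 * m_baseAttack); }
    // 1.2 x BASE_DEFENSE
    int getDefenseAmnt() const { return static_cast<int>(1.2 * m_baseDefense); }
    void attack(Character* enemy) override {
        std::cout << m_name << " is attacking " << enemy->m_name << ".\n";
        int dmg = getAttackAmnt();
        std::cout << "Archer deals " << dmg << " ranged damage!\n";
        enemy->takeDamage(dmg);
    }
    void takeDamage(int dmg) override {
        std::cout << m_name << " is attacked for " << dmg << " damage. Archer has a defense of "
                  << getDefenseAmnt() << ". Reducing damage received.\n";
        dmg -= getDefenseAmnt();      // block some damage
        if (dmg < 0) dmg = 0;         // the new value cannot be less than 0
        Character::takeDamage(dmg);   // base class updates the health
    }
};
```

Note how the defense is applied in `takeDamage` before delegating to the base class: the clamping to zero means a fully blocked hit leaves the health untouched.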


# [SOLVED] OOP345 Workshop #1: Dictionary

In this workshop, you will create a dictionary application that will allow the client to find the definition(s) of a word in English. The application will load the list of words and their definitions from a text file; although the provided file is for the English language (a version of [*The Gutenberg Webster's Unabridged Dictionary*](https://www.gutenberg.org/ebooks/673) in `csv` format), the application should work with dictionaries in other languages.

## Compiling and Testing Your Program

All your code should be compiled using this command on `matrix`:

```bash
/usr/local/gcc/10.2.0/bin/g++ -Wall -std=c++17 -g -o ws file1.cpp file2.cpp ...
```

- `-Wall`: the compiler will report all warnings
- `-std=c++17`: the code will be compiled using the C++17 standard
- `-g`: the executable file will contain debugging symbols, allowing *valgrind* to create better reports
- `-o ws`: the compiled application will be named `ws`

After compiling and testing your code, run your program as follows to check for possible memory leaks (assuming your executable name is `ws`):

```bash
valgrind --show-error-list=yes --leak-check=full --show-leak-kinds=all --track-origins=yes ws
```

- `--show-error-list=yes`: show the list of detected errors
- `--leak-check=full`: check for all types of memory problems
- `--show-leak-kinds=all`: show all types of memory leaks identified (enabled by the previous flag)
- `--track-origins=yes`: tracks the origin of uninitialized values (`g++` must use the `-g` flag for compilation, so the information displayed here is meaningful)

To check the output, use a program that can compare text files. Search online for such a program for your platform, or use `diff`, available on `matrix`.

## Dictionary

This application loads a set of words from a file in `csv` format (comma-separated values), stores them in memory, and performs some operations on them. The application will also measure the duration of certain operations, allowing the user to compare their performance.
The time it takes to complete an operation will depend on the machine where the application is running; however, the relative difference in performance between various operations should hold regardless of the underlying hardware.

The input file will contain a large set of records; each record is stored on a single line in the format:

```txt
word,pos,definition
```

Where:

- `word` is the word being defined (might contain spaces).
- `pos` is the part of speech.
- `definition` is the definition of the word.

Put all the global variables, global functions/operator overloads, and types inside the `seneca` namespace.

### `tester_1` Module (supplied)

**Do not modify this module!** Study the code supplied and make sure you understand it.

### `settings` Module

The `settings` module will contain functionality regarding the configuration of the application. Design and code a structure named `Settings`; in the header, *declare* a global variable of this type named `g_settings` and define it in the implementation file.

For simplicity reasons, this type will contain only public data members and no member functions.

#### Public Members

- `m_show_all`: a Boolean attribute; when `true`, if a word has multiple definitions, all definitions should be printed on screen, otherwise only the first definition should be shown (default `false`).
- `m_verbose`: a Boolean attribute; when `true`, print to screen the part of speech of a word if it exists (default `false`).
- `m_time_units`: a `std::string` attribute; stores the time units to be used when printing the duration of various operations. Possible values are `seconds`, `milliseconds`, `microseconds`, `nanoseconds` (default `nanoseconds`).

### `event` Module

Design and code a class named `Event` that stores information about a single event that happened during the execution of the program.
At minimum, this class should store the name of the event (as a string) and its duration (as an object of type `std::chrono::nanoseconds`); add any other private members that your design requires.

#### Public Members

- a default constructor
- `Event(const char* name, const std::chrono::nanoseconds& duration)`: initializes the current instance with the values of the parameters.

#### Friend Helpers

- `operator<<`: inserts an `Event` object into an output stream.

> See that in the sample output the *move operations* are **many orders of magnitude** faster than the *copy operations*. If your output does not have such a significant difference in times, keep working on your implementation (the actual numbers will be different every time you run the application).

> [!CAUTION]
> Please note that a matching output is not a guarantee that the program is bug-free; it only means that in the specific tests this tester performed, no bugs/issues were identified. It is possible to write a tester that looks at other aspects of your code that will reveal bugs.

### Submission

To test and demonstrate execution of your program use the same data as shown in the sample output.

Upload the source code to your `matrix` account. Compile and run your code using the latest version of the `g++` compiler (available at `/usr/local/gcc/10.2.0/bin/g++`) and make sure that everything works properly.

Then, run the following command from your account (replace `profname.proflastname` with your professor's Seneca userid):

```bash
~profname.proflastname/submit 345_w1
```

and follow the instructions.

> [!WARNING]
> Please note that a successful submission does not guarantee full credit for this workshop. If the professor is not satisfied with your implementation, your professor may ask you to resubmit. Resubmissions will attract a penalty.


# [SOLVED] OOP244 Workshop #9: Derived Classes and Resources

In this workshop, you are to code/complete two classes:

- **Text**: a class that can load the contents of a text file into memory and insert it into an ostream.
- **HtmlText**: a **Text** class that has a title and can insert the text contents of the class into an ostream in simple HTML format.

## Learning Outcomes

Upon successful completion of this workshop, you will have demonstrated the abilities to:

- apply [the rule of three](https://en.wikipedia.org/wiki/Rule_of_three_(C%2B%2B_programming)) to a class and its derived class
- use the skills you have acquired throughout the semester to read a file into dynamically allocated memory
- describe what you have learned in completing this workshop

## Submission Policy

The workshop is divided into one coding part and one non-coding part:

- Part 1 (**LAB**): a step-by-step guided workshop, worth 100% of the workshop's total mark

  > Please note that the part 1 section is **not to be started in your first session of the week**. You should start it on your own before the day of your class and join the first session of the week to ask for help and correct your mistakes (if there are any).

- Part 2 (reflection): non-coding part. The reflection doesn't have marks associated with it but can incur a **penalty of max 40% of the whole workshop's mark** if your professor deems it insufficient (you make your marks from the code, but you can lose some on the reflection).

## Due Dates

Part 1 (lab) and Part 2 (Reflection) are each due 2 days after your lab day. The due dates depend on your section. Please use the `-due` option of the submitter program to see the exact due date of your section:

> Note that the submission usually opens by the end of Monday.
```bash
~profname.proflastname/submit 2??/wX/pY_sss -due
```

- Replace **??** with your subject code (`00` or `44`)
- Replace **X** with the workshop number: `1` to `10`
- Replace **Y** with the part number: `1` or `2`
- Replace **sss** with the section: `naa`, `nbb`, `nra`, `zaa`, etc.

## Late Penalties

You are allowed to submit your work up to 2 days after the due date with a 50% penalty for each day. After that, the submission will be closed and the mark will be zero.

## Citation

Every file that you submit must contain (as a comment) at the top: **your name**, **your Seneca email**, **your Seneca Student ID** and the **date** when you completed the work.

### For work that is done entirely by you (ONLY YOU)

If the file contains only your work or the work provided to you by your professor, add the following message as a comment at the top of the file:

> I have done all the coding by myself and only copied the code that my professor provided to complete my workshops and assignments.

### For work that is done partially by you

If the file contains work that is not yours (you found it online or somebody provided it to you), **write exactly which part of the assignment was given to you as help, who gave it to you, or which source you received it from.** By doing this you will only lose the mark for the parts you got help for, and the person helping you will be clear of any wrongdoing.

> - Add the citation to the file in which you have the borrowed code.
> - In the `reflect.txt` submission of part 2 (Reflection), add exactly what was added to which file and from where (or whom).

> :warning: This [Submission Policy](#submission-policy) only applies to the workshops. All other assessments in this subject have their own submission policies.

### If you have helped someone with your code
Let them know of these regulations, and in your `reflect.txt` of part 2 (Reflection), write exactly which part of your code was copied and who was the recipient of this code. By doing this you will be clear of any wrongdoing if the recipient of the code does not honour these regulations.

## Compiling and Testing Your Program

All your code should be compiled using this command on `matrix`:

```bash
g++ -Wall -std=c++11 -g -o ws file1.cpp file2.cpp ...
```

- `-Wall`: the compiler will report all warnings
- `-std=c++11`: the code will be compiled using the C++11 standard
- `-g`: the executable file will contain debugging symbols, allowing *valgrind* to create better reports
- `-o ws`: the compiled application will be named `ws`

After compiling and testing your code, run your program as follows to check for possible memory leaks (assuming your executable name is `ws`):

```bash
valgrind --show-error-list=yes --leak-check=full --show-leak-kinds=all --track-origins=yes ws
```

- `--show-error-list=yes`: show the list of detected errors
- `--leak-check=full`: check for all types of memory problems
- `--show-leak-kinds=all`: show all types of memory leaks identified (enabled by the previous flag)
- `--track-origins=yes`: tracks the origin of uninitialized values (`g++` must use the `-g` flag for compilation, so the information displayed here is meaningful)

To check the output, use a program that can compare text files. Search online for such a program for your platform, or use *diff*, available on `matrix`.

> Note: All the code written in workshops and the project must be implemented in the **seneca** namespace, unless instructed otherwise.

### Custom code submission

If you have any additional custom code (i.e.
functions, classes, etc.) that you want to reuse in the workshop, save them under a module called Utils (`Utils.cpp` and `Utils.h`) and submit them with your workshop using the instructions in the "[Submitting Utils Module](#submitting-utils-module)" section.

# Part 1 – LAB (100%)

## Text class

The **Text** class is created using a file name. If the name is not null, the class stores it dynamically in memory and reads the file from the disk into dynamically allocated memory (using the member function **read**).

```C++
class Text {
   char* m_filename;
   char* m_content;
   int getFileLength() const; // code provided in Text.cpp
protected:
   const char& operator[](int index) const;
public:
   Text(const char* filename = nullptr);
   // implement the rule of three here
   void read();
   virtual void write(std::ostream& os) const;
};
// prototype of the insertion overload into ostream goes here
```

### Properties

#### m_filename (private)

Holds the name of the file dynamically.

#### m_content (private)

Holds the content of the text file dynamically.

### Mandatory functionalities

> If anything goes wrong in setting up the class or reading a file, the object will be set to an empty state.

#### const char& operator[](int index) const

This index operator provides read-only access to the content of the text for the derived classes of Text. The behaviour of the operator is not defined if the index goes out of bounds.

#### The rule of three

Implement the rule of three so memory is managed properly in case of copying and assignment.

#### int getFileLength() const

Code provided (in `Text.cpp`); it returns the length (size) of the text file on the disk.
It returns zero if either the file does not exist or the content is empty.

#### void read()

First, `read` will delete the current content, then allocate memory equal to the [size of the file on the disk](#int-getfilelengthconst) + 1 (for the null byte). Then it will read the contents of the file character by character into the newly allocated memory and terminate it with a null byte at the end.

#### virtual void write(std::ostream& os) const

This virtual function will insert the content of the Text class into the ostream if m_content is not null.

#### Insertion overload into ostream

Overload the insertion operator to insert a Text object into an ostream.

### Usage Sample

If the file **test.txt** has the following content:

```text
abc
defg
```

and we have the following code snippet:

```C++
Text T("test.txt");
Text Y("whatever.txt"); // whatever.txt can exist or not
Text Z;
Y = T;
Z = Y;
Text X = Z;
cout << X;
```

> :warning: **Important:** Please note that a successful submission does not guarantee full credit for this workshop. If the professor is not satisfied with your implementation, your professor may ask you to resubmit. Re-submissions will attract a penalty.


# [SOLVED] OOP244 Workshop #8: Virtual Functions and Abstract Base Classes

In this workshop, you will create a hierarchy of classes to practice and understand the role of virtual functions in inheritance. The workshop consists of 4 classes:

- **Shape**: encapsulates a shape that can be drawn on the screen (an interface, that is, an abstract base class with only pure virtual functions)
- **LblShape**: encapsulates a shape that can be labelled (an abstract base class that represents a labelled shape)
- **Line**: encapsulates a horizontal line on a screen with a label (this concrete class draws a labelled line)
- **Rectangle**: encapsulates a rectangle on the screen that can be labelled (this concrete class draws a rectangle with a label inside)

## Learning Outcomes

Upon successful completion of this workshop, you will have demonstrated the abilities to:

- define pure virtual functions
- create abstract base classes
- implement behaviour using virtual functions
- explain the difference between an abstract base class and a concrete class
- describe what you have learned in completing this workshop

## Submission Policy

The workshop is divided into one coding part and one non-coding part:

- Part 1 (**LAB**): a step-by-step guided workshop, worth 100% of the workshop's total mark

  > Please note that the part 1 section is **not to be started in your first session of the week**. You should start it on your own before the day of your class and join the first session of the week to ask for help and correct your mistakes (if there are any).

- Part 2 (reflection): non-coding part. The reflection doesn't have marks associated with it but can incur a **penalty of max 40% of the whole workshop's mark** if your professor deems it insufficient (you make your marks from the code, but you can lose some on the reflection).

## Due Dates

Part 1 (lab) and Part 2 (Reflection) are each due 2 days after your lab day. The due dates depend on your section.
Please use the `-due` option of the submitter program to see the exact due date of your section:

> Note that the submission usually opens by the end of Monday.

```bash
~profname.proflastname/submit 2??/wX/pY_sss -due
```

- Replace **??** with your subject code (`00` or `44`)
- Replace **X** with the workshop number: `1` to `10`
- Replace **Y** with the part number: `1` or `2`
- Replace **sss** with the section: `naa`, `nbb`, `nra`, `zaa`, etc.

## Late Penalties

You are allowed to submit your work up to 2 days after the due date with a 50% penalty for each day. After that, the submission will be closed and the mark will be zero.

## Citation

Every file that you submit must contain (as a comment) at the top: **your name**, **your Seneca email**, **your Seneca Student ID** and the **date** when you completed the work.

### For work that is done entirely by you (ONLY YOU)

If the file contains only your work or the work provided to you by your professor, add the following message as a comment at the top of the file:

> I have done all the coding by myself and only copied the code that my professor provided to complete my workshops and assignments.

### For work that is done partially by you

If the file contains work that is not yours (you found it online or somebody provided it to you), **write exactly which part of the assignment was given to you as help, who gave it to you, or which source you received it from.** By doing this you will only lose the mark for the parts you got help for, and the person helping you will be clear of any wrongdoing.

> - Add the citation to the file in which you have the borrowed code.
> - In the `reflect.txt` submission of part 2 (Reflection), add exactly what was added to which file and from where (or whom).

> :warning: This [Submission Policy](#submission-policy) only applies to the workshops. All other assessments in this subject have their own submission policies.

### If you have helped someone with your code
Let them know of these regulations, and in your `reflect.txt` of part 2 (Reflection), write exactly which part of your code was copied and who was the recipient of this code. By doing this you will be clear of any wrongdoing if the recipient of the code does not honour these regulations.

## Compiling and Testing Your Program

All your code should be compiled using this command on `matrix`:

```bash
g++ -Wall -std=c++11 -g -o ws file1.cpp file2.cpp ...
```

- `-Wall`: the compiler will report all warnings
- `-std=c++11`: the code will be compiled using the C++11 standard
- `-g`: the executable file will contain debugging symbols, allowing *valgrind* to create better reports
- `-o ws`: the compiled application will be named `ws`

After compiling and testing your code, run your program as follows to check for possible memory leaks (assuming your executable name is `ws`):

```bash
valgrind --show-error-list=yes --leak-check=full --show-leak-kinds=all --track-origins=yes ws
```

- `--show-error-list=yes`: show the list of detected errors
- `--leak-check=full`: check for all types of memory problems
- `--show-leak-kinds=all`: show all types of memory leaks identified (enabled by the previous flag)
- `--track-origins=yes`: tracks the origin of uninitialized values (`g++` must use the `-g` flag for compilation, so the information displayed here is meaningful)

To check the output, use a program that can compare text files. Search online for such a program for your platform, or use *diff*, available on `matrix`.

> Note: All the code written in workshops and the project must be implemented in the **seneca** namespace, unless instructed otherwise.

### Custom code submission

If you have any additional custom code (i.e.
functions, classes, etc.) that you want to reuse in the workshop, save them under a module called Utils (`Utils.cpp` and `Utils.h`) and submit them with your workshop using the instructions in the "[Submitting Utils Module](#submitting-utils-module)" section.

# Part 1 – LAB (100%)

Implement four modules for the following classes: **Shape, LblShape, Line** and **Rectangle**.

![Classes](images/classes.png)

## 1- The `Shape` interface

### Create the following two [pure virtual functions](https://intro2oop.sdds.ca/E-Polymorphism/abstract-base-classes#pure-virtual-function):

> In C++, a [pure virtual function](https://intro2oop.sdds.ca/E-Polymorphism/abstract-base-classes#pure-virtual-function) is a virtual function that has no implementation in the base class. It is declared by setting its prototype to zero (`= 0;`) in the class declaration. This makes the class an interface, which cannot be instantiated. Any derived class must provide an implementation for this function, unless the derived class is also abstract.

#### draw

Returns void and receives a reference to **ostream** as an argument. This pure virtual function cannot modify the current object.

#### getSpecs

Returns void and receives a reference to **istream** as an argument.

### `destructor`

Create a default virtual destructor for the Shape interface.

> To create a default virtual destructor for a class, declare it in the class definition using `virtual ~ClassName() = default;`. This tells the compiler to generate a default destructor. It guarantees that any dynamically allocated object of a class derived from the Shape interface, pointed to by a Shape pointer, will be removed properly from memory when deleted. No implementation is required in this case, as the compiler generates it automatically.
This is a good practice in object-oriented programming in C++.

### `Shape` helper functions

Overload the insertion and extraction operators (using the draw and getSpecs functions) so any shape object can be written or read using ostream and istream.

## 2- The `LblShape` abstract class (the Labelled Shape class)

Inherit an abstract class called `LblShape` from the interface `Shape`. This class adds a label to a `Shape`. It will implement the pure virtual function **getSpecs** but not the **draw** function; therefore it remains abstract.

### Private member variable

Add a character pointer member variable called **m_label** and initialize it to null. This member variable will hold the dynamically allocated label of the `Shape`.

### Protected members

#### label()

Add a query called **label** that returns the unmodifiable value of the m_label member variable.

### Public members

#### Default (no argument) constructor

Sets the label pointer to null. (You don't need to do this if **m_label** is already initialized to null.)

#### One argument constructor

Allocates memory large enough to hold the incoming Cstring argument, pointed to by the **m_label** member variable, then copies the Cstring argument into the newly allocated memory.

#### Destructor

Deletes the memory pointed to by the **m_label** member variable.

#### Deleted actions

The copy constructor and assignment operator are deleted to prevent copying or assignment of instances of this class.

#### getSpecs

Reads a comma-delimited Cstring from istream: override the **Shape::getSpecs** pure virtual function to receive a Cstring (a label) from **istream** up to the **','** character (and then extract and ignore the **comma**).
Afterward, follow the same logic as in the one argument constructor: allocate memory large enough to hold the Cstring and copy the Cstring into the newly allocated memory.

## 3- The `Line` concrete class

Line inherits from the **LblShape** class to create a horizontal line with a label.

### Private member variable

Create a member variable called **m_length** to hold the length of the **Line** in characters.

#### Default (no argument) constructor

Sets the **m_length** member variable to zero and invokes the default constructor of the base class.

#### Two argument constructor

Receives a Cstring and a value for the length of the line. Passes the Cstring to the constructor of the base class and sets the **m_length** member variable to the value of the second argument.

#### Destructor

This class has no destructor implemented.

#### getSpecs

Reads the comma-separated specs of the **Line** from istream. This function overrides the **getSpecs** function of the base class as follows: first, it calls the **getSpecs** function of the base class, then it reads the value of the m_length attribute from the istream argument, and then it ignores the rest of the characters up to and including the newline character `'\n'`.

#### draw

This function overrides the draw function of the base class. If the **m_length** member variable is greater than zero and **label()** is not null, this function first prints **label()** and then goes to a new line. Afterwards, it prints the **'='** (assignment) character **m_length** times.
Otherwise, it will take no action.

For example, if the Cstring returned by the label query is "Separator" and the length is 40, the draw function should insert the following into ostream:

```Text
Separator
========================================
```

## 4- The `Rectangle` concrete class

The Rectangle class inherits from the **LblShape** class to create a frame with a label inside.

### Private member variables

Create two member variables called **m_width** and **m_height** to hold the width and the height of a rectangular frame (in number of characters).

#### Default (no argument) constructor

Sets the width and height member variables to zero. It also invokes the default constructor of the base class.

#### Three argument constructor

Receives a Cstring for the label, and two values for the width and height of the **Rectangle** from the argument list. Passes the Cstring to the constructor of the base class and sets the **m_width** and **m_height** member variables to the corresponding values received from the argument list. However, if **m_height** is less than 3 or **m_width** is less than the length of **label() + 2**, it sets the Rectangle to an empty state.

#### Destructor

This class has no destructor implemented.

#### getSpecs

Reads the comma-separated specs of the **Rectangle** from istream. This function overrides the **getSpecs** function of the base class as follows: first, it calls the **getSpecs** function of the base class, then it reads two comma-separated values from istream for **m_width** and **m_height**, and then it ignores the rest of the characters up to and including the newline character `'\n'`.

#### draw

This function overrides the draw function of the base class.
If the Rectangle is not in an empty state, this function draws a rectangle with a label inside, as follows; otherwise, it does nothing:

- First line: prints '+', then prints the '-' character (m_width - 2) times, then prints '+' and goes to a new line.
- Second line: prints '|', then prints the **label()** left-justified in (m_width - 2) spaces, then prints '|' and goes to a new line.
- Next (m_height - 3) lines: prints '|', then (m_width - 2) spaces, then '|' and goes to a new line.
- Last line: exactly like the first line.

For example, if the Cstring returned by the label query is "Container", the width is 30 and the height is 5, this function should insert the following into ostream:

```Text
+----------------------------+
|Container                   |
|                            |
|                            |
+----------------------------+
```

## `main` Module (supplied)

[main.cpp](lab/main.cpp)

**Do not modify this module!** Walk through the code and make sure you understand it.

### Correct output

[correct_output.txt](lab/correct_output.txt)

## PART 1 Submission

### Files to submit

```Text
Shape.h
Shape.cpp
LblShape.h
LblShape.cpp
Line.h
Line.cpp
Rectangle.h
Rectangle.cpp
main.cpp
```

### Submission Process:

Upload the files listed above to your `matrix` account.
Compile and run your code using the `g++` compiler as shown in [Compiling and Testing Your Program](#compiling-and-testing-your-program) and make sure that everything works properly. Then, run the following command from your matrix account:

```bash
~profname.proflastname/submit 2??/wX/pY_sss
```

- Replace **??** with your subject code (`00 or 44`)
- Replace **X** with the Workshop number: [`1 to 10`]
- Replace **Y** with the part number: [`1 or 2`]
- Replace **sss** with the section: [`naa, nbb, nra, zaa, etc...`]

and follow the instructions.

#### Submitting the Utils Module

To have your custom Utils module compiled with your workshop and submitted, add a **u** to the part number of your workshop (i.e. **u**p1 for part one and **u**p2 for part two) and issue the following submission command instead of the above:

```text
~profname.proflastname/submit 2??/wX/upY_sss
```

See the [Custom Code Submission](#custom-code-submission) section for more detail.

> :warning: **Important:** Please note that a successful submission does not guarantee full credit for this workshop. If the professor is not satisfied with your implementation, your professor may ask you to resubmit. Re-submissions will attract a penalty.

# Part 2: Reflection

Study your final solutions for each deliverable of the workshop **and the most recent milestones of the project if applicable**, reread the related parts of the course notes, and make sure that you have understood the concepts covered by this workshop.
**This should take no less than 30 minutes of your time, and the result is suggested to be between 150 and 300 words in length.**

Create a file named `reflect.txt` that contains your detailed description of the topics that you have learned in completing this workshop and **the project milestones if applicable**, and mention any issues that caused you difficulty.

### Reflection Submission Process:

Upload `reflect.txt` containing the reflection to matrix. Then, run the following command from your matrix account:

```bash
~profname.proflastname/submit 2??/wX/pY_sss
```

- Replace **??** with your subject code (`00 or 44`)
- Replace **X** with the Workshop number: [`1 to 10`]
- Replace **Y** with the part number: [`1 or 2`]
- Replace **sss** with the section: [`naa, nbb, nra, zaa, etc...`]

and follow the instructions.

> :warning: **Important:** Please note that a successful submission does not guarantee full credit for this workshop. If the professor is not satisfied with your implementation, your professor may ask you to resubmit. Re-submissions will attract a penalty.
