Coursework 1: Student examples
The following examples are extracts from previous students' coursework, provided as guidance on the standards expected. Full coursework
examples are not provided, both to avoid potential issues with copying (plagiarism) and because the coursework changes each year, so no
single past coursework matches the current specification.
These are examples of actual student work and not 'templates' to copy.
The coursework specifications vary each year; do not assume that the examples comply with the guidance in this year's specification.
Always refer to the current specification.
Start work early on your coursework and use the tutorials to gain formative feedback.
The 'good' examples are drawn from 2:1 or distinction responses, but marks are not provided.
The examples are accompanied by a brief comment to explain what was considered either "good" or "could be improved" about the given
example.
Past students have published their code with 'public' visibility on GitHub; you must not copy their code. Copying from other students, past or
present, is not appropriate even when correctly cited.
Section 1: Data exploration and preparation
Please note that last year this section was not separated into understanding and preparation in the same way as it is this year. The following
examples will give you an understanding of the standard expected, but they do not exactly match what you are asked to do this year.
General guidance
Describe and explain the steps you took, your findings, and any decisions you made.
For example, if you identify code quality issues then explain how you addressed these; or if you chose not to address them, then explain the
reason for not addressing them.
Do not focus on interpreting the data as if for a particular audience. For example, comments such as "I created _X_ chart to explore the range of
values of _Y_ variable and found that there were _Z_ outliers" are relevant in the context of this coursework; comments such as "The data shows
that more people migrated from London in 2023 than in 2022" are not relevant in the context of this coursework.
Example 1: Boundary between High pass/Merit
Feedback: "The code shows some understanding of the use of pandas, though you could have done more to describe the data using the pandas
functions to show size, data types, ranges of values, etc. There is some attempt at adding structure in functions though it's a little jumbled. Try
and separate out the functions from the 'main' where you then call the functions."
It wasn't clear why the student commented out the code to create the charts.
The text supported the written code and evidenced that the student had gained a good understanding through applying the code; however, the
code and explanation combined were not sufficient to attain a higher mark.
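As an illustration of the feedback above, a minimal sketch of the kind of pandas summary it refers to might look like the following (the file name and the choice of calls are illustrative, not taken from the student's submission):

import pandas as pd

df = pd.read_csv('dataset.csv')  # illustrative file name

print(df.shape)         # number of rows and columns
print(df.dtypes)        # data type of each column
print(df.describe())    # ranges and summary statistics for numeric columns
print(df.isna().sum())  # number of missing values per column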
Student's code (parts removed to reduce the length of this page)
import pandas as pd
import matplotlib.pyplot as plt

if __name__ == '__main__':
    df = pd.read_csv('dataset.csv')

    # first, lets translate all the data from german to english
    def ger_to_eng(dataframe):
        column_rename_map = {
            "StichtagDatJahr": "ReferenceYear",
            "DatenstandCd": "Status",
            ...removed...
        }
        dataframe.rename(columns = column_rename_map, inplace = True)
        word_mapping = {
            'männlich': 'male',
            'weiblich': 'female',
            '10- bis 19-Jährige': '10-19 years old',
            ...removed...
            # add more translations as needed
        }
        dataframe.replace(word_mapping, inplace=True)

    # second, cleaning up the data
    def del_redundant_cols(dataframe):
        # i'll be removing columns which i deem redundant
        del_columns = ['AgeGroupCode', 'DogAgeGroupLong', #'DogAgeGroupSort',
                       'GenderCode', 'DogGenderCode', 'BreedCode']
        # delete specified columns
        for col in del_columns:
            del dataframe[col]

    # next let's examine the data a bit
    def dog_age_check(dataframe):
        age_tally = dataframe['DogAgeGroupCode'].value_counts().sort_index()
        age_tally = age_tally.drop(999, errors='ignore') # accounts for the unknown entries, which default to 999, removing them
        # plotting a bar chart
        plt.bar(age_tally.index, age_tally.values, color='skyblue')
        plt.xlabel('Age')
        plt.ylabel('Amount of Dogs')
        plt.title('Age of Dogs Tabulated')
        plt.show()

    '''def owned_dog_count(dataframe):
        # groupby OwnerID and sum up every number of dogs tied to that owner
        dogs_per_person = dataframe.groupby('OwnerID')['NumberOfDogs'].sum().reset_index()
        # Display the result
        plt.figure(figsize=(10, 6))
        dogs_per_person.plot(kind='bar', color='skyblue')
        plt.title(f'')
        plt.xlabel('Number of owned dogs')
        plt.ylabel('Frequency')
        plt.show()
        plt.bar(dogs_per_person.index, dogs_per_person.values, color='skyblue')
        plt.xlabel('Number of owned dogs')
        plt.ylabel('Frequency')
        plt.title('Number of dogs owned by a single person')
        plt.show()
        plt.figure(figsize=(10, 6))
        plt.bar(dogs_per_person.index, dogs_per_person, color='skyblue')
        plt.yscale('log') # Set y-axis to logarithmic scale
        plt.title(f'Frequency Bar Graph for num_dog')
        plt.xlabel("num of dogs")
        plt.ylabel('Logarithmic Frequency')
        plt.show()

    def create_dog_bar_chart(dataframe):
        owner_counts = dataframe.groupby('OwnerID')['NumberOfDogs'].nunique().reset_index()
        # Plotting the bar chart
        plt.bar(owner_counts['NumberOfDogs'], owner_counts['OwnerID'])
        plt.xlabel('Number of Dogs Owned')
        plt.ylabel('Number of Owners')
        plt.title('Number of Owners for Each Number of Dogs Owned')
        plt.show()'''

    ger_to_eng(df)
    del_redundant_cols(df)
    dog_age_check(df)
    df.to_csv('dataset_prepared.csv')
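For comparison, a minimal sketch of the structure the feedback suggests, with function definitions at module level that are then called from the 'main' block, might look like this (function bodies are omitted; the names are taken from the student's code above):

import pandas as pd
import matplotlib.pyplot as plt


def ger_to_eng(dataframe):
    """Translate German column names and values to English."""
    ...


def del_redundant_cols(dataframe):
    """Drop columns that are not needed for the analysis."""
    ...


def dog_age_check(dataframe):
    """Plot the distribution of dog ages."""
    ...


if __name__ == '__main__':
    df = pd.read_csv('dataset.csv')
    ger_to_eng(df)
    del_redundant_cols(df)
    dog_age_check(df)
    df.to_csv('dataset_prepared.csv')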
Example 2: Distinction
Feedback: "Great use of functions, comments and docstrings. Some great work on cleaning the data, especially considering how you would
detect outliers. Great visualisation of the data, what does this mean for your product??"
The code from the functions below has been removed; however, the student went beyond what was taught in the course.
Code:
import math
import matplotlib.pyplot as plt
import pandas as pd
import warnings
from datetime import timedelta

warnings.simplefilter(action="ignore", category=FutureWarning)
# ignore the waring from df.approve


def print_general_statistics(df):
    """
    print the general information about the dataframe;
    print first 5 rows and all the columns of the data frame;
    demonstrate number of row and column of the data frame;
    print the data types; general statics information of the data frame

    Args:
        df: The data frame imported.
    """
    pd.set_option(
        "display.max_columns", None
    )  # set all the columns visible in the terminal printing
    pd.set_option("display.width", None)
    print("\nthe first 5 rows of dataframe :\n")
    print(df.head(12))
    print("\nThe Rows and Columns number:\n")
    print("\nRow Number :" + str(df.shape[0]))
    print("\nColumn Number :" + str(df.shape[1]))
    print("\nColumn data types:\n")
    print(df.dtypes)
    print("\nStatistics:\n")
    print(df.describe())  # Add your code inside the brackets


def null_data_detection(df):
def time_stamp_format_convert(df):
def breaking_point_detection(df):
def interpolation(df):
def timestamp_delete(df):
def outlier_detection(df, window_size=5, threshold=3):
def different_activity_frame_division(df):
def statics_histogram(df, name):
def statistics_boxplot(df, name):
def smoothing(df):
def smoothing_all(df):


if __name__ == '__main__':
    # read dataframe from csv
    df_raw = pd.read_csv("dataset.csv")
    # detect whether there is null data from the dataframe
    # print the general statistics
    print_general_statistics(df_raw)
    # detect null values
    df_after_null_preprocess = null_data_detection(df_raw)
    # convert timestamp to datetype and allow the after calculation
    df_after_time_stamp_convert = time_stamp_format_convert(df_after_null_preprocess)
    print_general_statistics(df_after_time_stamp_convert)
    # detect whether index =20928 has varied or not
    print(df_after_time_stamp_convert.loc[20928, "timestamp"])
    # delete same timestamp in the dataframe
    df_after_delete = timestamp_delete(df_after_time_stamp_convert)
    # interpolate lost data between the datapoints
    df_interpolation = interpolation(df_after_delete)
    # divide dataframe according to Activity
    df_activity_0, df_activity_1 = different_activity_frame_division(df_interpolation)
    # detect the outliers
    outlier_detection(df_activity_0)
    outlier_detection(df_activity_1)
    # draw the figures
    statics_histogram(df_activity_0, "Activity = 0")
    statics_histogram(df_activity_1, "Activity = 1")
    statistics_boxplot(df_activity_0, "Activity = 0")
    statistics_boxplot(df_activity_1, "Activity = 1")
    # smoothing the data frame
    df_final = smoothing_all(df_interpolation)
    df_final = df_final.drop(columns="timestamp_datetype")
    df_final.to_csv("output_file.csv", index=True)
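The body of outlier_detection(df, window_size=5, threshold=3) was removed above and is not reproduced here. One common approach that is consistent with that signature is a rolling z-score check; the sketch below is a generic illustration of that idea (the 'value' column name is hypothetical), not the student's implementation:

import pandas as pd


def outlier_detection(df, window_size=5, threshold=3, column='value'):
    """Flag rows whose value deviates from the rolling mean by more than
    `threshold` rolling standard deviations."""
    rolling_mean = df[column].rolling(window=window_size, center=True).mean()
    rolling_std = df[column].rolling(window=window_size, center=True).std()
    z_scores = (df[column] - rolling_mean) / rolling_std
    outliers = df[z_scores.abs() > threshold]
    print(f"{len(outliers)} potential outliers found in '{column}'")
    return outliers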
Extract from the report:
This is just the first part, so you can see the difference in the quality of the explanations. The student went on to explain the actions taken in each
of their functions, though some of the text simply described the code and was therefore unnecessary.
Section 2: Database design and preparation
Please note that this section is substantially different in 2024.
Database design was previously included as part of the application design in coursework 2, not coursework 1. The design of the database covered
the requirements for the students' app rather than the data set, so it differs from what you are asked to do.
Students were not taught first, second and third normal forms, so the expectations of their coursework were lower and the grades awarded were
higher than would be given for the same work this year. I have tried to comment on the implications.
Preparation of the database is new to the coursework this year, so there are no student examples from COMP0035. Students created the
database in COMP0034 and used different libraries. The students who achieved a higher grade showed originality in their code, tackled
databases with multiple tables, and provided well-structured and documented code; those who achieved a high pass tended to copy the
tutor's example code from the tutorials and make minor changes to adapt it to their data.
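As a rough illustration only (not taken from any student submission, and with hypothetical table and column names), preparing a database with more than one table from a prepared dataframe could look something like the following sketch using sqlite3 and pandas:

import sqlite3

import pandas as pd

df = pd.read_csv('dataset_prepared.csv')  # hypothetical prepared data

con = sqlite3.connect('dataset.db')
con.execute("PRAGMA foreign_keys = ON")

# Two normalised tables: each owner is stored once, each dog references its owner.
con.execute("""
    CREATE TABLE IF NOT EXISTS owner (
        owner_id INTEGER PRIMARY KEY,
        district TEXT
    )""")
con.execute("""
    CREATE TABLE IF NOT EXISTS dog (
        dog_id INTEGER PRIMARY KEY AUTOINCREMENT,
        owner_id INTEGER NOT NULL,
        breed TEXT,
        age_group TEXT,
        FOREIGN KEY (owner_id) REFERENCES owner (owner_id)
    )""")

# Insert the data, dropping duplicate owners so the owner table has one row per owner.
owners = (df[['OwnerID', 'District']]
          .drop_duplicates()
          .rename(columns={'OwnerID': 'owner_id', 'District': 'district'}))
dogs = (df[['OwnerID', 'Breed', 'DogAgeGroup']]
        .rename(columns={'OwnerID': 'owner_id', 'Breed': 'breed',
                         'DogAgeGroup': 'age_group'}))

owners.to_sql('owner', con, if_exists='append', index=False)
dogs.to_sql('dog', con, if_exists='append', index=False)

con.commit()
con.close()

Splitting the owner attributes into their own table is what moves the data towards the normal forms discussed above: attributes that depend only on the owner are no longer repeated for every dog.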
Example 1: High pass
Feedback: "The ERD is a little confusing, why are you storing the account data in two places? The account data is enough. If you plan to create
the activity chart then you probably need a new table with attributes such as user_id, date/time, route they accessed. I assume the Visualisation
Chart is really the demographic data so the table probably just needs a more meaningful name. A good attempt at each aspect of the design
with a few areas that could be improved."
Example 2: High Merit / Distinction boundary
Feedback given: "The ERD is well drawn and shows an understanding of normalisation. It is consistent with other aspects of the design."
Note that last year students were not taught 1NF, 2NF and 3NF, so this coursework evidenced that the student had carried out some
independent research to understand normalisation, though this is a copy and paste of the normalisation criteria rather than an explanation of
how these were applied in the context of this design.
Extract from student's PDF:
Example 3: Excellent (>70)
Feedback: "The development of the ERD similarly shows a clear grasp of the concepts of database design and normalisation and the resulting
design is well presented and appropriate. This is an excellent coursework that not only evidences mastery of the techniques taught but also
clearly evidences an excellent grasp of the implications of the concepts and extensive additional reading."
This student provided a detailed explanation of the steps and decisions they made at each stage of normalisation, discussing the implications of
the choices they made. Their work evidenced that they understood and carefully applied the concepts. This far exceeded what was taught within
the course last year.
Student's ERD diagram only:
Section 3: Tools
Linting was not included in coursework 1 but was included in coursework 2. Source code control was assessed by looking at the commit history
and messages in the students' repositories, so examples cannot be included here. Including the environment files without seeing the student's
environment would not give you a meaningful example. Since there is no meaningful way to provide student examples, only the feedback given
to students is included below.
Students achieving the highest marks in this section also provided evidence of using tools beyond those required. I will not list these, as doing
so would make them no longer exceptional; this is an opportunity for students aiming for the higher marks to research tools that support code
quality and development and to apply something that is not covered in the course.
Source code control
High pass: "Some use of source code control over a period though you appear to mostly upload files rather than synchronise files between a
local and remote repository."
High merit: "Regular use of source code control with clear and meaningful commit messages."
Distinction: "Regular use of source code control with unique commit referencing. Evidence of effective use of branches and pull requests."
Linting
High pass: The code itself appeared free of issues that would typically be flagged by a linter, so some assumption could be made as to effective
linting, but there was little or no evidence provided by the student to explain how they used linter tools to achieve this.
Merit: Provided evidence of using a linter, and the code appeared free of issues.
Low merit: May have provided evidence of using the linter but not then used this to improve the code.
Distinction: Provided evidence of using a linter at different stages; discussed actions taken and, in cases where the issues could not be resolved,
gave an appropriate explanation for this. Some used more than one linter and compared the results.
Environment management
High pass: One or more of the required files was missing and/or in general the files were missing some details that prevented them from being
fully usable.
Merit: All files provided and were appropriate to allow the environment to be recreated and their code run. May have had minor issues e.g. a
package missing from requirements.txt, a detail within pyproject.toml missing or incorrect.
Distinction: Some students showed use of different techniques for creating and managing environments, and/or used the files with very specific
detail beyond the basics, and gave very clear guidance in the readme.md that enabled the marker to successfully create an environment
and run the student's code.
Last modified: Saturday, 28 September 2024, 6:36 PM