Dataset Viewer

Columns: question_id (string, 10 values), category (string, 1 value), domain (string, 10 values), prompt (string, 10 values), lang (string, 1 value)
---
question_id: 47e1fd0c1cd043fbb7223435d51b3fe6
category: arena-hard-v0.1
domain: Network Configuration & Security
prompt:
My situation is this: I'm setting up a home server running Ubuntu to host an email server and a few other online services. As we all know, for my email to work reliably and not get blocked, I need an unchanging public IP address. Due to my circumstances I am not able to get a static IP address through my ISP or change ISPs at the moment.
The solution I have found is to buy a 4G SIM card with a static IP (from an ISP that offers that), which I can then use with a USB dongle. However, this 4G connection costs me substantially per MB to use.
But mail is the only service that needs a static IP address. For everything else, using my home network connection and updating my DNS records with DDNS would be fine. I have tested this setup previously for other services and it has worked.
So I was wondering: would it in theory be possible to connect the server to two network interfaces at the same time and route traffic depending on destination port? I.e., all outgoing connections to ports 25, 465, 587, and possibly 993 should be sent through the 4G dongle interface (enx344b50000000), and all other connections over eth0. Similarly, the server should listen for incoming connections on those same ports on enx344b50000000 and on all other ports (if allowed by ufw) on eth0.
I would then need DNS records mail.mydomain.tld -> <4G static public IP> and mydomain.tld -> <home public IP> (updated with DDNS, and NAT configured on my home router).
Computers on the internet would then be able to seamlessly connect to these two IP addresses, not "realising" that they are in fact the same machine, as long as requests to mail.mydomain.tld always use the above-mentioned ports.
Question: Is this possible? Could it be a robust solution that works the way I hope? Would someone be able to help me set it up?
I have come across a few different guides in my DuckDuckGo-ing; I understand it has to do with setting a mark in iptables and assigning it to a table using ip route. However, I haven't managed to get it to work yet, and many of these guides are for VPNs and all seem slightly different from each other. So I thought I would ask about my own specific use case.
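The iptables-mark plus ip-rule approach the asker mentions could be sketched roughly as below. This is an illustration only, not a turnkey script: the gateway address 192.168.8.1, the fwmark 0x1, and routing table 200 are placeholder assumptions; only the interface names come from the prompt. It must be run as root and made persistent separately.

```shell
#!/bin/sh
# Sketch of destination-port policy routing for the mail ports.

# 1. A dedicated routing table for mail traffic (table id 200 is arbitrary;
#    optionally name it in /etc/iproute2/rt_tables). Gateway is a placeholder.
ip route add default via 192.168.8.1 dev enx344b50000000 table 200

# 2. Mark outgoing SMTP/Submission/IMAPS connections with fwmark 0x1.
for port in 25 465 587 993; do
    iptables -t mangle -A OUTPUT -p tcp --dport "$port" -j MARK --set-mark 0x1
done

# 3. Send marked packets through the 4G table.
ip rule add fwmark 0x1 table 200

# 4. Replies to connections that arrived on the 4G interface must leave the
#    same way: route by source address (the dongle's static IP).
FOURG_IP="$(ip -4 -o addr show dev enx344b50000000 | awk '{print $4}' | cut -d/ -f1)"
ip rule add from "$FOURG_IP" table 200
```

Incoming listening needs no special routing; services simply bind to both addresses, and rule 4 ensures the return traffic exits the correct interface.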
lang: eng
---
question_id: 7ef923a6af7e4b8480fde56cae992497
category: arena-hard-v0.1
domain: Investment Growth Calculations
prompt:
A 20-year annuity of forty $7,000 semiannual payments will begin 12 years from now, with the first payment coming 12.5 years from now.
a. If the discount rate is 13 percent compounded monthly, what is the value of this annuity 6 years from now?
b. What is the current value of the annuity?
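The arithmetic this question asks for can be sanity-checked with a short script (my own sketch, not part of the prompt): convert the monthly rate to an effective semiannual rate, value the 40-payment ordinary annuity at year 12 (one period before the first payment at year 12.5), then discount back.

```python
# Valuing the deferred annuity: 40 semiannual payments of $7,000,
# first payment 12.5 years from now, 13% APR compounded monthly.
monthly = 0.13 / 12
r = (1 + monthly) ** 6 - 1  # effective semiannual rate

pmt, n = 7_000, 40
# Ordinary-annuity value at year 12 (one period before the first payment).
v12 = pmt * (1 - (1 + r) ** -n) / r

# (a) discount 12 semiannual periods back to year 6
v6 = v12 / (1 + r) ** 12
# (b) discount 24 semiannual periods back to today
v0 = v12 / (1 + r) ** 24

print(f"value at t=6 years: {v6:,.2f}")   # roughly 44,614
print(f"value today:        {v0:,.2f}")   # roughly 20,538
```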
lang: eng
---
question_id: addaa796ee094f029f8014ea1468df8a
category: arena-hard-v0.1
domain: PHP, CORS, and Server Requests
prompt:
Assume the role of an API that provides a chart wizard feature.
Given a dataset with the following dimensions:
- Key: country, Label: Country, Units: null, DataType: text, PlotType: categorical
- Key: region, Label: Region, Units: null, DataType: text, PlotType: categorical
- Key: year, Label: Year, Units: null, DataType: date, PlotType: timeSeries
- Key: income, Label: Income per capita, Units: Inflation adjusted dollars, DataType: numeric, PlotType: continuous
- Key: population, Label: Population, Units: People, DataType: numeric, PlotType: discrete
- Key: lifeExpectancy, Label: Life Expectancy, Units: Years, DataType: numeric, PlotType: continuous
A user wants to create a chart with the following description (delimited by double tildes):
~~Life Expectency by region over time~~
Do not include any explanations, only provide a RFC8259 compliant JSON response containing a valid Vega Lite chart definition object.
Please give the chart a suitable title and description. Do not include any data in this definition.
The JSON response:
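For illustration only (the prompt expects the model itself to produce the JSON), a spec of the requested shape could be assembled as below. The field keys come from the dataset dimensions listed above; the choice of a line mark and the titles are my assumptions about a sensible encoding, not a prescribed answer.

```python
import json

# Sketch of a Vega-Lite definition: life expectancy by region over time.
# No "data" property, per the prompt's instructions.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "title": "Life Expectancy by Region Over Time",
    "description": "Life expectancy per region, plotted over time.",
    "mark": "line",
    "encoding": {
        "x": {"field": "year", "type": "temporal", "title": "Year"},
        "y": {
            "field": "lifeExpectancy",
            "type": "quantitative",
            "title": "Life Expectancy (Years)",
        },
        "color": {"field": "region", "type": "nominal", "title": "Region"},
    },
}
print(json.dumps(spec, indent=2))
```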
lang: eng
---
question_id: 53f4338f1a4b403c84ccf864c079a8c1
category: arena-hard-v0.1
domain: E-Commerce Web Development
prompt:
I want to create an online social marketplace with WordPress. Please create a list of the top 3 best themes, then a list of essential plugins, and finally a list of market-entry strategies that can be used for the Iranian domestic market.
lang: eng
---
question_id: f51541e1565145bbbe143e14ea829031
category: arena-hard-v0.1
domain: React Component Development
prompt:
```typescript
import FieldDropDown from "lib/hookForm/fieldDropDown"
import { ICompanyLogo } from "services/api/company/companyTypes"
import apiLoanQuery from "services/api/loan/apiLoanQuery"

interface IProps {
  forcePlaceLoanGuid?: string
  companyGuid?: string
}

export default function LoanLogoDropdown(props: IProps) {
  const { data: companyLogos } = apiLoanQuery.useGetCompanyLogosInfoByLoanGuidQuery(props.forcePlaceLoanGuid)

  if (!companyLogos) return null

  const logoKeyValues = companyLogos.map((logo: ICompanyLogo) => ({
    key: logo.portfolioIdentifier,
    value: logo.logoDescription,
  }))

  return (
    <FieldDropDown
      label="Company Logo"
      name="portfolioIdentifier"
      data={logoKeyValues}
      placeholder="Select Logo"
      labelColSize={3}
      inputColSize={9}
    />
  )
}
```
lang: eng
---
question_id: e80139e32fb54b95a9a4effa86d8ddf5
category: arena-hard-v0.1
domain: Expert Analysis on Preppers
prompt:
# Role
You are a world renown Certification Exam Psychometrician. Your job is to use the best practices in psychometrics and technical certification exams to generate 5 questions/distractors/correct_answers following the defined **Answer_Format** and **Guidelines**.
The question must be based on the provided data. Only use the provided **Dataset** to generate the questions.
# Answer_Format
You provide only the mentioned Variables. No explanation, no salutes, nothing other than the variables response.
{
Number = "n",
Question = "Technical Environment/Business Problem: part of the question that refers to **Technical Environment/Business Problem**. Goal Statement: Part of the question that refers to the **Goal Statement**. Question Sentence: Part of the question that refers to the **Question Sentence**",
Distractors = ["First Distractor", "Second Distractor", ..., "Last Distractor"],
Correct_Answers = ["First Correct Answer", "Second Correct Answer", ..., "Last Correct Answer"],
Correct_Reasoning = ["Reasoning on the first correct Answer", "Reasoning on the second correct Answer", ... , "Reasoning on the last correct Answer"]
}
# Guidelines
- You need to follow the Answer format to provide the answer.
- Each distractor and Correct_Answer should be about the same size.
## Question Rules
- Each question needs to have 3 parts. Each part have its own rules. Please follow the rules contained in each part. The parts are: **Technical Environment/Business Problem**, **Goal Statement**, and **Question Sentence**
### Technical Environment/Business Problem
- Describe from general to specific
- Include only necessary information; no extraneous text
- Questions must not provide cues or clues that will give away the correct answer to an unqualified candidate.
### Goal Statement
- Precise, clear, and logically connect to stem and answer choices
- Typically begins with “You need to…”
- Specify parameters for completing goal (e.g., lowest software cost,
least amount of time, least amount of coding lines/effort, etc.)
### Question Sentence
- Typically “What should you do?” or “What should you do next?”
- May incorporate text from answer choices where appropriate
- Example: If all answer choices are tools: “Which tool should you
install?”
- Should not be a negative question; i.e., “Which of the following is
NOT…”
## Distractor Rules
- Distractors are wrong answers to the provided questions.
- You need to provide 3 distractors.
- Distractors need to be somewhat believable answers.
- The correct_answ
lang: eng
---
question_id: ee9ae71956724d4591d4d9bc457d598d
category: arena-hard-v0.1
domain: CSV Data Manipulation in Pandas
prompt:
```python
%%writefile app.py
import streamlit as st
import pandas as pd
import io
import joblib
import base64
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
from sklearn import tree
from sklearn.tree import _tree
import numpy as np


# Function to upload and generate predictions
def upload_and_generate_predictions():
    # File upload and prediction code
    def get_base64(bin_file):
        with open(bin_file, "rb") as f:
            data = f.read()
        return base64.b64encode(data).decode()

    def set_background(png_file):
        bin_str = get_base64(png_file)
        page_bg_img = (
            """
            <style>
            .stApp {
                background-image: url("data:image/png;base64,%s");
                background-size: cover;
            }
            </style>
            """
            % bin_str
        )
        st.markdown(page_bg_img, unsafe_allow_html=True)

    set_background("Screenshot (29).png")
    red_title = '<h1 style="color: white;">Equipment Failure Prediction</h1>'
    # Display the red title using st.markdown
    st.markdown(red_title, unsafe_allow_html=True)
    # Display the custom CSS style
    uploaded_file = st.file_uploader(
        "Upload an Excel or CSV file", type=["xlsx", "csv"]
    )
    if uploaded_file is not None:
        # Read the file into a DataFrame
        if (
            uploaded_file.type
            == "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
        ):  # Excel file
            df = pd.read_excel(uploaded_file, engine="openpyxl")
        else:  # CSV file
            df = pd.read_csv(uploaded_file)
        # st.session_state.predictions_df = df
        # st.session_state.uploaded_file=uploaded_file
        # Display the first screen
        if st.button("Generate predictions"):
            model = joblib.load("des_tree_clss.joblib")
            prediction = ""
            if "machine_status" in df.columns.to_list():
                prediction = model.predict(df.drop(columns=["machine_status"]))
            else:
                prediction = model.predict(df)
            df["Predicted_Status"] = prediction
            st.success("Predictions made successfully!")
            st.session_state.predictions_df = df
            st.session_state.uploaded_file = uploaded_file
            # Display the modified DataFrame with predictions
            # Save the DataFrame with predictions to st.session_state
            # Move to the second screen (graph display)


def display_graph(predictions_df, uploaded_file):
    def get_base64(bin_file):
        with open(bin_file, "rb") as f:
            data = f.read()
        return base64.b64encode(data).decode()

    def set_background(png_file):
        bin_str = get_base64(png_file)
        page_bg_img = (
            """
            <style>
            .stApp {
                background-image: url("data:image/png;base64,%s");
                background-size: cover;
            }
            </style>
            """
            % bin_str
        )
        st.markdown(page_bg_img, unsafe_allow_html=True)

    set_background("Screenshot (32).png")
    st.markdown('<div style="margin-top: 50px;"></div>', unsafe_allow_html=True)
    st.subheader("Early warning Signal:")
    # Create a DataFrame with the first 10 records with prediction status 1
    df_status_1 = predictions_df[predictions_df["Predicted_Status"] == 1].head(10)
    # Create a DataFrame with all records with prediction status 0
    df_status_0 = predictions_df[predictions_df["Predicted_Status"] == 0].head(10)
    # Combine the DataFrames
    df_combined = pd.concat([df_status_0, df_status_1])
    start_timestamp = datetime.datetime(2023, 1, 1)
    df_combined["Synthetic_Timestamp"] = pd.date_range(
        start=start_timestamp, periods=len(df_combined), freq="T"
    )
    # df_combined['Synthetic_Timestamp'] = pd.date_range(start='2023-01-01', periods=len(df_combined), freq='T')
    plt.figure(figsize=(10, 3))
    sns.scatterplot(
        x="Synthetic_Timestamp",
        y="Predicted_Status",
        hue="Predicted_Status",
        marker="o",
        s=200,
        data=df_combined,
        palette={1: "red", 0: "green"},
    )
    plt.xticks(rotation=45, ha="right")
    # plt.title("Machine Status Prediction - Combined")
    plt.xlabel("Timestamp")
    plt.ylabel("Value")
    st.pyplot()
    # Create a download link
    st.subheader("Download the File with Predictions:")
    st.write("Download the File with Predictions:")
    # st.markdown(title1, unsafe_allow_html=True)
    modified_file_name = (
        f"file_with_predictions_{uploaded_file.name}"
        if uploaded_file.name
        else "file_with_predictions.xlsx"
    )
    # Convert DataFrame to binary stream
    modified_file = io.BytesIO()
    if (
        uploaded_file.type
        == "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
    ):  # Excel file
        predictions_df.to_excel(modified_file, index=False, engine="xlsxwriter")
    else:  # CSV file
        predictions_df.to_csv(modified_file, index=False)
    modified_file.seek(0)
    # Create a download link
    st.download_button(
        label="Download File with Predictions",
        data=modified_file,
        file_name=modified_file_name,
        key="download_file_with_predictions",
    )

    # Rules functions
    def get_rules(tree, feature_names, class_names):
        tree_ = tree.tree_
        feature_name = [
            feature_names[i] if i != _tree.TREE_UNDEFINED else "undefined!"
            for i in tree_.feature
        ]
        paths = []
        path = []

        def recurse(node, path, paths):
            if tree_.feature[node] != _tree.TREE_UNDEFINED:
                name = feature_name[node]
                threshold = tree_.threshold[node]
                p1, p2 = list(path), list(path)
                p1 += [f"({name} <= {np.round(threshold, 3)})"]
                recurse(tree_.children_left[node], p1, paths)
                p2 += [f"({name} > {np.round(threshold, 3)})"]
                recurse(tree_.children_right[node], p2, paths)
            else:
                path += [(tree_.value[node], tree_.n_node_samples[node])]
                paths += [path]

        recurse(0, path, paths)
        # sort by samples count
        samples_count = [p[-1][1] for p in paths]
        ii = list(np.argsort(samples_count))
        paths = [paths[i] for i in reversed(ii)]
        rules = []
        for path in paths:
            rule = "if "
            for p in path[:-1]:
                if rule != "if ":
                    rule += " and "
                rule += str(p)
            rule += " then "
            if class_names is None:
                rule += "response: " + str(np.round(path[-1][0][0][0], 3))
            else:
                classes = path[-1][0][0]
                l = np.argmax(classes)
                rule += f"class: {class_names[l]} (proba: {np.round(100.0*classes[l]/np.sum(classes),2)}%)"
            rule += f" | based on {path[-1][1]:,} samples"
            rules += [rule]
        return rules

    st.subheader("Model Explainability:")
    model = joblib.load("des_tree_clss.joblib")
    rules = get_rules(model, predictions_df.columns, range(2))
    table_list = []
    for r in rules:
        colon_split = r.split(":")
        col_1 = colon_split[0]
        pipe_split = str(colon_split[1] + colon_split[2]).split("|")
        # print(colon_split)
        # print(pipe_split)
        col_2 = pipe_split[0]
        col_3 = pipe_split[1]
        table_list.append([col_1, col_2, col_3])
    table_df = pd.DataFrame(
        table_list, columns=["rule_details", "class_probabilities", "samples_count"]
    )
    rules_data_file = io.BytesIO()
    table_df.to_csv(rules_data_file, index=False)
    rules_data_file.seek(0)
    # Create a download link
    st.download_button(
        label="Model Explainability",
        data=rules_data_file,
        file_name="rules_data.csv",
        key="download_rules_data",
    )


# Run the app
if __name__ == "__main__":
    st.set_option("deprecation.showPyplotGlobalUse", False)
    st.set_page_config(page_title="Equipment Failure Prediction", page_icon="📈")
    pages = ["Upload and Predict", "Graph and Download"]
    page = st.sidebar.selectbox("Select a page", pages)
    if page == "Upload and Predict":
        upload_and_generate_predictions()
    elif page == "Graph and Download":
        if hasattr(st.session_state, "predictions_df"):
            display_graph(
                st.session_state.predictions_df, st.session_state.uploaded_file
            )
        else:
            st.warning("Please upload a file on the 'Upload and Predict' page first.")
```
This is my code. It currently draws a seaborn scatterplot. I want to modify it so the graph is drawn with Plotly and uses Plotly click events: when I click a circle (an instance) on the graph, it should give the decision rule for that instance using LIME. After the graph, when I click one circle or instance, the LIME table and rule list should print right there. You can move the download-predictions and model-explainability options into a new sidebar.
lang: eng
---
question_id: 15ad9bcf7527456098cbd1fa5734a312
category: arena-hard-v0.1
domain: Innovative Online Market Models
prompt:
Can you market-size the revenue that can be earned by UK management consultancies advising on and implementing FinTech solutions for Capital Markets clients?
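A top-down sizing of the kind this question invites could be sketched as below. Every numeric input is a placeholder assumption for illustration, not sourced data; a real answer would substitute researched figures for each factor.

```python
# Top-down market-sizing sketch with PLACEHOLDER inputs (all assumptions).
uk_consulting_market = 11e9    # assumed UK management-consultancy revenue, GBP
fintech_share = 0.15           # assumed share of engagements that are FinTech-related
capital_markets_share = 0.40   # assumed share of that work serving capital markets

addressable_revenue = uk_consulting_market * fintech_share * capital_markets_share
print(f"Illustrative addressable revenue: £{addressable_revenue / 1e9:.2f}bn")
```

The value of the sketch is the decomposition, not the output: each factor is an independent assumption that can be challenged and refined separately.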
lang: eng
---
question_id: 379a490a6eae40608abf3501807b2545
category: arena-hard-v0.1
domain: Advanced Algebra and Number Theory
prompt:
Consider the state:
$$\ket{\psi} = \frac{\ket{00} + \ket{01} + \ket{10}}{\sqrt{3}}$$
(a). Calculate the reduced density matrix of the second qubit of $\ket{\psi}$.
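The partial trace this part asks for can be checked numerically; this is my own sketch, and it confirms the expected result, rho2 = [[2/3, 1/3], [1/3, 1/3]].

```python
from fractions import Fraction

# |psi> = (|00> + |01> + |10>)/sqrt(3): record which basis amplitudes are nonzero.
# Every nonzero amplitude equals 1/sqrt(3), so any product a_ij * a_ik is either
# exactly 1/3 (both nonzero) or 0, which keeps the arithmetic in exact Fractions.
nonzero = {(0, 0), (0, 1), (1, 0)}  # (first qubit, second qubit) basis labels

def amp_product(i, j, k):
    """a_{ij} * conj(a_{ik}) for this state's real amplitudes."""
    return Fraction(1, 3) if (i, j) in nonzero and (i, k) in nonzero else Fraction(0)

# Partial trace over the first qubit: rho2[j][k] = sum_i a_{ij} * conj(a_{ik}).
rho2 = [[sum(amp_product(i, j, k) for i in (0, 1)) for k in (0, 1)] for j in (0, 1)]
print([[str(x) for x in row] for row in rho2])  # [['2/3', '1/3'], ['1/3', '1/3']]
```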
lang: eng
---
question_id: 990d35320cca477fa60c3df15f629364
category: arena-hard-v0.1
domain: Weather Forecast Accuracy
prompt:
The prediction is made in the IF stage while the update happens in the ID stage. Consider two consecutive branch instructions: the first is in the ID stage and the second is in the IF stage. In what order do the update of the first branch's result and the query for the second branch's prediction occur? How can this order be controlled? How are local-based and global-based prediction algorithms affected by the order?
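To make the ordering issue concrete, here is a toy sketch of my own (using a single shared 1-bit predictor rather than a real local or global scheme): the second branch's prediction differs depending on whether the first branch's update is forced to happen before the query.

```python
# Toy 1-bit shared predictor: shows how the update/query order between two
# back-to-back branches changes the second prediction. Illustrative only.

def second_branch_prediction(update_first):
    predictor = 0              # shared 1-bit state: 0 = predict not-taken
    b1_outcome = 1             # first branch (in ID) resolves as taken
    if update_first:           # pipeline control forces the update first
        predictor = b1_outcome
    b2_prediction = predictor  # second branch (in IF) queries the predictor
    if not update_first:       # update arrives after B2 already queried
        predictor = b1_outcome
    return b2_prediction

print(second_branch_prediction(update_first=True))   # sees B1's fresh outcome
print(second_branch_prediction(update_first=False))  # predicts from stale state
```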
lang: eng