Dataset columns: QuestionId (string, length 8), AnswerId (string, length 8), QuestionBody (string, 91 to 22.3k chars), QuestionTitle (string, 17 to 149 chars), AnswerBody (string, 48 to 20.9k chars)
76390292
76392654
Dates ranges that are found in rule1 but 'not in' rule2 E.g., item_no item_type active_from active_to ruleid 10001 SAR 2022-01-01 2023-05-31 rule1 10001 SAR 2023-07-01 2099-12-31 rule1 10001 SAR 2023-01-01 9999-12-31 rule2 10001 SAR 2020-12-01 2021-12-31 rule2 In this case output will be the date ranges which are in rule1 but not in rule2 is. 10001 SAR 2022-01-01 2022-12-31 I have used connect by level which is taking more time to generate dates and compare them.. as date end is 9999-12-31.
Oracle Date ranges that are present in parent rows but not in child rows
Adapting my answer to your previous question, from Oracle 12, you can UNPIVOT the dates and then use analytic functions and MATCH_RECOGNIZE to process the result set row-by-row to find the consecutive rows where rule1 is active and rule2 is inactive: SELECT item_no, item_type, active_from + CASE WHEN prev_rule2 > 0 THEN INTERVAL '1' SECOND ELSE INTERVAL '0' SECOND END AS active_from, active_to - CASE WHEN next_rule2 > 0 THEN INTERVAL '1' SECOND ELSE INTERVAL '0' SECOND END AS active_to FROM ( SELECT item_no, item_type, rule_id, dt, COALESCE( SUM(CASE rule_id WHEN 'rule1' THEN active END) OVER ( PARTITION BY item_no, item_type ORDER BY dt, ACTIVE DESC ), 0 ) AS rule1, COALESCE( SUM(CASE rule_id WHEN 'rule2' THEN active END) OVER ( PARTITION BY item_no, item_type ORDER BY dt, ACTIVE DESC ), 0 ) AS rule2 FROM table_name UNPIVOT ( dt FOR active IN ( active_from AS 1, active_to AS -1 ) ) ) MATCH_RECOGNIZE( PARTITION BY item_no, item_type ORDER BY dt, rule1 DESC, rule2 DESC MEASURES FIRST(dt) AS active_from, PREV(rule1) AS prev_rule1, PREV(rule2) AS prev_rule2, NEXT(dt) AS active_to, NEXT(rule1) AS next_rule1, NEXT(rule2) AS next_rule2 PATTERN ( active_rules+ ) DEFINE active_rules AS rule1 > 0 AND rule2 = 0 ) Which, for the sample data from your previous question: CREATE TABLE table_name (Item_no, item_type, active_from, active_to, rule_id) AS SELECT 10001, 'SAR', DATE '2020-01-01', DATE '2023-01-01', 'rule1' FROM DUAL UNION ALL SELECT 10001, 'SAR', DATE '2024-01-01', DATE '9999-12-31', 'rule1' FROM DUAL UNION ALL SELECT 10001, 'SAR', DATE '2020-05-01', DATE '2021-06-01', 'rule2' FROM DUAL UNION ALL SELECT 10001, 'SAR', DATE '2021-01-01', DATE '2021-02-01', 'rule2' FROM DUAL; Outputs: ITEM_NO ITEM_TYPE ACTIVE_FROM ACTIVE_TO 10001 SAR 2020-01-01 00:00:00 2020-04-30 23:59:59 10001 SAR 2021-06-01 00:00:01 2023-01-01 00:00:00 10001 SAR 2024-01-01 00:00:00 9999-12-31 00:00:00 And, for the sample data in this question: CREATE TABLE table_name (Item_no, item_type, active_from, active_to, rule_id) AS SELECT 10001, 'SAR', DATE '2022-01-01', DATE '2023-05-31', 'rule1' FROM DUAL UNION ALL SELECT 10001, 'SAR', DATE '2023-07-01', DATE '2099-12-31', 'rule1' FROM DUAL UNION ALL SELECT 10001, 'SAR', DATE '2023-01-01', DATE '9999-12-31', 'rule2' FROM DUAL UNION ALL SELECT 10001, 'SAR', DATE '2020-12-01', DATE '2021-12-31', 'rule2' FROM DUAL; Outputs: ITEM_NO ITEM_TYPE ACTIVE_FROM ACTIVE_TO 10001 SAR 2022-01-01 00:00:00 2022-12-31 23:59:59 And, for the sample data: CREATE TABLE table_name (Item_no, item_type, active_from, active_to, rule_id) AS SELECT 10001, 'SAR', DATE '2022-01-01', DATE '9999-12-31', 'rule1' FROM DUAL UNION ALL SELECT 10001, 'SAR', DATE '2022-01-01', DATE '2023-05-31', 'rule2' FROM DUAL; Outputs: ITEM_NO ITEM_TYPE ACTIVE_FROM ACTIVE_TO 10001 SAR 2023-05-31 00:00:01 9999-12-31 00:00:00 fiddle
76394383
76394393
I need to add small bracket at the start and end of a Python list (Python 3) My list is list1 = ['1','2','3'] List needed is [('1','2','3')] I know the solution may be simple, but I am not getting it. Thanks in advance Tried fstring and join
Python List formatting with small bracket
Here's one way to print a list in that formatting with a combination of an f-string and str.join: >>> list1 = ['1','2','3'] >>> print(f"[({','.join(repr(i) for i in list1)})]") [('1','2','3')] Note that repr(i) adds quotes around the strings so that they print as '1' instead of 1. Note also that this is converting the list into a particular string, not adding anything to the list. If you wanted to add the literal strings ( and ) to list1 you could do: >>> list1.insert(0, '(') >>> list1.append(')') >>> list1 ['(', '1', '2', '3', ')'] But parentheses themselves aren't objects that you can add to a list. If you want to construct an object that will render with parens, that's a tuple: >>> list1 = ['1','2','3'] >>> [tuple(list1)] [('1', '2', '3')]
76391866
76393314
I am an Emacs newbie transitioning from Neovim. I want something that will show me a list of all of my todos across all the files of the project. I've tried the following packages: hl-todo fixmee Both of them can highlight todos, and show a list of all the todos in the current buffer. But, what I need is a list of todos across all the files of the project and not just the current buffer. They do provide such functionality, but it is very clunky and impossible to bind to just a single key (they require choosing a path where to search and pressing enter). The question: how can I show a list of all my todos across all the files in my project?
How to show a list of all my todos across all the files of my project?
I'm not sure about todo modes, but using the rg (ripgrep) package, it is easy to add custom commands. For example, to define a command that searches for 'TODO' or 'FIXME' in the current project, (rg-define-search my-rg-todo :query "(TODO|FIXME)" :format regexp :dir project :files current) There are a lot of other similar solutions, like builtin rgrep, ag or another ripgrep/ag library.
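rg-define-search generates an interactive command (my-rg-todo above), so it can be bound to a single key. A minimal sketch, assuming the rg package is installed and the definition above has been evaluated; the key C-c t is only an example:

```elisp
;; example binding: run the project-wide TODO/FIXME search with one key
(global-set-key (kbd "C-c t") #'my-rg-todo)
```

After that, the bound key should run the search over the current project without prompting for a path.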
76389311
76392934
I'm trying to configure a GitPod or GitHub Codespace so that the README.md gets automatically opened in preview mode at boot time. I managed to open the file as: code README.md but that opens the editor, I'd like to open it in preview only mode.
How can I open a Markdown file in preview mode by default in VS Code?
If you google "github vscode issues markdown open preview by default", you will probably find this issue ticket pretty high up in the search results: Option to automatically open markdown in preview #54776. Quoting Matt Bernier from their comment there: To change this, you can configure the default editor for .md files (or whatever other markdown file extension you wish). Here's an example setting: "workbench.editorAssociations": { "*.md": "vscode.markdown.preview.editor" } Use the [View: Reopen Editor With...] command to switch back to the standard text editor. There's also another related issue ticket about opening the Markdown preview to the side by default: Automatically Activate Markdown Preview #2766. If you're interested in that one too, give it a thumbs up to show support, and subscribe to it to get notified of discussion and progress, but please avoid noisy comments like "+1" or "bump".
76391982
76393363
Trying to build a dissector in lua where in the ProtoField for one of the values, I map it to a table so that when passed some uint it can get said string from the list of outputs. The current issue is that I have some indexes of 1, 2, and 3, but I also need a range of numbers that correspond to one output string, and its too many to hardcode it all in. I thought about just adding some check that will add said index whenever it doesn't exist, but that would result in alot of wasted time and wanted to ensure there wasn't a better way. Tried mapping it to a function and setting a default value with a metatable.
Is there a way to represent a range of indexes in a lua table for a dissector?
Generically, a table can contain tables as indices. These tables could each be used to represent a range of indices. A cursory example (note that overlapping ranges are not stable): local range_mt = {} range_mt.__index = range_mt function range_mt:contains(value) return self.lower <= value and value <= self.upper end local function range(lower, upper) return setmetatable({ lower = lower, upper = upper }, range_mt) end local map = setmetatable({ [1] = 'hello', [2] = 'world', [3] = 'goodbye', ranges = { [range(10, 20)] = 'foo', [range(51, 99)] = 'bar' } }, { __index = function (self, value) for r, str in pairs(self.ranges) do if r:contains(value) then return str end end return "DEFAULT" end }) for _, value in ipairs { 9, 6, 1, 2, 16, 4, 15, 66, 51, 3, 94 } do print(value, '->', map[value]) end 9 -> DEFAULT 6 -> DEFAULT 1 -> hello 2 -> world 16 -> foo 4 -> DEFAULT 15 -> foo 66 -> bar 51 -> bar 3 -> goodbye 94 -> bar
76391998
76393387
In a project with NodeJS and ExpressJS, using Sequelize, I'm attempting to implement server-side pagination and searching const products = await Product.findAll({ where: { [Op.and]: [ { status: { [Op.ne]: -1 }, },{ [Op.or]:[ { name:{ [Op.substring]:searchString } },{ category:{ [Op.substring]:searchString }, } ] } ] }, limit: rows, offset: offset }); Now this allows me to search in the name and category columns, but Product's table has like 10 columns, Client's table has like 15 columns, is there a way to get something like WHERE any-column LIKE "%searchString%" where:{ [Op.or]:[ {any-column}:{ [Op.substring]:searchString } ] } Or is my only option to add each column manually?
Sequelize - Is it possible to search all columns of a table with Op.or operator?
If all columns are string type, you can use CONCAT_WS function to search in a whole combined string. const searchCols = ['name', 'description', 'category'].map(sequelize.col); await Product.findAll({ where: { status: { [Op.ne]: -1 }, [Op.where]: Sequelize.where(Sequelize.fn('CONCAT_WS', ' ', ...searchCols), Op.like, `%${searchString}%`) } })
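If you'd rather not keep the column list in sync by hand, you could derive it from the model definition. An untested sketch, assuming Sequelize v6 where the attribute map is exposed as Product.rawAttributes (newer releases also provide Product.getAttributes()):

```js
// build the searchable column list from the model instead of hardcoding it;
// you may want to filter this down to string-typed attributes only
const attributeNames = Object.keys(Product.rawAttributes); // or Product.getAttributes()
const searchCols = attributeNames.map((name) => sequelize.col(name));

const products = await Product.findAll({
  where: {
    status: { [Op.ne]: -1 },
    [Op.where]: Sequelize.where(
      Sequelize.fn('CONCAT_WS', ' ', ...searchCols),
      Op.like,
      `%${searchString}%`
    )
  },
  limit: rows,
  offset: offset
});
```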
76394368
76394404
The command railway up takes your current local project and uploads it directly to railway without having to link a Github repo to your railway project. Does RailwayCLI take into account .gitignore file like Git does? if not what is the proper way to ignore files (not upload them) when using the command. I couldn't find anything relevant mentioned in their docs.
Does RailwayCLI ignore files defined in .gitignore?
No, RailwayCLI does not take into account the .gitignore file. If you want to ignore certain files when using the railway up command, you can use the --ignore-files flag. For example, to ignore all files with the .txt extension, you would use the following command: railway up --ignore-files .txt Update: The --ignore-files flag uses the same globbing patterns as .gitignore. So you can use the * character to match any number of characters, and the ? character to match any single character. For example, the following command will ignore all files that start with the word "app" and have any extension: railway up --ignore-files app* This command will ignore all files that have the .txt or .json extensions: railway up --ignore-files .txt,.json This command will ignore all files in the ./test directory and its subdirectories: railway up --ignore-files ./test/* You can also use the --ignore-files option to ignore specific files, even if they have extensions that are not listed in the .gitignore file. For example, the following command will ignore the file app.txt, even though the .txt extension is not listed in the .gitignore file: railway up --ignore-files app.txt
76394390
76394420
string <- "this is a funny cat" I want to replace the first 15 characters of string with 'orange`. The desired output is 'orange cat' However, using substr gives me substr(string, 1, 15) <- "orange" > string [1] "oranges a funny cat" which is not the desired output.
How to replace a string defined by starting and ending index by another string in R?
Use the output of substr as the pattern argument of sub. string <- "this is a funny cat" sub(substr(string, 1, 15), "orange", string) [1] "orange cat" Or match the first 15 characters directly in sub. sub("^.{15}", "orange", string) [1] "orange cat"
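Another base-R option avoids regular expressions entirely: keep everything after position 15 and paste the replacement in front of it.

```r
string <- "this is a funny cat"
paste0("orange", substring(string, 16))
#> [1] "orange cat"
```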
76392186
76393456
I have this visual annoyance when I am using VS Code. Whenever a method or class is declared, the "reference" counting is causing the vertical lines that connects the curly braces to break. It is very annoying, I tried finding the settings in user preference but could not find anything. Could someone help me? I find the "reference" count is alright, but how can I keep the vertical curly braces pair lines from breaking. I don't have this problem in Visual Studio.
How can I stop codelenses from breaking up indent guides in VS Code?
This is a known issue and as far as I know, there's not much you can do about it for now except wait for it to get handled. If you google "github vscode issues codelens indent guide", you should easily find Indent Guides Have Breaks Where CodeLens UI is Rendered #9604. There, you'll see that one of the maintainers, @alexdima commented: Code lenses are implemented as view zones (same mechanism as embedded editors when you find all references or as the diff editor). In some of the view zones (such as code lens) it makes sense to render the indent guides inside view zones, while in others (such as the embedded editors or the diff editor) it does not, therefore marking this as both a feature request and a bug. You can give that issue ticket a thumbs up reaction to show support for it getting prioritized (there's a big backlog), and subscribe to it to get notified about discussions and updates. But please don't make noisy comments like "me too" / "+1" / "bump".
76389877
76392983
I have Spring Boot 3.0 based project, Kotlin and Micrometer Tracing (which superseded Spring Cloud Sleuth) Trying to connect Micrometer tracing to OTLP collector, which is part of Jaeger. The configuration class: @Configuration class OpenTelemetryConfiguration( @Value("\${otel.exporter.otlp.traces.endpoint:http://localhost:4317}") private val tracesEndpoint: String ) { @Bean fun spanExporter(): SpanExporter = OtlpGrpcSpanExporter.builder().setEndpoint(tracesEndpoint).build() @Bean fun jaegerPropagator(): TextMapPropagator = JaegerPropagator.getInstance() } Dependencies in gradle: implementation("io.micrometer:micrometer-core:1.11.0") implementation("io.micrometer:micrometer-tracing:1.1.1") implementation("io.micrometer:micrometer-registry-prometheus:1.10.5") implementation("io.micrometer:micrometer-tracing-bridge-otel:1.1.1") implementation("io.opentelemetry:opentelemetry-sdk:1.26.0") implementation("io.opentelemetry:opentelemetry-sdk-extension-autoconfigure-spi:1.26.0") implementation("io.opentelemetry:opentelemetry-exporter-common:1.26.0") implementation("io.opentelemetry:opentelemetry-exporter-otlp:1.26.0") The error when application starts: Failed to instantiate [io.opentelemetry.sdk.trace.export.SpanExporter]: Factory method 'spanExporter' threw exception with message: io/opentelemetry/exporter/internal/otlp/OtlpUserAgent at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:171) at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) ... 170 common frames omitted Caused by: java.lang.NoClassDefFoundError: io/opentelemetry/exporter/internal/otlp/OtlpUserAgent at io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporterBuilder.<init>(OtlpGrpcSpanExporterBuilder.java:48) at io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter.builder(OtlpGrpcSpanExporter.java:40) at com.logindex.geoservice.configuration.OtelConfiguration.spanExporter(OtelConfiguration.kt:19) ... 171 common frames omitted Caused by: java.lang.ClassNotFoundException: io.opentelemetry.exporter.internal.otlp.OtlpUserAgent at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188) at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520) What dependency is missing there? there's no such class io.opentelemetry.exporter.internal.otlp.OtlpUserAgent in the library io.opentelemetry:opentelemetry-exporter-otlp:1.26.0
Micrometer Tracing, Spring Boot 3.0, OTLP exporter class not found error
OTel is not stable yet so not every version of the OTel SDK is compatible with every version of Micrometer Tracing due to breaking changes in the OTel SDK. You should delete all of your version definitions and let the Spring Boot BOM define versions for you, this is what you need: implementation 'org.springframework.boot:spring-boot-starter-actuator' implementation 'io.micrometer:micrometer-registry-prometheus' implementation 'io.micrometer:micrometer-tracing-bridge-otel' implementation 'io.opentelemetry:opentelemetry-exporter-otlp' Btw Jaeger also supports Zipkin so you can use Brave too with the Zipkin exporter.
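Since the question's build script uses the Gradle Kotlin DSL, the unversioned equivalents would look like this (the versions are then resolved by the Spring Boot BOM through the Spring dependency-management plugin or a platform() import):

```kotlin
implementation("org.springframework.boot:spring-boot-starter-actuator")
implementation("io.micrometer:micrometer-registry-prometheus")
implementation("io.micrometer:micrometer-tracing-bridge-otel")
implementation("io.opentelemetry:opentelemetry-exporter-otlp")
```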
76392156
76393483
I have a table contain reference time and a user check-in time. Both data is different in type. Sample data dtime = 2023-06-02 08:23:21 work_time = 08:00-18:00 And my code is... SELECT substring(dtime,-8,5) AS chkin, SUBSTRING(work_time, 1, 5) AS wt1, TIMESTAMPDIFF(MINUTE, substring(dtime,-8,5), SUBSTRING(work_time, 1, 5)) AS min_diff FROM ta_db WHERE id = 13181; As a result... chkin = 08:23 wt1 = 08:00 Now, I want to know how many minutes different from chkin and wt with TIMESTAMPDIFF. So I did this... TIMESTAMPDIFF(MINUTE, substring(dtime,-8,5), SUBSTRING(work_time, 1, 5)) AS min_diff But it returns NULL. Please be advised.
How to find the difference between two times with SQL
Try this: SET @dtime = '2023-06-02 08:28:21'; SET @work_time = '08:00-18:00'; SELECT substring(@dtime,-8,5) AS chkin, SUBSTRING(@work_time, 1, 5) AS wt1, TIMESTAMPDIFF(MINUTE, @dtime, CONCAT(SUBSTRING(@dtime, 1,10), ' ', SUBSTRING(@work_time, 1, 5))) AS min_diff
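Applied to the table from the question, the same idea (building a full datetime from the date part of dtime plus the start of work_time) would look roughly like this, untested:

```sql
SELECT SUBSTRING(dtime, -8, 5)    AS chkin,
       SUBSTRING(work_time, 1, 5) AS wt1,
       TIMESTAMPDIFF(
         MINUTE,
         dtime,
         CONCAT(SUBSTRING(dtime, 1, 10), ' ', SUBSTRING(work_time, 1, 5))
       ) AS min_diff
FROM ta_db
WHERE id = 13181;
```

The original query returned NULL because TIMESTAMPDIFF needs two date or datetime values, and '08:23' and '08:00' on their own are not valid dates.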
76392180
76393491
I have a UIStackView that contains multiple arranged subviews. I want to dynamically hide certain views within the stack view when auto layout attempts to reduce the stack view's size. I'm looking for a way to achieve this behavior using auto layout and without manually manipulating the frame. I've tried setting the isHidden property of the subviews, but it doesn't seem to update the layout of the stack view correctly. The views still take up space within the stack view. What would be the recommended approach to hide views in a stack view when auto layout attempts to reduce its size? Are there any specific methods or techniques that should be used to achieve this behavior? Any guidance or code examples would be greatly appreciated. Thank you! I've try to override layoutSubviews method. override func layoutSubviews() { super.layoutSubviews() for view in stackView.arrangedSubviews { if visibilityManager.shouldBeVisible(view: view) { if let view = view as? MyButton { if view.intrinsicContentSize.height > view.bounds.height { view.isHidden = true } else { view.isHidden = false } } } } } Expect: The views in the stack view will be hidden Actual: Views are just compressed
Hide views in a stack view when auto layout attempts to reduce the stack view size?
When we set .isHidden = true on a stack view's arranged subview, that subview remains in the .arrangedSubviews collection but it is removed from the hierarchy and no longer has a valid frame. Another approach would be to set the .alpha to either 1.0 or 0.0 to "show / hide" the view. We create a custom view subclass - let's call it AutoHideSubviewsView. It will have a stack view with Top / Leading / Trailing constraints, but no Bottom constraint. When the view frame changes - gets shorter or taller - we loop through the arranged subviews and: get the frame convert it to the view coordinate space (it's relative to the stack view itself) if the view's bounds contains the frame, set its .alpha = 1.0 (show it) else, set its .alpha = 0.0 (hide it) Here's some quick example code... Custom View class AutoHideSubviewsView: UIView { let stackView: UIStackView = { let v = UIStackView() v.axis = .vertical v.spacing = 12.0 v.translatesAutoresizingMaskIntoConstraints = false return v }() override init(frame: CGRect) { super.init(frame: frame) commonInit() } required init?(coder: NSCoder) { super.init(coder: coder) commonInit() } private func commonInit() { addSubview(stackView) NSLayoutConstraint.activate([ stackView.topAnchor.constraint(equalTo: topAnchor, constant: 8.0), stackView.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 8.0), stackView.trailingAnchor.constraint(equalTo: trailingAnchor, constant: -8.0), // NO Bottom constraint ]) // not strictly necessary, but let's do this anyway self.clipsToBounds = true } override func layoutSubviews() { super.layoutSubviews() // on first layout, the stackView's subview's frames are not set // so force another layout pass if stackView.arrangedSubviews.first!.frame.height == 0.0 { stackView.setNeedsLayout() stackView.layoutIfNeeded() self.setNeedsLayout() self.layoutIfNeeded() return } for v in stackView.arrangedSubviews { // the frames of the arranged subviews are // relative to the stack view frame, so // we want to convert the frames in case // the stack view TOP is not Zero let r = stackView.convert(v.frame, to: self) // animate the alpha change so we can see it UIView.animate(withDuration: 0.3, animations: { v.alpha = self.bounds.contains(r) ? 1.0 : 0.0 }) } } } Example Controller class ViewController: UIViewController { let ahView = AutoHideSubviewsView() var hConstraint: NSLayoutConstraint! 
override func viewDidLoad() { super.viewDidLoad() view.backgroundColor = .systemBackground // grow and shrink buttons var cfg = UIButton.Configuration.gray() cfg.title = "Shrink" let btn1 = UIButton(configuration: cfg, primaryAction: UIAction() { _ in if self.hConstraint.constant > 15.0 { self.hConstraint.constant -= 10.0 } }) cfg.title = "Grow" let btn2 = UIButton(configuration: cfg, primaryAction: UIAction() { _ in self.hConstraint.constant += 10.0 }) [ahView, btn1, btn2].forEach { v in v.translatesAutoresizingMaskIntoConstraints = false view.addSubview(v) } let g = view.safeAreaLayoutGuide NSLayoutConstraint.activate([ ahView.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0), ahView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0), ahView.widthAnchor.constraint(equalToConstant: 200.0), btn1.topAnchor.constraint(equalTo: g.topAnchor, constant: 40.0), btn1.leadingAnchor.constraint(equalTo: ahView.trailingAnchor, constant: 20.0), btn1.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0), btn2.topAnchor.constraint(equalTo: btn1.bottomAnchor, constant: 20.0), btn2.leadingAnchor.constraint(equalTo: ahView.trailingAnchor, constant: 20.0), btn2.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0), ]) // start with the view height at 320 hConstraint = ahView.heightAnchor.constraint(equalToConstant: 320.0) hConstraint.isActive = true // let's add 3 labels and 3 buttons to the custom view's stackView let strings: [String] = ["First", "Second", "Third"] let colors: [UIColor] = [.cyan, .green, .yellow] for (str, c) in zip(strings, colors) { let label = UILabel() label.font = .systemFont(ofSize: 24, weight: .light) label.text = str label.textAlignment = .center label.backgroundColor = c label.heightAnchor.constraint(equalToConstant: 40.0).isActive = true var cfg = UIButton.Configuration.filled() cfg.title = str let btn = UIButton(configuration: cfg, primaryAction: UIAction() { _ in print("\(str) Button Tapped") }) ahView.stackView.addArrangedSubview(label) ahView.stackView.addArrangedSubview(btn) } // so we can see the framing ahView.backgroundColor = .red } } Looks like this (custom view's background is red): Tapping the "Shrink" button will decrease the height of the view, tapping the "Grow" button will increase it. To make it really easy to see what's happening, I animate the .alpha change so the subviews "fade" in and out. Here's an animation (too big to post here): https://imgur.com/a/glRRh2O
76390199
76392996
The following order of events should happen in my Oracle APEX app: User clicks Submit button in a modal dialog. The configured validations are executed (mostly package functions). If validation is OK, the page is submitted (a package function is called). If no validation error or exception during submit happened, Javascript is executed. This Javascript needs access to the item values entered by the user, so it must not be executed after a page reload. Modal dialog is closed. I tried many ways to accomplish this, but all failed. The obvious solution would be two dynamic actions on the button click, but I learned that the order of the actions is not guaranteed, and the Javascript is executed even after an error in the validation or submit. Now I think I have to do it with Javascript in a single dynamic action (button click on the modal dialog) like this: // Validate and submit. apex.page.submit( { validate: true, } ); // Other JS code on successful submit, which is accessing page items. ... $v("myItem") ... // Close modal dialog. apex.navigation.dialog.close(true); How can I check if apex.page.submit was successful? And will the javascript be able to access item values entered by the user? By the way, I don't want to put this Javascript on the parent page, because there will be multiple such modal dialogs, and I prefer to keep their respective logic separate, and not turn the parent page into a "God object". Thank you in advance.
How to validate and submit a modal dialog, and, when successful, execute some Javascript
This isn't an answer to your question, because I believe what you want is simply not possible. To understand why, you need to check how APEX processes work. A number of processes run in a pre-rendering phase. These are server-side processes (computations, pl/sql processes, form initialization, etc). The dom is rendered based on the components. These components (optionally) take data from the pre-rendering process. In this phase client-side actions can be executed (dynamic actions, custom javascript) Once a page is submitted, the "processing" starts. The form data entered in the previous phase it sent to the server. From this point onward everything is server-side: validations, page processing, branches, etc). Now... javascript is client side only. and only in phase 2 can client-side code be executed. Once a page is submitted, the client can no longer be accessed. So "execute javascript" after "server side validations" is simply not possible in APEX. javascript can be executed onload OR when the page is rendered via an event OR before page submit. That's it in the current version of apex (23.1).
76394381
76394421
I am having a problem when I use react hook form + zod in my application, in short the inputs never change value and I get the following error in the console: Warning: Function components cannot be given refs. Attempts to access this ref will fail. Did you mean to use React.forwardRef()? my components: // Login.tsx import { useForm } from "react-hook-form"; import { zodResolver } from "@hookform/resolvers/zod"; import { z } from "zod"; import { Button } from "../components/Form/Button"; import { Input } from "../components/Form/Input"; const loginFormValidationSchema = z.object({ username: z.string().min(3), password: z.string().min(3) }); type LoginFormFields = z.infer<typeof loginFormValidationSchema>; export function Login() { const { register, handleSubmit, formState: { errors, isSubmitting } } = useForm<LoginFormFields>({ resolver: zodResolver(loginFormValidationSchema) }); function handleSignIn(data: LoginFormFields) { console.log(data); } return ( <section> <h1>Login</h1> <form onSubmit={handleSubmit(handleSignIn)}> <Input type="text" error={errors.username?.message} autoComplete="off" {...register("username")} /> <Input type="password" label="Senha" error={errors.password?.message} {...register("password")} /> <Button type="submit" disabled={isSubmitting}> Entrar </Button> </form> </section> ); } // Input.tsx import { InputHTMLAttributes } from "react"; import styles from "./Input.module.css"; interface InputProps extends InputHTMLAttributes<HTMLInputElement> { name: string; label?: string; error?: string; } export function Input({ name, label, error, ...props }: InputProps) { return ( <div className={styles.wrapper}> {!!label && ( <label htmlFor={name} className={styles.label}> {label} </label> )} <input id={name} name={name} className={styles.input} {...props} /> {!!error && <p className={styles.error}>{error}</p>} </div> ); } I have tried some alternatives that I found in some threads, but so far I have not been successful, has anyone experienced at least or has any notion of how it is possible to get around this error?
React Hook Form Error on custom Input component
I think what's happening is the {...register("username")}, which gets picked up as ...props in the Input component's props, is using a ref under the hood (this is how the react-hook-form library identifies inputs). You should be able to fix this by converting the Input component to use a forwardRef like this: export const Input = React.forwardRef<HTMLInputElement, Omit<InputProps, "ref">>(({ name, label, error, ...props }, ref) => ( <div className={styles.wrapper}> {!!label && ( <label htmlFor={name} className={styles.label}> {label} </label> )} <input id={name} name={name} className={styles.input} ref={ref} {...props} /> {!!error && <p className={styles.error}>{error}</p>} </div> )); Input.displayName = "Input"; Basically what this does is it allows you to put a ref attribute on the Input component, which gets 'forwarded' to the input component. This allows the library to monitor changes.
76391609
76393867
I am making multiple ajax calls as shown below. Below code works fine If all the calls succeed. But, let's say urlId 3 and 4 failed for some reason. Is it possible to get all the failed urlId's in the fail function? var urlId = [1, 3, 4, 7] let requests = []; for (let i = 0; i < urlId.length; i++) { requests.push($.ajax(...)); } $.when.apply($, requests).done(function () { $.each(arguments, function (idx, args) { //process args : urlId[idx] }); }).fail(function (jqXHR) { //how to get urlId's failed });
Multiple AJAX calls ; get all the failed calls
You would not be able to reliably get all of the failed IDs in the .fail handler because it will fire as soon as any of the deferreds becomes rejected, regardless of the state of the other deferreds. If you want to keep track of which requests succeeded and which failed, I think your best option would be to attach a catch handler to each $.ajax call that will catch a failure and map it to an object that has a reference to the id and to the success/failure status. Catching errors this way will mean that all of the deferreds will succeed, so the handling of success and failed states will then need to be performed in the done handler attached to the $.when. For example: var urlId = [1, 3, 4, 7] let requests = []; urlId.forEach(id => { requests.push( $.ajax(/*...*/) .then(response => { return { id, success: true, response }; }) .catch(error => { return { id, success: false, error } }) ); }); $.when.apply($, requests) .done(function (values) { $.each(arguments, function (idx, obj) { console.log(`${obj.id}: success = ${obj.success}`); }); }); Here is a fiddle for reference.
76392242
76393891
I have a singleton class: class Singleton { private static instance: Singleton; private constructor() { // Private constructor to prevent instantiation outside the class } public static getInstance(): Singleton { if (!Singleton.instance) { Singleton.instance = new Singleton(); } return Singleton.instance; } someMethod(): void { console.log("Singleton method called"); } } I know there is a type InstanceType, but it doesn't work with private constructors. // Cannot assign a 'private' constructor type to a 'public' constructor type type SingletonType = InstanceType<typeof Singleton>; Is it possible to create a custom type that returns instance type of classes with private constructors? EDIT Well, I'll try to shed some light on the whole situation. I hope this clears up misunderstandings. As I wrote earlier I have a singleton class with a private constructor. I have to pass this singleton to constructor of another generic class as a parameter: class BaseEntity<T extends typeof Singleton = typeof Singleton> { private singletonInstance: InstanceType<T>; constructor(instance: InstanceType<T>) { this.singletonInstance = instance; } } As far as Singleton class has a private constructor, I get an error Cannot assign a 'private' constructor type to a 'public' constructor type when trying to get its instance InstanceType<T>. So my question is, is it possible to create a custom generic type that accepts classes with "private" constructors and returns an instance type similar to "InstanceType"?
Get the instance type of a singleton class in TypeScript
TypeScript assumes that class constructors have a prototype property whose type is the same as the class instance type. This isn't actually true in practice, since generally speaking class fields won't actually be present on the prototype. But TypeScript makes that assumption as an approximation intentionally, and has declined suggestions to change this (see microsoft/TypeScript#11558 and microsoft/TypeScript#20922 for examples). And so, if the InstanceType<T> utility type is unavailable to you because of a private constructor, you can get the same information by indexing into the constructor's type with "prototype": type AlsoInstanceType<T extends { prototype: any }> = T["prototype"]; type SingletonType = AlsoInstanceType<typeof Singleton> // type SingletonType = Singleton class Foo { a = 1 } type Example = AlsoInstanceType<typeof Foo> // type Example = Foo; And thus your BaseEntity can be written that way: class BaseEntity<T extends typeof Singleton = typeof Singleton> { private singletonInstance: AlsoInstanceType<T>; constructor(instance: AlsoInstanceType<T>) { this.singletonInstance = instance; } } For Singleton in particular you could write ReturnType<typeof Singleton.getInstance> using the ReturnType<T> utility type: type AlsoSingletonType = ReturnType<typeof Singleton.getInstance>; // type AlsoSingletonType = Singleton type SingletonInstance<T extends typeof Singleton> = ReturnType<T["getInstance"]>; type AlsoAlsoSingletonType = SingletonInstance<typeof Singleton>; // type AlsoAlsoSingletonType = Singleton But T["prototype"] is more flexible. Playground link to code
76394041
76394431
I am attaching some code in here, I have recently added in a link to my portfolio website and the paragraph link is doing something weird, it seems to be aligning funky and I'm not sure what the issue is. I'm wondering if it might be stemming from some other a stylings in the CSS (like nav_links) but I'm unsure how to correct this without messing up the other a stylings. I added a class to the a href and tried to add in some CSS styling for it, it fixed some issues (like the size and color)but that gap
Why is this one paragraph link showing up funky?
You're making a mistake in your CSS, doing things like this: .nav_link li, a You need to remove those commas. I assume here you're trying to style an a within an li within an element with a class of .nav_link. But that's NOT what you're doing. Instead, this selector is applying a bunch of styles to .nav_link li and separately to all a elements. The selector should actually read like so: .nav_link li a You have this issue at several points in your CSS.
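To make the difference concrete, here is a minimal before/after (the declarations are just placeholders):

```css
/* before: styles .nav_link li AND, separately, every <a> on the page */
.nav_link li, a {
  color: red;
}

/* after: styles only <a> elements inside an <li> inside .nav_link */
.nav_link li a {
  color: red;
}
```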
76391772
76393963
I'm working on a TypeScript project and I have a generic class AbstractDAL with a generic type parameter T_DEFAULT_RETURN. Inside the AbstractDAL class, I'm trying to extract a nested type that is specified within the T_DEFAULT_RETURN generic type parameter, but I'm facing some challenges. Here's the simplified code structure I have: class A { alpha() { } }; class B extends A { beta() { } }; abstract class AbstractDAL< T_DEFAULT_RETURN extends BaseEntity = BaseEntity, T_DATA = T_DEFAULT_RETURN extends BaseEntity<infer D> ? D : never > { get result() { return {} as T_DATA } } class BaseEntity< T_DATA extends A = A > { } class TestDAL extends AbstractDAL<TestEntity> { delta() { this.result.alpha // should also be beta, not just alpha } } class TestEntity extends BaseEntity<B> { } In the above code, the AbstractDAL class is defined with a generic type parameter T_DEFAULT_RETURN, and I'm trying to extract a nested type from this parameter. I have used a conditional type with infer and a helper type T_DATA to accomplish this. However, the inferred type for T_DATA is A instead of the expected type B. Is there a way to correctly extract the nested type B from the T_DEFAULT_RETURN generic type parameter within the AbstractDAL class? If so, what modifications are needed in the code to achieve this?
Extracting a nested type from the generic parameter of a generic class in TypeScript
The problem is that BaseEntity<T> does not depend on T structurally, and thus inference from BaseEntity<T> to T is, at best, unreliable. TypeScript's type system is largely structural and not nominal. That means types are compared by their structure or shape, and not by what they are named or where they are declared. If type X and type Y are both object types with the same members, then they are the same type: interface X { w: string; z: number; } let x: X = { w: "abc", z: 123 }; interface Y { w: string; z: number; } let y: Y = { w: "def", z: 456 }; x = y; // okay y = x; // okay In the above it doesn't matter whether you use the name X or the name Y to refer to the type. They are interchangeable. This extends to generics as well. If you have a generic type that doesn't use its type parameter inside the type definition, like interface F<T> { w: string; z: number } then all types you create by specifying T with a type argument will be identical to each other, and thus interchangeable: let fs: F<string> = x; // okay let fn: F<number> = y; // okay fs = fn; // okay fn = fs; // okay There is no difference between F<string> and F<number>, or between either of them and X or Y. We don't care about the name F<string>, just the shape { w: string; z: number }. And that's indicative of a problem in the code. Type inference, such as what you get when you use infer in a conditional type, can only work consistently if it is inferring from structure. Asking TypeScript "given G<T>, what type is T", the compiler cannot necessarily answer correctly. It depends strongly on the definition of G. For example, given our F<T> definition above, where F<T> is identical to X for all T, then the information you want is essentially gone. If I asked you to tell me which T makes F<T> equal to { w: string; z: number }, there's no principled away to answer. Indeed any answer is equally correct. There's no principled way to say that a particular { w: string; z: number } came from F<string> vs F<number>. Sometimes you'll find that the compiler does infer from names, but you can't rely on it. So, in general, you should never have unused type parameters. See the TypeScript FAQ entry Why doesn't type inference work on this interface: interface Foo<T> { }?. The solution is therefore to add some structure to your type that depends on the generic type parameter. A BaseEntity<T> should have something to do with T! For example, if you add a property of type T, everything starts working: class BaseEntity<D extends A = A> { declare d: D // <-- make it structurally dependent } class TestDAL extends AbstractDAL<TestEntity> { delta() { this.result.alpha //^? (property) AbstractDAL<TestEntity, B>.result: B } } class TestEntity extends BaseEntity<B> { } That declare field is just me telling the compiler that a BaseEntity<D> has a field of type D at key d without having to actually initialize it. In practice you should always initialize your class fields. The compiler will allow you to write new BaseEntity<string>().d.toUpperCase() but it will explode at runtime if d wasn't initialized. I'm more worried about typings than initialization. That's just an example. You should try to give BaseEntity<T> some structural dependence on T that makes sense for the actual use case, and only resort to a "phantom" property like declare d if you really don't have anything better. Playground link to code
76390366
76393192
I am working on multi filter checkbox in react and redux. How to add extra price filter logic when: The price range is less than 250 The price range is between 251 and 450 The price range is greater than 450 Below is the code for the filter reducer. Tried this if else condition but the problem is if multiple checkboxes are clicked this doesn't work case actionTypes.FILTER_PRODUCT: const searchParameters = data && Object.keys(Object.assign({}, ...data)); let filterData; console.log('test',action.payload); const searchQuery = action.payload.map((val) => val.toLowerCase()); const getItemParam = (item, parameter) => (item[parameter] ?? "").toString().toLowerCase(); if (action.payload.length === 0) { filterData = state.products; } else if(action.payload.includes('0-Rs250')){ filterData = state.products.filter((item) => { return item.price <= 250 }) } else if(action.payload.includes('Rs251-450')){ data = state.products.filter((item) => { return item.price > 250 && item.price <= 450 }) }else if(action.payload.includes('Rs 450')){ filterData = state.products.filter((item) => { return item.price > 450 }) } else { filterData = state.products.filter((item) => { return searchParameters.some((parameter) => { const itemParam = getItemParam(item, parameter); return searchQuery.some( (color) => itemParam === color) }); }); } console.log('test1',filterData); return { ...state, filteredData: filterData, };
How can I include price filter logic in a React filter?
Change the conditions from exclusive OR to inclusive OR by changing from if - else if - else to if - if - if. Example: Start with initially empty filtered result set, and for each filtering criteria filter and append the original state.products array. case actionTypes.FILTER_PRODUCT: ... let filteredData = []; ... if (action.payload.length) { if (action.payload.includes("0-Rs250")) { filteredData.push(...state.products.filter((item) => { return item.price <= 250; })); } if (action.payload.includes("Rs251-450")) { filteredData.push(...state.products.filter((item) => { return item.price > 250 && item.price <= 450; })); } if (action.payload.includes("Rs 450")) { filteredData.push(...state.products.filter((item) => { return item.price > 450; })); } } else { filteredData = state.products.filter(/* filter by searchParams */); } return { ...state, filteredData, }; An alternative using a more inline solution: case actionTypes.FILTER_PRODUCT: ... const filteredData = state.products.filter((item) => { if (action.payload.length) { return ( (action.payload.includes("0-Rs250") && item.price <= 250) || (action.payload.includes("Rs251-450") && item.price > 250 && item.price <= 450) || (action.payload.includes("Rs 450") && item.price > 450) ); } return /* filter by searchParams */; }); return { ...state, filteredData, }; This second method retains the original order of products from the source array.
76390529
76393668
I have some content in tinymce 6 editor. I moved the cursor and placed it inside the content and executed tinymce.activeEditor.insertContent(`<span class='dummy'> <div><span style="font-style:italic;font-weight:bold;text-decoration:underline;">This Text is BOLD</span></div> <div><span style="font-style:italic;">This text is Italic</span></div> <div><span style="text-decoration:underline;">This Text is&#160; Underlined</span></div> </span>`, {format: 'html'}); When I execute this the div elements are getting removed. I can see the content inside the BeforeSetContent event. But on the SetContent event the content is getting modified and the div are removed. Is there any way to prevent this behaviour ? I tried adding valid_children: '+div[span], +span[div]' in the editor config. I am expecting the html content to be added into the tinymce 6 editor without modifying the tags
TinyMCE insertHTML modifies some HTML tags
The default behavior is to remove certain elements, such as <div>, when they are inserted using the insertContent method. But you can override this behavior by customizing the editor's schema and adding a rule to allow the <div> element: tinymce.init({ selector: '#your-selector', // other configurations... setup: function (editor) { editor.on('BeforeSetContent', function (e) { // Allow the <div> element in the content e.content = e.content.replace(/<div>/g, '<div data-mce-bogus="1">'); }); }, valid_elements: 'div[*]' // Allow all attributes on <div> elements });
76394423
76394444
I was going through a course in OpenAI's API using an in-browser jupyter notebook page but wanted to copy some example code from there into a local IDE. I installed Python and the jupyter extention in VS Code and the OpenAI library. My code is below: import openai import os # from dotenv import load_dotenv, find_dotenv # _ = load_dotenv(find_dotenv()) # read local .env file openai.api_key = "my api key is here" def get_completion(prompt, model="gpt-3.5-turbo"): messages = [{"role": "user", "content": prompt}] response = openai.ChatCompletion.create( model=model, messages=messages, temperature=0, # this is the degree of randomness of the model's output ) return response.choices[0].message["content"] prompt = f""" Determine whether each item in the following list of \ topics is a topic in the text below, which is delimited with triple backticks. Give your answer as list with 0 or 1 for each topic.\ List of topics: {", ".join(topic_list)} Text sample: '''{story}''' """ response = get_completion(prompt) print(response) I installed Python and imported the openai library. When I run I am getting the error: APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) I'm assuming that's because I commented out lines 3 and 4 in the code because I am unsure what they do and do not know how to use the dotenv library. Is it simple to set this up just to make a basic call to the openai API? That's all I'm trying to do with this code right now.
Do I need any environment variables set to execute some code, call openai's api, and return a response?
Usually, you load your API KEY from your .env file but, as you are hardcoding it, you don't need anything else. The error you are getting might be related to the absence of the topic_list and story definitions.
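If you later want to keep the key out of the source file, the two commented-out lines do exactly that with the python-dotenv package (pip install python-dotenv), assuming a .env file next to the script containing a line like OPENAI_API_KEY=sk-...:

```python
import os
import openai
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # reads the local .env file into the environment
openai.api_key = os.environ["OPENAI_API_KEY"]
```

You would still need to define topic_list and story before building the prompt, as noted above.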
76385353
76393734
I am trying to test a SQLAlchemy 2.0 repository and I am getting the error: sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "professions_name_key" So, although I am mocking the test, it inserts data into the database. What should I do to the test not insert data into the database? I am using pytest-mock. Here is the SQLAlchemy model # File src.infra.db_models.profession_db_model.py import uuid from sqlalchemy import Column, String from sqlalchemy.orm import Mapped, mapped_column from sqlalchemy.dialects.postgresql import UUID from src.infra.db_models.db_base import Base class ProfessionsDBModel(Base): """ Defines the professions database model. """ __tablename__ = "professions" profession_id: Mapped[uuid.UUID] = mapped_column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4) name: Mapped[str] = mapped_column(String(80), nullable=False, unique=True) description: Mapped[str] = mapped_column(String(200), nullable=False) Here is the repository: # File src.infra.repositories.profession_postgresql_repository.py from typing import Dict, Optional import copy import uuid from src.domain.entities.profession import Profession from src.interactor.interfaces.repositories.profession_repository \ import ProfessionRepositoryInterface from src.domain.value_objects import ProfessionId from src.infra.db_models.db_base import Session from src.infra.db_models.profession_db_model import ProfessionsDBModel class ProfessionPostgresqlRepository(ProfessionRepositoryInterface): """ Postgresql Repository for Profession """ def __init__(self) -> None: self._data: Dict[ProfessionId, Profession] = {} def __db_to_entity(self, db_row: ProfessionsDBModel) -> Optional[Profession]: return Profession( profession_id=db_row.profession_id, name=db_row.name, description=db_row.description ) def create(self, name: str, description: str) -> Optional[Profession]: session = Session() profession_id=uuid.uuid4() profession = ProfessionsDBModel( profession_id=profession_id, name=name, description=description ) session.add(profession) session.commit() session.refresh(profession) if profession is not None: return self.__db_to_entity(profession) return None Here is the test: import uuid import pytest from src.infra.db_models.db_base import Session from src.domain.entities.profession import Profession from src.infra.db_models.profession_db_model import ProfessionsDBModel from .profession_postgresql_repository import ProfessionPostgresqlRepository from unittest.mock import patch def test_profession_postgresql_repository(mocker, fixture_profession_developer): mocker.patch( 'uuid.uuid4', return_value=fixture_profession_developer["profession_id"] ) professions_db_model_mock = mocker.patch( 'src.infra.db_models.profession_db_model.ProfessionsDBModel') session_add_mock = mocker.patch.object( Session, "add" ) session_commit_mock = mocker.patch.object( Session, "commit" ) session_refresh_mock = mocker.patch.object( Session, "refresh" ) repository = ProfessionPostgresqlRepository() repository.create( fixture_profession_developer["name"], fixture_profession_developer["description"] ) assert session_add_mock.add.call_once_with(professions_db_model_mock) assert session_commit_mock.commit.call_once_with() assert session_refresh_mock.refresh.call_once_with(professions_db_model_mock)
SQLAlchemy 2.0 mock is inserting data
This solution don't need to pass the session as parameter. The solution was to mock the Session and not its methods separately. As an advantage, the test is more concise now! import uuid import pytest from src.domain.entities.profession import Profession from src.infra.db_models.profession_db_model import ProfessionsDBModel from .profession_postgresql_repository import ProfessionPostgresqlRepository def test_profession_postgresql_repository( mocker, fixture_profession_developer ): mocker.patch( 'uuid.uuid4', return_value=fixture_profession_developer["profession_id"] ) professions_db_model_mock = mocker.patch( 'src.infra.repositories.profession_postgresql_repository.\ ProfessionsDBModel') session_mock = mocker.patch( 'src.infra.repositories.profession_postgresql_repository.Session') professions_db_model = ProfessionsDBModel( profession_id = fixture_profession_developer["profession_id"], name = fixture_profession_developer["name"], description = fixture_profession_developer["description"] ) professions_db_model_mock.return_value = professions_db_model repository = ProfessionPostgresqlRepository() result = repository.create( fixture_profession_developer["name"], fixture_profession_developer["description"] ) profession = Profession( professions_db_model_mock.return_value.profession_id, professions_db_model_mock.return_value.name, professions_db_model_mock.return_value.description ) session_mock.add.assert_called_once_with(professions_db_model_mock()) session_mock.commit.assert_called_once_with() session_mock.refresh.assert_called_once_with(professions_db_model_mock()) assert result == profession
76383075
76393989
As stated in my question, I would like to access the Fisher weights used in PIRLS model fitting for GLMMs from my glmer() fit in the R package lme4. For such a simple task, I was surprised that I couldn't find any information in the documentation or on the internet at all. By looking at the structure of the glmer fit, I found two possible quantities that might correspond to what I want (but I have no way to know): glmmfit@resp$sqrtWrkWt() and glmmfit@pp$Xwts. They seem to be the same thing (up to very small numerical error). Are these the Fisher weights, or the square root of them? Or is this something else entirely? P.S.: Could someone also confirm that glmmfit@resp$wrkResp() gives the working responses z=G(y-mu)+eta (sometimes called pseudodata), where G is a matrix containing the derivaties of the link function? Unexpectedly, it turns out that when I do GLMM_model@resp$eta+GLMM_model@resp$wrkResids()-GLMM_model@resp$wrkResp(), having added an offset of 4 to the model, I get a vector of fours, not zeros as I would expect..
How to access the Fisher Weight matrix W from glmer() fit?
This is a harder question to answer than it should be, but let's try. There is a draft paper describing the implementation of glmer (an unpublished sequel to the Bates et al. JSS paper available via vignette("lmer", package = "lme4") in this directory (PDF here), but — although it is useful as background reading — it doesn't make a direct connection with the code. The weights are updated here via double glmResp::updateWts() { d_sqrtrwt = (d_weights.array() / variance()).sqrt(); d_sqrtXwt = muEta() * d_sqrtrwt.array(); return updateWrss(); } i.e., d_sqrtrtwt is the square root of the working weights [on the linear predictor or link scale] (to be honest I'm not sure what the r signifies); d_sqrtXwt is those weights transformed back on to the response/data scale (by multiplying by dmu/deta, the derivative of the inverse-link function). From here, sqrtWrkWt is the same as the d_sqrtXwt value computed in updateWts. Here we can see that the weights(., type = "working") returns object@pp$Xwts^2, and we can even see the comment that the working weights available through pp$Xwts should be equivalent to: object@resp$weights*(object@resp$muEta()^2)/object@resp$variance() However, the unit tests in tests/glmmWeights.R suggest that this equivalence is approximate. This may be fine, however, if the discrepancy is due to another instance of the general problem of reference class fields not being updated at the optimum, then this could cause real problems. see for example: https://github.com/lme4/lme4/issues/166 Here we see that wrkResp is defined as (d_eta - d_offset).array() + wrkResids(); and here that wrkResids() is (d_y - d_mu).array() / muEta(); Hopefully you should be able to access all the pieces you need without poking around in the guts this way ... e.g. weights(., "working") should give you the weights; family(.)$mu.eta should give you the derivative of the inverse-link function; residuals(., "working") should give you the working residuals. The clue to why your "PS" is not working is that, as you can see from the code listed above, the $eta component of the @resp slot does not include the offset ... another reason it's best to try to work with accessor methods whenever possible instead of digging around ...
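As a quick sanity check of the (approximate) equivalence quoted above, you could compare the accessor output against the formula from the code comment on a fitted model; an untested sketch, where fit is assumed to be a glmer() result:

```r
w_accessor <- weights(fit, type = "working")   # documented accessor, i.e. fit@pp$Xwts^2
w_formula  <- fit@resp$weights * fit@resp$muEta()^2 / fit@resp$variance()
all.equal(unname(w_accessor), unname(w_formula))  # expected TRUE up to small numerical error
```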
76394283
76394445
I'm working on Ruby on Rails and I'm trying to put a logo.png image in a bootstrap navbar. so i used this code in app/views/home/_header.html.erb <nav class="navbar navbar-expand-lg bg-primary " data-bs-theme="dark"> <div class="container-fluid"> <!-- <a class="navbar-brand" href="#">Estrutecnia</a> --> <a class="navbar-brand" href="#"> <img src="logo.png" alt="Estrutecnia" width="30" height="24"> </a> <button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="collapse navbar-collapse" id="navbarSupportedContent"> <ul class="navbar-nav me-auto mb-2 mb-lg-0"> <li class="nav-item"> <a class="nav-link active" aria-current="page" href="#">Home</a> </li> <li class="nav-item"> <a class="nav-link" href="#">Link</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" role="button" data-bs-toggle="dropdown" aria-expanded="false"> Dropdown </a> <ul class="dropdown-menu"> <li><a class="dropdown-item" href="#">Action</a></li> <li><a class="dropdown-item" href="#">Another action</a></li> <li><hr class="dropdown-divider"></li> <li><a class="dropdown-item" href="#">Something else here</a></li> </ul> </li> <li class="nav-item"> <a class="nav-link disabled">Disabled</a> </li> </ul> <form class="d-flex" role="search"> <input class="form-control me-2" type="search" placeholder="Search" aria-label="Search"> <button class="btn btn-outline-success" type="submit">Search</button> </form> </div> </div> </nav> The problem is that the logo doesn't appear, look at the navbar And the next errors display Started GET "/logo.png" for 127.0.0.1 at 2023-06-02 20:05:47 -0500 ActionController::RoutingError (No route matches [GET] "/logo.png"): I tried to put the logo.png file in the root directory that I guess is the folder where the Gemfile and the README.md files are. I expect that the logo.png appears at the navbar, but a brocken file icon appears at the left side of the navbar, like described in the image above.
Why does my logo not appear in the Bootstrap navbar when using Ruby on Rails?
Images are normally placed in the app/assets/images directory. When you render a view using the ActionView image_tag helper, Rails automatically populates the image source for you: image_tag('logo.png') # => <img src='/assets/logo.png'> This would be the idiomatic way to handle images in Rails, but of course there's nothing that prevents you from including <img src='/assets/logo.png'> in your markup.
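For illustration, a hedged sketch of how the navbar brand might look with the helper (this assumes logo.png has been moved to app/assets/images and that root_path exists in your routes):

<%# app/views/home/_header.html.erb %>
<%= link_to root_path, class: "navbar-brand" do %>
  <%= image_tag "logo.png", alt: "Estrutecnia", width: 30, height: 24 %>
<% end %>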
76394394
76394468
I'm relatively new to React and NextJS. I've spent a good 8 hours and a fair amount of research trying to figure this out with no luck! I'm trying to set up a component that picks a word at random from an array, fades this in, then after a delay fades it out and fades in a new word. After hacking together several of my own failed solutions, I found a tutorial and tried their code (with some changes to shift it from TS to JS) but it still doesn't work. I can get the text to change on a timer easily enough, but not the fading. Any ideas? Using NextJS 13, built on StackBlitz's basic NextJS setup index.js import { useEffect, useState } from 'react'; import styles from './index.module.css'; const FADE_INTERVAL_MS = 2000; const WORD_CHANGE_INTERVAL_MS = FADE_INTERVAL_MS * 2; const WORDS_TO_ANIMATE = [ 'Hello', 'Ciao', 'Jambo', 'Bonjour', 'Salut', 'Hola', 'Nǐ hǎo', 'Hallo', 'Hej', '👋🏻', ]; export default function AnimatedText() { const [fadeProp, setFadeProp] = useState('fadeout'); const [wordOrder, setWordOrder] = useState(0); useEffect(() => { const fadeTimeout = setInterval(() => { fadeProp === 'fadein' ? setFadeProp('fadeout') : setFadeProp('fadein'); }, FADE_INTERVAL_MS); return () => clearInterval(fadeTimeout); }, [fadeProp]); useEffect(() => { const wordTimeout = setInterval(() => { setWordOrder( (prevWordOrder) => (prevWordOrder + 1) % WORDS_TO_ANIMATE.length ); }, WORD_CHANGE_INTERVAL_MS); return () => clearInterval(wordTimeout); }, []); return ( <h2 className={styles.white}> <span className={`styles.${fadeProp}`}> {WORDS_TO_ANIMATE[wordOrder]} </span> , I'm bleh. </h2> ); } index.module.css .fadein { animation: fadein 0.3s; -moz-animation: fadein 0.3s; -webkit-animation: fadein 0.3s; -o-animation: fadein 0.3s; } .fadein { animation: fadeout 0.3s; -moz-animation: fadeout 0.3s; -webkit-animation: fadeout 0.3s; -o-animation: fadeout 0.3s; } .white { color: silver; } @keyframes fadein { 0% { opacity: 0; } 100% { opacity: 1; } } @keyframes fadeout { 0% { opacity: 1; color: blue; } 100% { opacity: 0; color: red; } }
React/NextJS Timer-based fade in/fade out of text not working
You are on the right path, but you need to fix a few things. Instead of doing: <span className={`styles.${fadeProp}`}> {WORDS_TO_ANIMATE[wordOrder]} </span> Try it with <span className={fadeProp}> {WORDS_TO_ANIMATE[wordOrder]} </span> The reason is that you were not setting your CSS class name selector correctly: the template string produces the literal text "styles.fadein" rather than looking the class up. I think after that you can tweak it to your convenience. ------ EDIT ----- Also, you will need to create a .css file, not .module.css. CSS Module class names are rewritten (hashed) at build time, so a plain string class such as "fadein" will not match them. Create a globals.css file inside the styles dir. Import it in your _app.js entrypoint inside the pages dir. If you do not have it, it looks something like this: _app.js import '../styles/globals.css'; // This default export is required in a new `pages/_app.js` file. export default function MyApp({ Component, pageProps }) { return <Component {...pageProps} />; } Using globals.css is not the best option but you can improve that later on. Please find more info about styling Next.js apps here.
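If you would rather keep the .module.css file, a hedged alternative (not part of the original answer) is to look the class up on the imported styles object instead of building a string, since the hashed class names live there as properties:

// index.js - only the className lookup changes
<span className={styles[fadeProp]}>
  {WORDS_TO_ANIMATE[wordOrder]}
</span>

Either way, note that the posted index.module.css declares .fadein twice; the second block was presumably meant to be .fadeout, and that fade-out class has to exist for either approach to animate both directions.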
76394396
76394477
I am trying to persist some data in my nestjs server so I can then use that data in my client app through http requests. I have created a products.service.ts file with the function getAllData() that fetch some data and creates a new array that is assigned to the variable products (which is the data I'm trying to persist). This function is called when the app is initialized (I know this works because when I run the app the console.log(this.products) inside the function shows data. This is my products.service.ts code: import { Injectable } from '@nestjs/common'; @Injectable() export class ProductsService { private products: any[] = []; getProducts(): any[] { //empty console.log(this.products); return this.products; } async getAllProducts(): Promise<any[]> { const categories = await this.getProductsCategories(); const productsPromises = categories.categories.map(async (category) => { const products = await this.getProductsByCategory(category.strCategory); const modifiedProducts = products.meals.map((product) => { .... }); return modifiedProducts; }); const products = await Promise.all(productsPromises); const flattenedProducts = products.flat(); this.products = flattenedProducts; //shows data console.log(this.products) return flattenedProducts; } async getProductsCategories(): Promise<any>{ try{ const apiURL = 'https://www.themealdb.com/api/json/v1/1/categories.php'; const apiResponse = await fetch(apiURL); const categories = await apiResponse.json(); return categories; } catch(e){ throw new Error('Error while fetching products'); } } async getProductsByCategory(category: string): Promise<any> { try { const apiURL = `https://www.themealdb.com/api/json/v1/1/filter.php?c=${category}`; const apiResponse = await fetch(apiURL); const products = await apiResponse.json(); return products; } catch (e) { throw new Error('Error while fetching products by category'); } } } The function getProducts() is called in my products.controller.ts file when an http request is done in the route '/products' but the products array is empty: import { Controller, Get, Param } from '@nestjs/common'; import { ProductsService } from './products.service'; @Controller('products') export class ProductsController { constructor(private readonly ProductsService: ProductsService) {} @Get('/') async getAllProducts(): Promise<any[]> { const products = await this.ProductsService.getProducts(); return products; } } Any idea why the products variable is empty when I make the request ? It should have the data created with getAllProducts() as this function is called onModuleInit UPDATE 1: I'll add the products.module.ts where I call getAllProducts() onModuleInit: import { Module, OnModuleInit } from '@nestjs/common'; import { ProductsController } from './products.controller'; import { ProductsService } from './products.service'; @Module({ controllers: [ProductsController], providers: [ProductsService], }) export class ProductsModule implements OnModuleInit { constructor(private readonly productsService: ProductsService) {} async onModuleInit() { await this.productsService.getAllProducts(); } } Then I import and use this module at app.module.ts file: import { ProductsModule } from './products/products.module'; ... @Module({ imports: [ProductsModule], controllers: [AppController, ProductsController], providers: [AppService, ProductsService], }) export class AppModule {}
store data in memory with nestjs
Don't add the ProductsController and ProductsService to the AppModule. You essentially have two "versions" of the ProductsController and ProductsService being instantiated, and Nest is calling the one that didn't run the onModuleInit from the ProductsModule. Remove them from the AppModule and it should all work.
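A minimal sketch of what that could look like, based on the files shown in the question (illustrative rather than a drop-in replacement):

// app.module.ts
import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { ProductsModule } from './products/products.module';

@Module({
  imports: [ProductsModule],    // ProductsController/ProductsService are provided by this module
  controllers: [AppController], // no ProductsController here
  providers: [AppService],      // no ProductsService here
})
export class AppModule {}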
76390606
76393898
I want to add a chatter for Tags, but Odoo does not support tracking of many2many fields. What should I do? Any suggestions? I tried to put tracking=True on the tag_ids field but it does not work. I am stuck here. Please give your suggestions.
How can I add a chatter for many2many fields (tags) in Odoo 16?
You can override the _mail_track function to track x2many fields. Check the code just after the # Many2many tracking comment in the project module. Example: if len(changes) > len(tracking_value_ids): for changed_field in changes: if tracked_fields[changed_field]['type'] in ['one2many', 'many2many']: field = self.env['ir.model.fields']._get(self._name, changed_field) vals = { 'field': field.id, 'field_desc': field.field_description, 'field_type': field.ttype, 'tracking_sequence': field.tracking, 'old_value_char': ', '.join(initial_values[changed_field].mapped('name')), 'new_value_char': ', '.join(self[changed_field].mapped('name')), } tracking_value_ids.append(Command.create(vals))
76392212
76394011
As I was trying to extract the return type of a request that is intersected, I came across the this mismatch of the return type and the inferred type. Here is the shortened url https://tsplay.dev/mAxZZN export {} type Foo = (() => Promise<string>) & (() => Promise<any>) ; type FooResult = Foo extends () => Promise<infer T> ? T : null // ^? const a:Foo = async () => { return ""; } const b = await a(); // ^? type Foo2 = (() => Promise<any>) & (() => Promise<string>); type FooResult2 = Foo2 extends () => Promise<infer T> ? T : null // ^? const c:Foo2 = async () => { return ""; } const d = await c(); // ^? In these 2 above examples the results are mismatched to FooResult:any b:string and FooResult2:string d:any To make my example more clear instead of having a Foo type I just have a intersected type like HTTPRequest & {json: () => Promise<Type>} to have a correct return type of the request json object. Is there anyway I can make these 2 be matched correctly to same type? If so how? Thanks for the help in advance! <3
Typescript return type of intersected functions is mismatched with the inferred type
An intersection of function types is equivalent to an overloaded function type with multiple call signatures. And there are known limitations when dealing with such types at the type level. Unless you're trying to use overloads and have a strong need for them, you should consider refactoring to use a single call signature instead. When you call an overloaded function, the call is resolved with "the most appropriate" call signature; often the first one in the ordered list that applies: // call signatures function foo(x: string): number; function foo(x: number): string; // implementation function foo(x: string | number) { return typeof x === "string" ? x.length : x.toFixed(1) } const n = foo("abc"); // resolves to first call signature // const n: number const s = foo(123); // resolves to second call signature // const s: string So the return type will depend on the input type. On the other hand, when you try to infer from an overloaded function type, the compiler pretty much only infers from the last call signature: type FooRet = ReturnType<typeof foo> // type FooRet = string // ^^^^^^^^^^^^^^^^^^^^ not (string & number) or [string, number] This is mentioned in the Handbook documentation for using infer in conditional types and is considered a design limitation of TypeScript, as mentioned in microsoft/TypeScript#43301. There are some possible workarounds to try to tease apart multiple call signature information using conditional types, but they are fragile, and before you even think of using them, you should re-examine your use case. If you've got two call signatures with the same parameter types, then they really will not behave well at all: function bar(): { a: string }; function bar(): { b: number }; // why? function bar() { return { a: "", b: 1 } } When you call the function you'll get the first return type: const a = bar(); // const a: { a: string; } But when you infer you'll get the last return type: type BarRet = ReturnType<typeof bar>; // type BarRet = { b: number; } And there doesn't seem to be a reason to have multiple call signatures in such cases. If you want to get an intersection of return types, you should just have one call signature that does that: function baz(): { a: string } & { b: number } { return { a: "", b: 1 } } const ab = baz(); // const ab: { a: string; } & { b: number; } type BazRet = ReturnType<typeof baz>; // type BazRet: { a: string; } & { b: number; } So in your case, (() => Promise<string>) & (() => Promise<any>) is equivalent to an overloaded function whose first no-arg call signature returns Promise<string> and whose second no-arg call signature returns Promise<any>. So you'll get the first when you call and the last when you infer. Instead you should just have a single call signature like () => Promise<string> or () => Promise<any> or whatever your desired type is (the any type is problematic itself, but I won't digress further here to talk about it). Playground link to code
76394330
76394486
I have a question regarding installation of MobSF on my windows 11. its been countless times i tried without any luck. i appreciate the expert here to point out what is my problems.I have downloaded latest MobSF from git C:\Users\User\Documents\Mobile-Security-Framework-MobSF>setup [INSTALL] Checking for Python version 3.8+ [INSTALL] Found Python 3.9.5 [INSTALL] Found pip Requirement already satisfied: pip in c:\users\user\appdata\local\programs\python\python39\lib\site-packages (21.1.1) Collecting pip Downloading pip-23.1.2-py3-none-any.whl (2.1 MB) |████████████████████████████████| 2.1 MB 261 kB/s Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 21.1.1 Uninstalling pip-21.1.1: Successfully uninstalled pip-21.1.1 Successfully installed pip-23.1.2 [INSTALL] Found OpenSSL executable [INSTALL] Found Visual Studio Build Tools [INSTALL] Creating venv Requirement already satisfied: pip in c:\users\user\documents\mobile-security-framework-mobsf\venv\lib\site-packages (21.1.1) Collecting pip Downloading pip-23.1.2-py3-none-any.whl (2.1 MB) |████████████████████████████████| 2.1 MB 384 kB/s Installing collected packages: pip Attempting uninstall: pip Found existing installation: pip 21.1.1 Uninstalling pip-21.1.1: Successfully uninstalled pip-21.1.1 Successfully installed pip-23.1.2 [INSTALL] Installing Requirements Collecting wheel Downloading wheel-0.40.0-py3-none-any.whl (64 kB) ---------------------------------------- 64.5/64.5 kB 53.4 kB/s eta 0:00:00 Installing collected packages: wheel Successfully installed wheel-0.40.0 Ignoring gunicorn: markers 'platform_system != "Windows"' don't match your environment Collecting Django>=3.1.5 (from -r requirements.txt (line 1)) Downloading Django-4.2.1-py3-none-any.whl (8.0 MB) ---------------------------------------- 8.0/8.0 MB 2.2 MB/s eta 0:00:00 Collecting lxml>=4.6.2 (from -r requirements.txt (line 2)) Downloading lxml-4.9.2-cp39-cp39-win_amd64.whl (3.9 MB) ---------------------------------------- 3.9/3.9 MB 524.7 kB/s eta 0:00:00 Collecting rsa>=4.7 (from -r requirements.txt (line 3)) Downloading rsa-4.9-py3-none-any.whl (34 kB) Collecting biplist>=1.0.3 (from -r requirements.txt (line 4)) Downloading biplist-1.0.3.tar.gz (21 kB) Preparing metadata (setup.py) ... done Collecting requests>=2.25.1 (from -r requirements.txt (line 5)) Downloading requests-2.31.0-py3-none-any.whl (62 kB) ---------------------------------------- 62.6/62.6 kB 418.7 kB/s eta 0:00:00 Collecting bs4>=0.0.1 (from -r requirements.txt (line 6)) Downloading bs4-0.0.1.tar.gz (1.1 kB) Preparing metadata (setup.py) ... 
done Collecting colorlog>=4.7.2 (from -r requirements.txt (line 7)) Downloading colorlog-6.7.0-py2.py3-none-any.whl (11 kB) Collecting macholib>=1.14 (from -r requirements.txt (line 8)) Downloading macholib-1.16.2-py2.py3-none-any.whl (38 kB) Collecting whitenoise>=5.2.0 (from -r requirements.txt (line 9)) Downloading whitenoise-6.4.0-py3-none-any.whl (19 kB) Collecting waitress>=1.4.4 (from -r requirements.txt (line 10)) Downloading waitress-2.1.2-py3-none-any.whl (57 kB) ---------------------------------------- 57.7/57.7 kB 1.0 MB/s eta 0:00:00 Collecting psutil>=5.8.0 (from -r requirements.txt (line 12)) Downloading psutil-5.9.5-cp36-abi3-win_amd64.whl (255 kB) ---------------------------------------- 255.1/255.1 kB 1.4 MB/s eta 0:00:00 Collecting shelljob>=0.6.2 (from -r requirements.txt (line 13)) Downloading shelljob-0.6.3-py3-none-any.whl (9.9 kB) Collecting asn1crypto>=1.4.0 (from -r requirements.txt (line 14)) Downloading asn1crypto-1.5.1-py2.py3-none-any.whl (105 kB) ---------------------------------------- 105.0/105.0 kB 1.5 MB/s eta 0:00:00 Collecting oscrypto>=1.2.1 (from -r requirements.txt (line 15)) Downloading oscrypto-1.3.0-py2.py3-none-any.whl (194 kB) ---------------------------------------- 194.6/194.6 kB 2.0 MB/s eta 0:00:00 Collecting distro>=1.5.0 (from -r requirements.txt (line 16)) Downloading distro-1.8.0-py3-none-any.whl (20 kB) Collecting IP2Location==8.9.0 (from -r requirements.txt (line 17)) Downloading IP2Location-8.9.0-py3-none-any.whl (16 kB) Collecting lief>=0.12.3 (from -r requirements.txt (line 18)) Downloading lief-0.13.1-cp39-cp39-win_amd64.whl (3.1 MB) ---------------------------------------- 3.1/3.1 MB 2.7 MB/s eta 0:00:00 Collecting http-tools>=2.1.1 (from -r requirements.txt (line 19)) Downloading http-tools-2.1.1.tar.gz (550 kB) ---------------------------------------- 550.3/550.3 kB 3.8 MB/s eta 0:00:00 Preparing metadata (setup.py) ... done Collecting libsast>=1.5.1 (from -r requirements.txt (line 20)) Downloading libsast-1.5.2.tar.gz (36 kB) Preparing metadata (setup.py) ... done Collecting pdfkit>=0.6.1 (from -r requirements.txt (line 21)) Downloading pdfkit-1.0.0-py3-none-any.whl (12 kB) Collecting google-play-scraper>=0.1.2 (from -r requirements.txt (line 22)) Downloading google_play_scraper-1.2.4-py3-none-any.whl (28 kB) Collecting androguard==3.4.0a1 (from -r requirements.txt (line 23)) Downloading androguard-3.4.0a1-py3-none-any.whl (918 kB) ---------------------------------------- 918.1/918.1 kB 668.1 kB/s eta 0:00:00 Collecting apkid==2.1.4 (from -r requirements.txt (line 24)) Downloading apkid-2.1.4-py2.py3-none-any.whl (116 kB) ---------------------------------------- 116.6/116.6 kB 3.4 MB/s eta 0:00:00 Collecting quark-engine==22.10.1 (from -r requirements.txt (line 25)) Downloading quark_engine-22.10.1-py3-none-any.whl (97 kB) ---------------------------------------- 97.6/97.6 kB 1.4 MB/s eta 0:00:00 Collecting frida==15.2.2 (from -r requirements.txt (line 26)) Downloading frida-15.2.2.tar.gz (11 kB) Preparing metadata (setup.py) ... 
done Collecting tldextract==3.4.0 (from -r requirements.txt (line 27)) Downloading tldextract-3.4.0-py3-none-any.whl (93 kB) ---------------------------------------- 93.9/93.9 kB 1.8 MB/s eta 0:00:00 Collecting openstep-parser==1.5.4 (from -r requirements.txt (line 28)) Downloading openstep_parser-1.5.4-py3-none-any.whl (4.5 kB) Collecting svgutils==0.3.4 (from -r requirements.txt (line 29)) Downloading svgutils-0.3.4-py3-none-any.whl (10 kB) Collecting ruamel.yaml==0.16.13 (from -r requirements.txt (line 31)) Downloading ruamel.yaml-0.16.13-py2.py3-none-any.whl (111 kB) ---------------------------------------- 111.9/111.9 kB 1.6 MB/s eta 0:00:00 Collecting click==8.0.1 (from -r requirements.txt (line 32)) Downloading click-8.0.1-py3-none-any.whl (97 kB) ---------------------------------------- 97.4/97.4 kB 1.4 MB/s eta 0:00:00 Collecting decorator==4.4.2 (from -r requirements.txt (line 33)) Downloading decorator-4.4.2-py2.py3-none-any.whl (9.2 kB) Collecting asgiref<4,>=3.6.0 (from Django>=3.1.5->-r requirements.txt (line 1)) Downloading asgiref-3.7.2-py3-none-any.whl (24 kB) Collecting sqlparse>=0.3.1 (from Django>=3.1.5->-r requirements.txt (line 1)) Downloading sqlparse-0.4.4-py3-none-any.whl (41 kB) ---------------------------------------- 41.2/41.2 kB 998.2 kB/s eta 0:00:00 Collecting tzdata; sys_platform == "win32" (from Django>=3.1.5->-r requirements.txt (line 1)) Downloading tzdata-2023.3-py2.py3-none-any.whl (341 kB) ---------------------------------------- 341.8/341.8 kB 1.5 MB/s eta 0:00:00 Collecting pyasn1>=0.1.3 (from rsa>=4.7->-r requirements.txt (line 3)) Downloading pyasn1-0.5.0-py2.py3-none-any.whl (83 kB) ---------------------------------------- 83.9/83.9 kB 1.6 MB/s eta 0:00:00 Collecting charset-normalizer<4,>=2 (from requests>=2.25.1->-r requirements.txt (line 5)) Downloading charset_normalizer-3.1.0-cp39-cp39-win_amd64.whl (97 kB) ---------------------------------------- 97.1/97.1 kB 2.8 MB/s eta 0:00:00 Collecting idna<4,>=2.5 (from requests>=2.25.1->-r requirements.txt (line 5)) Downloading idna-3.4-py3-none-any.whl (61 kB) ---------------------------------------- 61.5/61.5 kB 1.7 MB/s eta 0:00:00 Collecting urllib3<3,>=1.21.1 (from requests>=2.25.1->-r requirements.txt (line 5)) Downloading urllib3-2.0.2-py3-none-any.whl (123 kB) ---------------------------------------- 123.2/123.2 kB 904.1 kB/s eta 0:00:00 Collecting certifi>=2017.4.17 (from requests>=2.25.1->-r requirements.txt (line 5)) Downloading certifi-2023.5.7-py3-none-any.whl (156 kB) ---------------------------------------- 157.0/157.0 kB 1.6 MB/s eta 0:00:00 Collecting beautifulsoup4 (from bs4>=0.0.1->-r requirements.txt (line 6)) Downloading beautifulsoup4-4.12.2-py3-none-any.whl (142 kB) ---------------------------------------- 143.0/143.0 kB 1.4 MB/s eta 0:00:00 Collecting colorama; sys_platform == "win32" (from colorlog>=4.7.2->-r requirements.txt (line 7)) Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB) Collecting altgraph>=0.17 (from macholib>=1.14->-r requirements.txt (line 8)) Downloading altgraph-0.17.3-py2.py3-none-any.whl (21 kB) Collecting mitmproxy==6.0.2 (from http-tools>=2.1.1->-r requirements.txt (line 19)) Downloading mitmproxy-6.0.2-py3-none-any.whl (1.1 MB) ---------------------------------------- 1.1/1.1 MB 2.6 MB/s eta 0:00:00 Collecting markupsafe==2.0.1 (from http-tools>=2.1.1->-r requirements.txt (line 19)) Downloading MarkupSafe-2.0.1-cp39-cp39-win_amd64.whl (14 kB) Collecting pyyaml>=6.0 (from libsast>=1.5.1->-r requirements.txt (line 20)) Downloading 
PyYAML-6.0-cp39-cp39-win_amd64.whl (151 kB) ---------------------------------------- 151.6/151.6 kB 3.0 MB/s eta 0:00:00 Collecting networkx>=2.2 (from androguard==3.4.0a1->-r requirements.txt (line 23)) Downloading networkx-3.1-py3-none-any.whl (2.1 MB) ---------------------------------------- 2.1/2.1 MB 3.9 MB/s eta 0:00:00 Collecting pygments>=2.3.1 (from androguard==3.4.0a1->-r requirements.txt (line 23)) Downloading Pygments-2.15.1-py3-none-any.whl (1.1 MB) ---------------------------------------- 1.1/1.1 MB 5.2 MB/s eta 0:00:00 Collecting matplotlib>=3.0.2 (from androguard==3.4.0a1->-r requirements.txt (line 23)) Downloading matplotlib-3.7.1-cp39-cp39-win_amd64.whl (7.6 MB) ---------------------------------------- 7.6/7.6 MB 5.2 MB/s eta 0:00:00 Collecting pydot>=1.4.1 (from androguard==3.4.0a1->-r requirements.txt (line 23)) Downloading pydot-1.4.2-py2.py3-none-any.whl (21 kB) Collecting ipython>=5.0.0 (from androguard==3.4.0a1->-r requirements.txt (line 23)) Downloading ipython-8.14.0-py3-none-any.whl (798 kB) ---------------------------------------- 798.7/798.7 kB 5.1 MB/s eta 0:00:00 Collecting yara-python-dex>=1.0.1 (from apkid==2.1.4->-r requirements.txt (line 24)) Downloading yara_python_dex-1.0.4-cp39-cp39-win_amd64.whl (130 kB) ---------------------------------------- 130.2/130.2 kB 8.0 MB/s eta 0:00:00 Collecting prettytable>=1.0.0 (from quark-engine==22.10.1->-r requirements.txt (line 25)) Downloading prettytable-3.7.0-py3-none-any.whl (27 kB) Collecting tqdm (from quark-engine==22.10.1->-r requirements.txt (line 25)) Downloading tqdm-4.65.0-py3-none-any.whl (77 kB) ---------------------------------------- 77.1/77.1 kB 4.5 MB/s eta 0:00:00 Collecting graphviz (from quark-engine==22.10.1->-r requirements.txt (line 25)) Downloading graphviz-0.20.1-py3-none-any.whl (47 kB) ---------------------------------------- 47.0/47.0 kB 2.3 MB/s eta 0:00:00 Collecting pandas (from quark-engine==22.10.1->-r requirements.txt (line 25)) Downloading pandas-2.0.2-cp39-cp39-win_amd64.whl (10.7 MB) ---------------------------------------- 10.7/10.7 MB 4.9 MB/s eta 0:00:00 Collecting prompt-toolkit==3.0.19 (from quark-engine==22.10.1->-r requirements.txt (line 25)) Downloading prompt_toolkit-3.0.19-py3-none-any.whl (368 kB) ---------------------------------------- 368.4/368.4 kB 5.8 MB/s eta 0:00:00 Collecting plotly (from quark-engine==22.10.1->-r requirements.txt (line 25)) Downloading plotly-5.14.1-py2.py3-none-any.whl (15.3 MB) ----------------------------- ---------- 11.3/15.3 MB 32.1 kB/s eta 0:02:06 ERROR: Exception: Traceback (most recent call last): File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\urllib3\response.py", line 438, in _error_catcher yield File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\urllib3\response.py", line 561, in read data = self._fp_read(amt) if not fp_closed else b"" File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\urllib3\response.py", line 527, in _fp_read return self._fp.read(amt) if amt is not None else self._fp.read() File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 455, in read n = self.readinto(b) File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\http\client.py", line 499, in readinto n = self.fp.readinto(b) File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\socket.py", line 704, in readinto return self._sock.recv_into(b) File 
"C:\Users\User\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 1241, in recv_into return self.read(nbytes, buffer) File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\ssl.py", line 1099, in read return self._sslobj.read(len, buffer) socket.timeout: The read operation timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\cli\base_command.py", line 169, in exc_logging_wrapper status = run_func(*args) File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\cli\req_command.py", line 248, in wrapper return func(self, options, args) File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\commands\install.py", line 377, in run requirement_set = resolver.resolve( File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 185, in resolve discovered_reqs.extend(self._resolve_one(requirement_set, req)) File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 509, in _resolve_one dist = self._get_dist_for(req_to_install) File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\resolution\legacy\resolver.py", line 462, in _get_dist_for dist = self.preparer.prepare_linked_requirement(req) File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\operations\prepare.py", line 516, in prepare_linked_requirement return self._prepare_linked_requirement(req, parallel_builds) File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\operations\prepare.py", line 587, in _prepare_linked_requirement local_file = unpack_url( File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\operations\prepare.py", line 166, in unpack_url file = get_http_url( File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\operations\prepare.py", line 107, in get_http_url from_path, content_type = download(link, temp_dir.path) File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\network\download.py", line 147, in __call__ for chunk in chunks: File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\cli\progress_bars.py", line 53, in _rich_progress_bar for chunk in iterable: File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_internal\network\utils.py", line 63, in response_chunks for chunk in response.raw.stream( File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\urllib3\response.py", line 622, in stream data = self.read(amt=amt, decode_content=decode_content) File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\urllib3\response.py", line 587, in read raise IncompleteRead(self._fp_bytes_read, self.length_remaining) File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\contextlib.py", line 135, in __exit__ self.gen.throw(type, value, traceback) File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\venv\lib\site-packages\pip\_vendor\urllib3\response.py", line 443, in _error_catcher raise 
ReadTimeoutError(self._pool, None, "Read timed out.") pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. [INSTALL] Clean Up =======================MobSF Clean Script for Windows======================= Running this script will delete the Scan database, all files uploaded and generated. C:\Users\User\Documents\Mobile-Security-Framework-MobSF\scripts Deleting all uploads Deleting all downloads Deleting Static Analyzer migrations Deleting Dynamic Analyzer migrations Deleting MobSF migrations Deleting temp and log files Deleting Scan database Deleting Secret file Deleting Previous setup files Deleting MobSF data directory: "C:\Users\User\.MobSF" Done [INSTALL] Migrating Database Traceback (most recent call last): File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\manage.py", line 12, in <module> from django.core.management import execute_from_command_line ModuleNotFoundError: No module named 'django' Traceback (most recent call last): File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\manage.py", line 12, in <module> from django.core.management import execute_from_command_line ModuleNotFoundError: No module named 'django' Traceback (most recent call last): File "C:\Users\User\Documents\Mobile-Security-Framework-MobSF\manage.py", line 12, in <module> from django.core.management import execute_from_command_line ModuleNotFoundError: No module named 'django' Download and Install wkhtmltopdf for PDF Report Generation - https://wkhtmltopdf.org/downloads.html [INSTALL] Installation Complete [ERROR] Installation Failed! Please ensure that all the requirements mentioned in documentation are installed before you run setup script. Scroll up to see any installation errors. The 'decorator==4.4.2' distribution was not found and is required by the application Ive tried git clone reinstall python to 3.9.5 put the environment variables for python
MobSF unable to install on windows 11
The error message says: The 'decorator==4.4.2' distribution was not found So have you tried simply running pip install decorator==4.4.2? According to this github issue, installing the missing package fixed a similar problem.
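The underlying cause visible in the log is a pip read timeout while downloading plotly, so the requirements (including Django) never finished installing before the migration step ran. The simplest fix is usually to re-run setup.bat on a stable connection; alternatively, a hedged sketch of finishing the install by hand from the MobSF folder (paths taken from the log, venv layout assumed):

cd C:\Users\User\Documents\Mobile-Security-Framework-MobSF
venv\Scripts\activate
pip install --timeout 120 -r requirements.txt
rem then re-run setup.bat so the Django migrations that failed at the end are applied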
76390794
76394082
I would like a function (let's call it detect_undefined) which detects variables used in a function, which are not defined inside the function without running it. Examples: import numpy as np def add(a): return np.sum(a) print(detect_undefined(add)) Output: ["np"] def add(a): import numpy as np return np.sum(a) print(detect_undefined(add)) Output: [] b = 3 def add(a): return a + b print(detect_undefined(add)) Output: ["b"] def add(a, b=3): return a + b print(detect_undefined(add)) Output: [] It is crucial that the algorithm works without running the function to be examined, i.e. I cannot do something like try ... except. ChatGPT suggested to use inspect and ast, but its suggestion didn't quite work.
How to detect variables defined outside the function (or undefined) without running the function in python
I'll provide two solutions, the first one doesn't work exactly, but I'll leave the information in case someone sees this post a few years later. Solution1 inspect.getclosurevars or func.__code__.co_names is exactly what it is for. And it should be preferred over the 2nd solution. b = 3 def add(a): return a + b print(inspect.getclosurevars(add)) # ClosureVars(nonlocals={}, globals={'b': 3}, builtins={}, unbound=set()) print(add.__code__.co_names) # ('b',) Unfortunately, there is a bug in it, so it won't work. import numpy as np def add(a): return np.sum(a) print(inspect.getclosurevars(add)) # ClosureVars(nonlocals={}, globals={'np': <module 'numpy' from ...>}, builtins={'sum': <built-in function sum>}, unbound=set()) print(add.__code__.co_names) # ('np', 'sum') # Note: sum is incorrectly detected as a built-in function. Solution2 Use the dis module to analyze the function in bytecode. Below is an example of the information that dis provides. b = 3 def add(a): return np.sum([a, b]) print(dis.dis(add)) Result: 10 0 LOAD_GLOBAL 0 (np) 2 LOAD_METHOD 1 (sum) 4 LOAD_FAST 0 (a) 6 LOAD_GLOBAL 2 (b) 8 BUILD_LIST 2 10 CALL_METHOD 1 12 RETURN_VALUE As you can see, LOAD_GLOBAL lines are what you are looking for. Now we can parse the lines and extract the names. To parse the above result, dis.get_instructions is more useful. def detect_undefined(func): for ins in dis.get_instructions(func): if inspect.iscode(ins.argval): # inner function yield from detect_undefined(ins.argval) elif ins.opname == "LOAD_GLOBAL": # global variable yield ins.argval Here is the complete code with some tests. import dis import inspect import numpy as np def detect_undefined(func): for ins in dis.get_instructions(func): if inspect.iscode(ins.argval): # inner function yield from detect_undefined(ins.argval) elif ins.opname == "LOAD_GLOBAL": # global variable yield ins.argval def add1(a): return np.sum(a) print(list(detect_undefined(add1))) # ['np'] def add2(a): import numpy as np return np.sum(a) print(list(detect_undefined(add2))) # [] b = 3 def add3(a): return a + b print(list(detect_undefined(add3))) # ['b'] def add4(a, b=3): return a + b print(list(detect_undefined(add4))) # [] x = 0 y = 1 z = 2 def complex_func(*args, **kwargs): p = 0 print(f"{x}") def inner1(t): q = y return lambda s: x + add3(z) return inner1(2) print(list(detect_undefined(complex_func))) # ['print', 'x', 'y', 'x', 'add3', 'z'] # Note that x is used twice, so it is detected twice. One last thing to mention is that it also detects global functions. This may not be the OP's desired result, but excluding functions would be quite a challenge. Keep in mind that in Python we can assign functions to variables (b = add) or redefine functions as variables (add = 0). Or even worse, what are we supposed to do with a tuple that contains both variables and functions.
76385145
76393995
I have a weird behavior happening in my code. I've created a Place entity and a SubCategory entity. Because it is a many to many relationship, I created a PlaceSubCategory entity with a composite key. Now, here's what its creation looks like in the DbContext: public DbSet<PlaceSubCategory> PlaceSubCategories { get; set; } And here's what it creates in the migration file: migrationBuilder.CreateTable( name: "PlaceSubCategories", columns: table => new { PlaceId = table.Column<Guid>(type: "uniqueidentifier", nullable: false), SubCategoryId = table.Column<Guid>(type: "uniqueidentifier", nullable: false), CreatedDateTime = table.Column<DateTime>(type: "datetime2", nullable: false, defaultValueSql: "getutcdate()") }, constraints: table => { table.PrimaryKey("PK_PlaceSubCategories", x => new { x.PlaceId, x.SubCategoryId }); table.ForeignKey( name: "FK_PlaceSubCategories_Places_PlaceId", column: x => x.PlaceId, principalTable: "Places", principalColumn: "PlaceId"); table.ForeignKey( name: "FK_PlaceSubCategories_SubCategories_SubCategoryId", column: x => x.SubCategoryId, principalTable: "SubCategories", principalColumn: "SubCategoryId"); }); migrationBuilder.CreateTable( name: "PlaceSubCategory", columns: table => new { PlacesPlaceId = table.Column<Guid>(type: "uniqueidentifier", nullable: false), SubCategoriesSubCategoryId = table.Column<Guid>(type: "uniqueidentifier", nullable: false) }, constraints: table => { table.PrimaryKey("PK_PlaceSubCategory", x => new { x.PlacesPlaceId, x.SubCategoriesSubCategoryId }); table.ForeignKey( name: "FK_PlaceSubCategory_Places_PlacesPlaceId", column: x => x.PlacesPlaceId, principalTable: "Places", principalColumn: "PlaceId", onDelete: ReferentialAction.Cascade); table.ForeignKey( name: "FK_PlaceSubCategory_SubCategories_SubCategoriesSubCategoryId", column: x => x.SubCategoriesSubCategoryId, principalTable: "SubCategories", principalColumn: "SubCategoryId", onDelete: ReferentialAction.Cascade); }); It creates the table twice, which I thought was the default behaviour in EF Core to create a background linking table and so it shows in the migration file, but I'm not sure if that's the case. Also, here's the configuration file for PlaceSubCategory (and not PlaceSubCategories): public class PlaceSubCategoryConfiguration : IEntityTypeConfiguration<PlaceSubCategory> { public void Configure(EntityTypeBuilder<PlaceSubCategory> builder) { // define the composite key for PlaceSubCategory table builder.HasKey(x => new { x.PlaceId, x.SubCategoryId }); builder.HasOne<Place>(d => d.Place) .WithMany(d => d.SubCategoriesLink) .HasForeignKey(d => d.PlaceId) .OnDelete(DeleteBehavior.NoAction) .IsRequired(); builder.HasOne<SubCategory>(d => d.SubCategory) .WithMany(d => d.PlacesLink) .HasForeignKey(d => d.SubCategoryId) .OnDelete(DeleteBehavior.NoAction) .IsRequired(); builder.Property(p => p.CreatedDateTime) .HasConversion(AppDbContext.utcConverter) .HasDefaultValueSql("getutcdate()"); } } But it isn't taking it into account, as you can see by the delete behavior being non-existent in the PlaceSubCategories creation, but there's an automatically set delete behavior for PlaceSubCategory that's not what I set in the configuration file. And of course the configuration has been set: modelBuilder.ApplyConfiguration(new PlaceSubCategoryConfiguration());
Weird behavior in linking table creation in EF Core, adding it in the DbContext but shows two versions in the migration file
The reason was that, by creating this linking entity myself, I was duplicating what EF Core does anyway for a many-to-many relationship: it creates an implicit join entity in the background, and that is what shows up in the migration file. So my explicit entity got created and EF Core's implicit one as well, without any warning. After I deleted my entity, only EF Core's automatically created linking table remained in the migration file, which works fine and is effectively the same as the one I had created.
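If you do want to keep an explicit join entity (for example to keep the CreatedDateTime column), the usual fix is to tell EF Core that your entity is the join entity for the skip navigations, so it stops generating the implicit table. A hedged sketch reusing names from the question (it assumes Place.SubCategories and SubCategory.Places are the skip-navigation collections that triggered the extra table):

// e.g. in the Place entity configuration (EntityTypeBuilder<Place> builder)
builder
    .HasMany(p => p.SubCategories)
    .WithMany(s => s.Places)
    .UsingEntity<PlaceSubCategory>(
        j => j.HasOne(ps => ps.SubCategory)
              .WithMany(s => s.PlacesLink)
              .HasForeignKey(ps => ps.SubCategoryId)
              .OnDelete(DeleteBehavior.NoAction),
        j => j.HasOne(ps => ps.Place)
              .WithMany(p => p.SubCategoriesLink)
              .HasForeignKey(ps => ps.PlaceId)
              .OnDelete(DeleteBehavior.NoAction),
        j =>
        {
            j.HasKey(ps => new { ps.PlaceId, ps.SubCategoryId });
            j.Property(ps => ps.CreatedDateTime).HasDefaultValueSql("getutcdate()");
        });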
76391247
76394272
I am working on a React eCommerce website. The designs given to me show a flow for checking someone's credit for approval. When the user hits the "Submit Application" button, it takes them to a page that says "Reviewing Application" with a progress bar that takes about 12 seconds to load. When the bar is complete, the text says, "You're Approved" or "Sorry, you are not approved". The designs then show that the Approved screen stays for a few seconds and then they are redirected to the checkout page. My question is, does this break any accessibility rules? It feels wrong to have a user click a button and then all these actions happen and if they are not paying attention, they could miss it telling them that they are approved. The client is very keen on having a good accessibility score and wants to make sure that they don't break any rules. If this isn't allowed, can you please add a link to where it is stated more specifically? All I keep getting when Googleing this are examples of how to build a progress bar. TIA!
Accessibility Rules Check for Progress Bars
There are two WCAG checkpoints at play here. The first is dynamic content added to the page. "Reviewing application..." and then "you're approved (or not)". That falls under WCAG 4.1.3 Status Messages. Just make sure the "reviewing application" indicates that the process is running. I'm guessing you have some kind of animation during the 12 seconds or so that it takes to approve or disapprove? You could potentially use aria-busy="true" during the 12 seconds and then set it to false when done. Alternatively, you can use an aria-live region and then update the contents of that region to say "reviewing application..." and then maybe update it every 5 seconds or so to say "still reviewing...". You'll probably need aria-atomic="true" if you update the region with the same text ("still reviewing...") multiple times, otherwise the live region won't think anything changed. The second checkpoint is WCAG 2.2.1 Timing Adjustable because you are redirecting the user on your own timing, and like you said, they might miss the approval status. Or they might not have had time to read the approval page. There are 6 ways to fix 2.2.1 as noted in the guideline itself. "Extend" is the most common and you typically see this in "log off" situations where you're about to be logged off due to inactivity but are given a chance to extend your session. The same would be applied to your redirect. You can show the user the approval or denial message and then have a "you will be redirected to the checkout page in XX seconds" message with an option to extend the time limit. Personally, I think the redirection should be avoided altogether. Just tell the user they've been approved or denied and then have a call to action button (CTA) and let the user navigate to the checkout screen on their own time. Then you avoid the 2.2.1 issue.
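For the status-message half, a minimal hedged sketch of a polite live region (checkCredit, the wording and the 5-second cadence are all illustrative, not from the design):

<div id="application-status" role="status" aria-live="polite" aria-atomic="true"></div>

<script>
  const status = document.getElementById('application-status');

  function startReview() {
    status.textContent = 'Reviewing application...';
    // periodic reassurance while the ~12 second check runs
    const ticker = setInterval(() => {
      status.textContent = 'Still reviewing your application...';
    }, 5000);

    checkCredit().then((approved) => {
      clearInterval(ticker);
      status.textContent = approved ? 'You are approved.' : 'Sorry, you are not approved.';
      // no automatic redirect; offer a "Continue to checkout" button instead
    });
  }
</script>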
76394452
76394488
I have the problem is that,When i press remove button,whole list is deleted, for example,i add ABC and DEF,this 2 parts, I press one time and ABC also gone! how can i fix it? <!DOCTYPE html> <html> <head> <title>To-Do List</title> </head> <body> <h1>To-Do List</h1>` <input type="text" class="task" placeholder="Enter a task"> <button class="Add">Add Task</button> <button class="removeTask">Remove</button> <ul class="taskList"></ul> </body> <script> const taskList = document.querySelector(".taskList"); const RemoveEl = document.querySelector(".removeTask"); const addTask = document.querySelector(".Add"); addTask.addEventListener("click", (e)=>{ const taskInput = document.querySelector(".task"); const newTask = document.createElement("li"); let task = taskInput.value; if (task.trim() != "") { newTask.innerText = task; taskList.appendChild(newTask); taskInput.value = "";}; if(newTask){ RemoveEl.addEventListener("click",()=>{ taskList.remove(); }) }}); </script> </html> thank you so much!
About the remove() function: how can I delete the last element only?
The issue you're facing is that in the event handler for the Remove button, you are calling remove() on the taskList, which is the ul containing all your tasks. This will remove the entire ul element from the DOM, thereby deleting all tasks. Instead, you should remove only the last task added to the list. Here's how you can do it: <!DOCTYPE html> <html> <head> <title>To-Do List</title> </head> <body> <h1>To-Do List</h1> <input type="text" class="task" placeholder="Enter a task"> <button class="Add">Add Task</button> <button class="removeTask">Remove</button> <ul class="taskList"></ul> </body> <script> const taskList = document.querySelector(".taskList"); const RemoveEl = document.querySelector(".removeTask"); const addTask = document.querySelector(".Add"); addTask.addEventListener("click", () => { const taskInput = document.querySelector(".task"); const newTask = document.createElement("li"); let task = taskInput.value; if (task.trim() != "") { newTask.innerText = task; taskList.appendChild(newTask); taskInput.value = ""; } }); RemoveEl.addEventListener("click", () => { if (taskList.lastChild) { taskList.lastChild.remove(); } }); </script> </html> This updated script adds an event listener to the Remove button outside of the Add button's event listener. When the Remove button is clicked, it checks if there is a last child element in the task list (the last task added). If there is, it removes that element, leaving the rest of the task list intact.
76389619
76394033
Fatal error: Maximum execution time of 300 seconds exceeded on line 171. I am new to the world of Drupal. Port 80 is assigned to my ASP server, so I changed my Apache server from 80 to 8080 in httpd.conf; BTW I am using the XAMPP server. I got the error "Fatal error: Maximum execution time of 300 seconds exceeded in C:\xampp\htdocs\MyDemoWebsite\core\modules\mysql\src\Driver\Database\mysql\Connection.php on line 171" while installing the site, after the "Set up database" step of the core install. Maybe I am doing something wrong in Set Up Database. I have tried changing the port number to 8080.
How to solve Fatal error: Maximum execution time of 300 seconds exceeded in Drupal installation on line 171?
Maybe you're doing this for the first time and have confused the Apache port with the MySQL port. In XAMPP, Apache serves HTTP and HTTPS on ports 80 and 443 by default, and you can change those according to which ports are free on your PC. The database, i.e. MySQL, is a separate service whose port is normally 3306 (you can also change it if needed). So during the Drupal installation, the database port you entered as 8080 should be 3306, unless you changed it, which you can check from the XAMPP control panel. Since Apache is now running on port 8080, your site will run at localhost:8080/<folder-name>; the default Apache port 80 is omitted by the browser, but any other port has to be mentioned explicitly in the address.
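For illustration, the values entered on that installer screen end up in sites/default/settings.php; a hedged sketch of what a typical XAMPP configuration looks like there (database name and credentials are placeholders):

$databases['default']['default'] = [
  'driver'   => 'mysql',
  'database' => 'mydemowebsite',
  'username' => 'root',
  'password' => '',
  'host'     => 'localhost',
  'port'     => '3306',   // the MySQL port from the XAMPP control panel, not Apache's 8080
  'prefix'   => '',
];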
76383192
76394700
I have a D3js Plot wrapped inside a React component (v7). For example a Bar Plot with a data table and a parameter for which column to plot. On change of the plotting variable, I do not want to re-render the whole plot but instead execute a D3 transition animation to the new variable. Right now I have tried it following the stump-code here, but first I have problems getting the initial plot to render and second I really would like to understand what the correct React hook way is to achieve this… import * as React from "react"; import * as d3 from "d3"; export function BarPlot({ data, x, width, height, }: { data: DataTable; x: string; width: number; height: number; }) { const svgRef = React.useRef<SVGSVGElement>(null); const svg = d3.select(svgRef.current); const prevX = React.useRef<string>(x); if (svgRef.current === null) { svg.selectAll("*").remove(); svg .append("g") .selectAll() .data(data) // … // Normal d3 plotting code here // … } if (prevX.current !== x) { // Update the plot, animate the transition from plotting the old bars to the new bars prevX.current = x; } return <svg ref={svgRef} width={width} height={height} />; } When looking around for the correct React-way to do this, it seems useEffect is not the right choice here. I also tried to use useMemo to save the inital plot, but even then I need to manually check whether the transitionable parameters have changed… Abstract, I think the question is how to have a React component, where part of the render code is executed intially and another part only if the already rendered component has a change in one of the props.
How can I create a D3 transition animation to update a React component without re-rendering the whole plot?
Here is an example of animated bar chart using React with D3. Just add a useEffect on the SVG element ref and build the chart when ref is valid (when the component is mounted) const MAX_VALUE = 200; const BarChart = ({ data, height, width }) => { const svgRef = React.useRef(null); React.useEffect(() => { const svg = d3.select(svgRef.current); const xScale = d3.scaleBand() .domain(data.map((value, index) => index.toString())) .range([0, width]) .padding(0.1); const yScale = d3.scaleLinear() .domain([0, MAX_VALUE]) .range([height, 0]); const xAxis = d3.axisBottom(xScale) .ticks(data.length) .tickFormat((_, index) => data[index].label); svg .select("#x-axis") .style("transform", `translateY(${height}px)`) .style("font-size", '16px') .call(xAxis); const yAxis = d3.axisLeft(yScale); svg .select("#y-axis") .style("font-size", '16px') .call(yAxis) svg.selectAll('g.tick'); const bars = svg .selectAll(".bar") .data(data) .join("g") .classed("bar", true); bars.append("rect") .style("transform", "scale(1, -1)") .attr("x", (_, index) => xScale(index.toString())) .attr("y", -height) .attr("width", xScale.bandwidth()) .transition() .delay((_, index) => index * 500) .duration(1000) .attr("fill", d => d.color) .attr("height", (d) => height - yScale(d.value)); }, [data]); return ( <svg ref={svgRef} height={height} width={width} /> ); }; const data = [ {value: 50, color: '#008'}, {value: 100, color: '#00C'}, {value: 150, color: '#00f'} ]; ReactDOM.render( <BarChart data={data} width={300} height={170} />, document.getElementById("chart") ); <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/7.8.4/d3.min.js"></script> <script crossorigin src="https://unpkg.com/react@16/umd/react.development.js"></script> <script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script> <div id='chart'></div>
76390441
76394100
My understanding is that we can pass the information about which procedure should be used inside a Fortran procedure, either using an argument and declaring it to be a procedure name through a specific interface, or using an argument declared to be a procedure pointer. I do not think I have grasped all pros and cons of the two alternatives. Maybe one has some limitations not present with the other? Are there efficiency issues? Just to make clear what this question is about (if necessary), I'll briefly recall through two examples the different ways of passing information about which procedure should be used inside a procedure, through the procedure arguments. using an argument and declaring it to be a procedure name through a specific interface module pro implicit none contains function myfunction(x) result(res) ! this is an actual function with the right signature to be used as argument of the subroutine prosub real, intent(in) :: x real :: res res = 2*x end function myfunction subroutine prosub(f,x) real, intent(in) :: x procedure(myfunction) :: f ! the actual argument used in place of the dummy argument f must be the name of a function with the same signature as myfunction print*,f(x) end subroutine prosub end module pro program ppp use pro implicit none real :: x print*,' x= ?' read*,x call prosub(myfunction,x) end program ppp using an argument declared to be a procedure pointer module pro implicit none abstract interface ! defines the signature of the functions the procedure pointers argument of the subroutine prosub must have function f(x) result(res) real, intent(in) :: x real :: res end function f end interface contains function myfunction(x) result(res) ! one of the possible actual functions the procedure pointer can point to real, intent(in) :: x real :: res res = 2*x end function myfunction subroutine prosub(fp,x) real, intent(in) :: x procedure(f), pointer :: fp print*,fp(x) end subroutine prosub end module pro program ppp use pro implicit none real :: x procedure(f), pointer :: f1 f1 => myfunction print*,' x= ?' read*,x call prosub(f1,x) end program ppp
Procedure pointer vs procedure name as argument of Fortran procedures
A dummy procedure pointer, when contrasted with a non-pointer dummy procedure, may (depending on argument intent and the like): be passed in as disassociated or undefined, have its association status tested, have its association status changed, pass out information to the calling context. If these capabilities are useful to have when writing a subprogram, then there might be a need for a dummy procedure pointer. Specifying that something is a procedure pointer or non-pointer is orthogonal to how you specify the interface. There is no semantic difference between a declaration of the form procedure(some_interface) :: proc and writing out an interface body for proc that has the same characteristics as some_interface. In your first "non-pointer" program, you could have equally written: subroutine prosub(f,x) real, intent(in) :: x procedure(myfunction) :: f ! or use the abstract interface print*,f(x) end subroutine prosub and in your second "pointer" program, you could have written: subroutine prosub(fp,x) real, intent(in) :: x interface function fp(x) result(res) real, intent(in) :: x real :: res end function fp end interface pointer :: fp print*,fp(x) end subroutine prosub This choice comes down to whether you want to specify the (perhaps abstract) interface in one location, or how much you like typing.
76394304
76394489
I have a big confusion on what is meant by the general method of converting recursive functions (tail and non tail) to iterative ones using stack. I made this simple example to illustrate the point: int sigma1(int n) { if(n == 0) return 0; else return n + sigma1(n-1); } which of the following versions reflects the method correctly: Version 1 int sigma1_iter(int n) { stack* s = create_stack(n); push(s, n); int res = 0; while(!is_empty(s)) { int elm = pop(s); if(elm == 0) return res; else { res += n; push(s, n-1); } } } Version 2 int sigma1_iter(int n) { stack* s = create_stack(n); push(s, n); int res = 0; while(!is_empty(s)) { if (n > 0) { n--; push(s, n); } else { int elm = pop(s); if(elm == 0) return res; else { res += n; } } } Also, in case of the tail recursive version of sigma with an accumulator: int sigma2(int n, int res) { if (n == 0) return res; else return sigma2(n-1, n+res); } How would one convert it? I am thinking of creating a second stack for res, but how that might work?
Confusion regarding converting recursion to iteration using stack
Be sure to post a minimal, reproducible example when you ask a question. That helps everyone understand the question and generate and test potential answers. If you had created such an example, it would be evident that version 1 of your iterative solution never returns and version 2 always returns 0 making neither a good candidate for correctness. Recursion can be challenging, but I think we can walk through a general method for converting a recursive function to an iterative function for this example. Let's look at the recursive implementation for sigma1, int sigma1(int n) { if (n == 0) return 0; else return n + sigma1(n - 1); } Under the hood, calling this function generates a new stack frame for each recursive invocation (assuming the compiler doesn't optimize it away). Each of these call frames will have the value of the argument n for that frame which will decrease by one at each level of recursion. We can think about the iterative version, in the same way, storing the value of n on our explicit std::stack<int> instead of it being handled automagically by the call stack. When the recursion terminates and the call frames unwind, the local value of n is added to the result of the previous call creating a new result to be returned. In the same way, we can pop values from the explicit std::stack<int> and add them to the running total in the iterative version. The function would look something like this, int sigma1_stack(int n) { std::stack<int> stack; for (int i = n; i > 0; --i) stack.push(i); int res{}; while (not stack.empty()) { res += stack.top(); stack.pop(); } return res; } Now, in this particular scenario, you can simplify the two loops into a single iterative loop without a stack. This, of course, will not always be the case. int sigma1_iter(int n) { int res{}; for (int i = n; i > 0; --i) res += i; return res; } I took the code you posted and changed it to compile using a std::stack<int> since you did not post a complete example. I also added a correct version of iteration for both sigma1 and sigma2 and the corresponding output. Hope this is helpful. 
Sample Code #include <iostream> #include <stack> using std::cout, std::endl; int sigma1(int n) { if (n == 0) return 0; else return n + sigma1(n - 1); } int sigma1_iter_v1(int n) { std::stack<int> stack; stack.push(n); int res = 0; while (not stack.empty()) { int elm = stack.top(); stack.pop(); if (elm == 0) return res; else { res += n; stack.push(n - 1); } } return res; } int sigma1_iter_v2(int n) { std::stack<int> stack; stack.push(n); int res = 0; while (not stack.empty()) { if (n > 0) { n--; stack.push(n); } else { int elm = stack.top(); stack.pop(); if(elm == 0) return res; else { res += n; } } } return res; } int sigma1_iter(int n) { int res{}; for (int i = n; i > 0; --i) res += i; return res; } int sigma1_stack(int n) { std::stack<int> stack; for (int i = n; i > 0; --i) stack.push(i); int res{}; while (not stack.empty()) { res += stack.top(); stack.pop(); } return res; } int sigma2(int n, int res = 0) { if (n == 0) return res; else return sigma2(n - 1, n + res); } int sigma2_iter(int n) { int res{}; for (int i = n; i > 0; --i) res += i; return res; } int main(int argc, const char *argv[]) { cout << "sigma1 : " << sigma1(20) << endl; cout << "sigma1_iter : " << sigma1_iter(20) << endl; cout << "sigma1_stack : " << sigma1_stack(20) << endl; // This never returns // cout << "sigma1_iter_v1 : " << sigma1_iter_v1(20) << endl; cout << "sigma1_iter_v1 : " << "never returns" << endl; cout << "sigma1_iter_v2 : " << sigma1_iter_v2(20) << endl; cout << endl; cout << "sigma2 : " << sigma2(20) << endl; cout << "sigma2_iter : " << sigma2_iter(20) << endl; return 0; } Output sigma1 : 210 sigma1_iter : 210 sigma1_stack : 210 sigma1_iter_v1 : never returns sigma1_iter_v2 : 0 sigma2 : 210 sigma2_iter : 210
76392206
76395043
New to Django here. I have developed a minimum working django website with Postgres as database back-end and nginx/gunicorn as web server on Ubuntu linux. Currently all the files are on my laptop in ~/workspace/djangoapp/src$ in my home directory. I want to now deploy the project to GCP. Which directory, on the production server, the files would go in? It can't be my home directory on the production server. Shouldn't they go in one of the system directories like /opt?
Where to host Django project files on deployment server
If you want to deploy your project on Google Cloud Platform, you should follow the GCP guidelines. There are step-by-step guides on how to deploy and run a Django app on GCP, for example "Running Django on the App Engine standard environment". It would be easier to follow the GCP guides for your production server than to pick a filesystem directory by hand.
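For illustration, a minimal app.yaml for the App Engine standard environment might look like the sketch below (the runtime version, the djangoapp module name and the gunicorn entrypoint are assumptions — adjust them to your project). When deploying this way you do not choose a server directory yourself: gcloud app deploy uploads the files from your local project root and App Engine manages where they live. runtime: python39 # assumed runtime entrypoint: gunicorn -b :$PORT djangoapp.wsgi # assumed WSGI module handlers: - url: /static static_dir: static/ - url: /.* script: auto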
76384215
76394322
I have a deployment of Kubeflow. During creation of Jupyter Notebook, we have image name tags that are longer than that can be displayed in the Docker image name list. See the image list in the attached image. Here the names are short, but for our customer the repository URL is a long fqdn of the harbor registry URL. Is there any configuration that can increase the visible text ? Looking for a configuration to increase the visible text. Tried adding tags to the docker image, but this is only a suffix for the name.
How to increase Kubeflow Jupyter notebook image name to see the complete path
As I understand it, you'd like to display the repository prefix for your Jupyter images. For this, you can set the hideRegistry key to false in the Jupyter Web App ConfigMap. By default, the value of this key is true, which hides the image repository in the user interface. Look for this ConfigMap in the kubeflow namespace. In the Kubeflow Manifests GitHub repository it can be found in the spawner_ui_config.yaml file.
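As a rough sketch only (the exact nesting of keys can differ between Kubeflow releases, so treat the structure below as an assumption and compare it with your own spawner_ui_config.yaml), the relevant part of the config could look like: spawnerFormDefaults: image: value: kubeflownotebookswg/jupyter-scipy:v1.7.0 # example image options: - harbor.example.com/notebooks/jupyter-custom:latest # example registry-prefixed image hideRegistry: false # show the repository prefix hideVersion: false After editing the ConfigMap, restart the Jupyter Web App deployment in the kubeflow namespace (deployment name assumed, e.g. kubectl rollout restart deployment jupyter-web-app-deployment -n kubeflow) so the UI picks up the change.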
76394292
76394541
I have a Pandas DataFrame in the following format. I am trying to fill the NaN value by using the most recent non-NaN value and adding one second to the time value. For example, in this case, the program should take the most recent non-NaN value of 8:30:20 and add one second to replace the NaN value. So, the replacement value should be 8:30:21. Is there a way in Pandas to simulate this process for the entire column?
Filling NAN values in Pandas by using previous values
You can convert your data to_timedelta, ffill and add 1 second: df['col1'] = pd.to_timedelta(df['col1']) df['col1'] = df['col1'].ffill().add(df['col1'].isna()*pd.Timedelta('1s')) Output: col1 0 0 days 08:30:18 1 0 days 08:30:19 2 0 days 08:30:20 3 0 days 08:30:21 4 0 days 08:30:22 Used input: df = pd.DataFrame({'col1': ['8:30:18', '8:30:19', '8:30:20', np.nan, '8:30:22']}) converting back to strings Use a custom function: def to_str(s): h,m = s.dt.total_seconds().divmod(3600) m,s = m.divmod(60) return (h.astype(int).astype(str).str.zfill(2) +':'+ m.astype(int).astype(str).str.zfill(2) +':'+ s.astype(int).astype(str).str.zfill(2) ) df['col1'] = to_str(df['col1']) Output: col1 0 08:30:18 1 08:30:19 2 08:30:20 3 08:30:21 4 08:30:22
76383655
76395132
I have a docker-compose file version 3.3 describing a Postgres (PostGIS) database instance and a GeoServer (HTTP backend). I run them with no dependencies, but in the same docker network. Later once PostGIS has done its thing and GeoServer has done its thing as well, I configure a GeoServer datastore with the Postgres connection details via the GeoServer API using the Postgres Docker service name as host, but then when I test these connection details the test fails i.e. GeoServer cannot reach Postgres in the same Docker network. Interesting fact: the Postgres container is on reach from localhost. # this works PGCONNECT_TIMEOUT=2 psql "postgresql://adm_pg_user:1234abcd@localhost:6666/fgag_db" --command "SELECT NOW();" It's a pure TCP/IP issue as I logged into the GeoServer running container and used the command psql to see that Postgres is not on reach, see below and notice pg-db instead of localhost: # from within the GeoServer container this doesn't work PGCONNECT_TIMEOUT=2 psql "postgresql://adm_pg_user:1234abcd@pg-db:6666/fgag_db" --command "SELECT NOW();" The error message is: psql: error: connection to server at "pg-db" (172.19.0.3), port 6666 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? What's wrong with the Docker Compose YAML below? How can I implement networking checks? For completeness: I use this env var COMPOSE_PROJECT_NAME=fgag to be able to add my preferred prefix to docker services / docker network names. Maybe this is relevant to the networking issues. version: "3.3" services: pg-db: image: postgis/postgis:15-3.3-alpine restart: always ports: - "${MYPG_PORT:-5432}:5432" environment: # https://github.com/docker-library/docs/tree/master/postgres#environment-variables POSTGRES_USER: "${MYPG_USER:-adm_pg_user}" POSTGRES_PASSWORD: "${MYPG_PASSWORD:-1234abcd}" POSTGRES_DB: "${MYPG_DB:-fgag_db}" PGPASSWORD: "${MYPG_PASSWORD:-1234abcd}" MYDB_SCHEMA: "${MYDB_SCHEMA:-fgag_schema}" volumes: - ./docker-volume/pg:/var/lib/postgresql/data - ./db-sql-init.sh:/docker-entrypoint-initdb.d/db-sql-init.sh networks: - fgagnet # https://github.com/geoserver/docker/blob/master/docker-compose-demo.yml # geoserver default admin:geoserver credentials # # to connect to the running container: `docker exec -it fgag_geoserver_1 /bin/bash` # # to be able to use `psql`: # apt-get update && apt-get install postgresql-client # # docker network inspect fgag_fgagnet geoserver: image: docker.osgeo.org/geoserver:2.23.0 ports: - "${MYGS_PORT:-7777}:8080" environment: INSTALL_EXTENSIONS: "true" STABLE_EXTENSIONS: wps,csw EXTRA_JAVA_OPTS: -Xms1G -Xmx2G volumes: - ./docker-volume/gs/geoserver_data:/opt/geoserver_data/:Z - ./docker-volume/gs/additional_libs:/opt/additional_libs:Z # by mounting this we can install libs from host on startup networks: - fgagnet networks: fgagnet: driver: bridge
docker compose 3.3 cannot reach other containers
Are you sure that this command connects to the same Docker container? # this works PGCONNECT_TIMEOUT=2 psql "postgresql://adm_pg_user:1234abcd@localhost:6666/fgag_db" --command "SELECT NOW();" Are you sure that you have correctly configured the container to expose port 6666? export MYPG_PORT=6666
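One thing worth checking (an observation based on the compose file shown, where the default mapping is "${MYPG_PORT:-5432}:5432"): the ports: mapping only publishes the container port on the host, so from another container in the same Docker network you should connect to the container port 5432, not to the host port 6666. A quick test from inside the GeoServer container would be: # inside the geoserver container — use the internal port 5432 PGCONNECT_TIMEOUT=2 psql "postgresql://adm_pg_user:1234abcd@pg-db:5432/fgag_db" --command "SELECT NOW();"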
76390662
76395154
I'm trying to use the UiKit API PHPickerViewController using KMM and Compose for iOS. import androidx.compose.runtime.Composable import androidx.compose.ui.interop.LocalUIViewController import platform.PhotosUI.PHPickerConfiguration import platform.PhotosUI.PHPickerViewController import platform.PhotosUI.PHPickerViewControllerDelegateProtocol import platform.darwin.NSObject @Composable actual fun pickerController() { val uiViewController = LocalUIViewController.current val configuration = PHPickerConfiguration() val pickerController = PHPickerViewController(configuration) val pickerDelegate = object : NSObject(), PHPickerViewControllerDelegateProtocol { override fun picker(picker: PHPickerViewController, didFinishPicking: List<*>) { println("didFinishPicking: $didFinishPicking") picker.dismissViewControllerAnimated(flag = false, completion = {}) uiViewController.dismissModalViewControllerAnimated(false) } } pickerController.setDelegate(pickerDelegate) uiViewController.presentViewController(pickerController, animated = false, completion = null) } This displays the image picker: Unfortunately, when clicking on Cancel, the delegate callback is not called, and I get the following message on the console: [Picker] PHPickerViewControllerDelegate doesn't respond to picker:didFinishPicking: Is it possible to implement the callback in Kotlin? What am I missing?
Can't use PHPickerViewController delegate with KMM
Since pickerDelegate is an NSObject, its lifecycle follows Objective-C rules, not the KMM memory model. So as soon as execution leaves the composable block, the object gets released, because setDelegate takes it as a weak reference. You can fix it by storing it using remember. Also, using your function is dangerous because you will call presentViewController on each recomposition - e.g. if some of your reactive data changes on the calling side. You can update it to return an action that will present it, but store the delegate and the action itself using remember: @Composable actual fun rememberOpenPickerAction(): () -> Unit { val uiViewController = LocalUIViewController.current val pickerDelegate = remember { object : NSObject(), PHPickerViewControllerDelegateProtocol { override fun picker(picker: PHPickerViewController, didFinishPicking: List<*>) { println("didFinishPicking: $didFinishPicking") picker.dismissViewControllerAnimated(flag = false, completion = {}) } } } return remember { { val configuration = PHPickerConfiguration() val pickerController = PHPickerViewController(configuration) pickerController.setDelegate(pickerDelegate) uiViewController.presentViewController(pickerController, animated = true, completion = null) } } } Usage: Button(onClick = rememberOpenPickerAction()) { }
76388968
76394534
I'm trying to install a package via conda on an M1 mac. This package has a lot of dependencies, some of which seem to be un-satisfiable due to lack of pre-built packages in conda-forge. I know I can trigger building of packages in conda-forge by issuing a PR like shown here, but I'd prefer sending one big PR with all the packages I need built, rather than trigger building of package A, trying to install, running into dependency B, trigger building of package B, ... Can I somehow list all unmet dependencies of a conda package?
print all unmet dependencies in conda
Trival case: directly missing package First, let's note that this question only has a non-trivial answer when the package in question is noarch. A noarch designation means the package itself is already compatible with osx-arm64. But if it cannot be installed with a plain mamba install, then some non-noarch (compiled) dependency(s) must be missing. Otherwise, if the package were not noarch and does not itself have osx-arm64 builds, then requesting migration for that package would trigger both the package and all its (recursive) dependencies to be made available for osx-arm64. (Conda Forge bot is smart like that!) I'm not assuming OP has any confusion about this, but I want to get ahead of this situation for the future visitors. Now that that's out of the way let's address OP's question proper... Available package, but missing dependencies We can absolutely do this with Mamba's amazing subcommand repoquery. Let's find ourselves a concrete example! Finding an example To illustrate, I know that Conda Forge doesn't have r-terra building for osx-arm64 right now1. We can use the mamba repoquery whoneeds command to list every noarch package that needs r-terra: ## search `conda-forge` and only consider `noarch` $ mamba repoquery whoneeds -c conda-forge -p noarch r-terra ## abridged output, only showing r-base=4.2 packages Name Version Build Depends Channel ──────────────────────────────────────────────────────────────────────── r-rasterdiv 0.2_5.2 r42hc72bb7e_1 r-terra conda-forge/noarch r-rastervis 0.51.2 r42hc72bb7e_1 r-terra conda-forge/noarch r-biomod2 4.2_2 r42hc72bb7e_0 r-terra >=1.6_33 conda-forge/noarch r-biomod2 4.2_3 r42hc72bb7e_0 r-terra >=1.6_33 conda-forge/noarch r-rasterdiv 0.3.1 r42hc72bb7e_0 r-terra conda-forge/noarch r-rastervis 0.51.4 r42hc72bb7e_0 r-terra conda-forge/noarch r-rastervis 0.51.5 r42hc72bb7e_0 r-terra conda-forge/noarch r-spatialeco 2.0_0 r42hc72bb7e_0 r-terra conda-forge/noarch r-rasterdiv 0.2_5.2 r42hc72bb7e_1 r-terra conda-forge/noarch r-rastervis 0.51.2 r42hc72bb7e_1 r-terra conda-forge/noarch r-biomod2 4.2_2 r42hc72bb7e_0 r-terra >=1.6_33 conda-forge/noarch r-biomod2 4.2_3 r42hc72bb7e_0 r-terra >=1.6_33 conda-forge/noarch r-rasterdiv 0.3.1 r42hc72bb7e_0 r-terra conda-forge/noarch r-rastervis 0.51.4 r42hc72bb7e_0 r-terra conda-forge/noarch r-rastervis 0.51.5 r42hc72bb7e_0 r-terra conda-forge/noarch r-spatialeco 2.0_0 r42hc72bb7e_0 r-terra conda-forge/noarch So, all of these are theoretically compatible with osx-arm64, but they depend on the package r-terra that isn't available yet. Let's use for our example, r-spatialeco. 
Missing dependencies of r-spatialeco Above we used the whoneeds subcommand for a reverse dependency search; now we'll use the depends command for (forward) dependency search: $ mamba repoquery depends -c conda-forge -p osx-arm64 r-spatialeco Executing the query r-spatialeco conda-forge/osx-arm64 Using cache conda-forge/noarch Using cache Name Version Build Channel ───────────────────────────────────────────────────────────────────────────── r-spatialeco 2.0_0 r41hc72bb7e_0 conda-forge/noarch r-mass 7.3_53 r40h4d528fc_0 conda-forge/osx-arm64 r-cluster 2.1.0 r40h09a9d6b_4 conda-forge/osx-arm64 r-rcurl >>> NOT FOUND <<< r-readr 2.0.2 r40h8ea1354_0 conda-forge/osx-arm64 r-sf >>> NOT FOUND <<< r-mgcv 1.8_33 r40hdd02fd4_0 conda-forge/osx-arm64 r-rann 2.6.1 r40h39468a4_2 conda-forge/osx-arm64 r-envstats 2.3.1 r351_1000 conda-forge/noarch r-yaimpute >>> NOT FOUND <<< r-spdep >>> NOT FOUND <<< r-rms >>> NOT FOUND <<< r-terra >>> NOT FOUND <<< r-ks 1.14.0 r41h5d63f41_0 conda-forge/osx-arm64 r-spatstat.explore 3.0_5 r41h5d63f41_0 conda-forge/osx-arm64 r-base 4.1.3 hc39b4fc_7 conda-forge/osx-arm64 r-spatialpack >>> NOT FOUND <<< r-spatstat.geom 3.2_1 r42h21dc0da_0 conda-forge/osx-arm64 And there you have it: the r-spatialeco package is missing seven packages that need to be migrated to osx-arm64, as indicated by the >>> NOT FOUND <<< string. [1]: I know this because I've taken a stab at getting it migrated multiple times and have yet to succeed. :/
76394525
76394579
I have two arrays Arr1 = [1,1,1,2,2,2,3,3] and Arr2 =[1,1,2,1] Comparing both arrays should return True as there are same occurrences of no. 1. However If Arr2 = [1,1,2] it should return false as the no. Of occurrences of 1 or 2 don't match with the no. Of occurrences of 1 and 2 in Arr1 Even Arr2 = [1,1,2,3,1] should return True. Thanks in advance! Cheers I tried this but doesn't work for other instances. function allElementsPresent(first, second) { return second.every((element) => first.includes(element)); }
Count exact occurrences of number in array and return True or False
I believe I understand what you want to accomplish. You want to see if the number of occurrences in the second array matches the first. If that's the case, I've used this answer as a basis function allElementsPresent(first, second, matchAll = false) { if (first.length > 0 && second.length === 0) return false; var counts1st = {}; var counts2nd = {}; for (var num of first) { counts1st[num] = counts1st[num] ? counts1st[num] + 1 : 1; } for (var num of second) { counts2nd[num] = counts2nd[num] ? counts2nd[num] + 1 : 1; } for (var count in counts2nd) { if (matchAll && (!counts1st[count] || counts1st[count] !== counts2nd[count])) return false; if (!matchAll && (counts1st[count] && counts1st[count] === counts2nd[count])) return true; } return matchAll ? true : false; }
76391948
76395205
I have an API protected by IdentityServer with an associated allowed scope. I have two Identity Server clients with permission to access that allowed scope - one accepts client_credentials (for machine-machine operations), and the other accepts authorization_code (for user-machine operations). Within the API itself, how can I determine whether a given client has been authorised by client_credentials or by authorization_code? I can find a few references to the "gty" claim but this is not included in tokens generated by identity server. Is there a way to force IdentityServer to include this claim, or is there some other convention for how to identify whether a request originated from a machine client, or from a user?
IdentityServer - how to get grant_type from within a Protected API?
You can have different client IDs and client definitions for the different use cases (authorization code flow vs. client credentials flow). Then, in the client definition for each one, you can add client claims that will be included in the access token regardless of the user. See https://docs.duendesoftware.com/identityserver/v6/reference/models/client/#token The settings there that are related to this: Claims Allows setting claims for the client (will be included in the access token). AlwaysSendClientClaims If set, the client claims will be sent for every flow. If not, only for the client credentials flow (default is false). AlwaysIncludeUserClaimsInIdToken When requesting both an id token and access token, should the user claims always be added to the id token instead of requiring the client to use the userinfo endpoint. Default is false. ClientClaimsPrefix If set, the prefix client claim types will be prefixed with. Defaults to client_. The intent is to make sure they don't accidentally collide with user claims.
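As a sketch (the client names, scope name and claim values here are invented for illustration), the two client definitions could carry a claim that tells the API how the token was obtained: new Client { ClientId = "machine-client", AllowedGrantTypes = GrantTypes.ClientCredentials, AllowedScopes = { "myapi" }, Claims = { new ClientClaim("origin", "machine") } }, new Client { ClientId = "interactive-client", AllowedGrantTypes = GrantTypes.Code, AllowedScopes = { "myapi" }, AlwaysSendClientClaims = true, // needed so client claims are also sent outside client credentials flow Claims = { new ClientClaim("origin", "user") } } The protected API can then read the claim as client_origin (because of the default ClientClaimsPrefix) instead of relying on a gty claim.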
76390032
76394730
I have Reactive Component which passes data from one screen to another..., trying to call second component via Navigate(name,params) method. but it gives an error saying "undefined method is not a method". Copying component code below. Guid me to clear the error. import React from 'react'; import { StyleSheet, Text, View, Pressable, } from 'react-native'; import { NavigationContainer } from '@react-navigation/native'; export default function ScreenA(navigation:any) { const onPressHandler = () => { navigation.navigate("Screen_B",{ ItemName: 'Item from ScreenA', ItemId: 12 }); // navigation.navigate("Screen_B"); }
React Native Component + Navigation with Parameters not working error undefined is not a method
Destructure navigation from function parameter: import React from 'react'; import { StyleSheet, Text, View, Pressable, } from 'react-native'; import { NavigationContainer } from '@react-navigation/native'; export default function ScreenA({navigation}) { // wrap navigation with curly brackets const onPressHandler = () => { navigation.navigate("Screen_B",{ ItemName: 'Item from ScreenA', ItemId: 12 }); }
76388515
76394624
I have a Python function that uses the HuggingFace datasets library to load a private dataset from HuggingFace Hub. I want to write a unit test for that function, but it seems pytest-mock does not work for some reason. The real function keeps getting called, even if the mock structure should be correct. This is the main function: def load_data(token: str): dataset = load_dataset("MYORG/MYDATASET", use_auth_token=token, split="train") return dataset And this is the test function I wrote: def test_data(mocker): # Mocked data token_test = "test_token" mocked_dataset = [ {'image': [[0.5, 0.3], [0.7, 0.9]], 'timestamp': datetime.date(2023, 1, 1)}, ] mocker.patch('datasets.load_dataset', return_value=mocked_dataset) result = load_data(token_test) assert len(result) == 1 Could it be that there are some "unmockable" libraries which do stuff under the hood and make their functions impossible to stub?
Why mocking HuggingFace datasets library does not work?
The official Python documentation has a section on this: "Where to patch". If your module is called my_module, and it does from datasets import load_dataset, then you should patch 'my_module.load_dataset' so that your module is using the mock. Patching 'datasets.load_dataset' might be too late: if the import in your module happened before that instruction, the patch has no effect.
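A minimal sketch of the test with the patch target adjusted (the module name my_module is an assumption — use whatever module actually defines load_data): def test_data(mocker): token_test = "test_token" mocked_dataset = [ {'image': [[0.5, 0.3], [0.7, 0.9]], 'timestamp': datetime.date(2023, 1, 1)}, ] # patch the name in the namespace where load_data looks it up mocker.patch('my_module.load_dataset', return_value=mocked_dataset) result = load_data(token_test) assert len(result) == 1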
76390333
76394804
I have an application that displays images using 8-bit ANSI-art. This is handy for viewing images on a remote machine via SSH. When the viewer starts, it replaces all the 8-bit palette values from 16-255 with my standard values. This runs fine on the Mac terminal, but the xterm display flickers because it is trying to refresh while it is still executing these 240 commands. I get the same thing when the program exits, and I have to reset the colours. Here is the code for resetting the palette when the program exits. for (int N=16; N<256; ++N) printf("\e]104;%d\a", N); I can see the colours changing in the terminal as it runs. I have not found an escape code that resets the whole palette. Sending Ctrl-C resets some things but not the palette. All the examples I have found use a loop like this. It would be nice to find a reset escape sequence if there is one, but I will also need a way to pause the screen refresh until the commands have finished. I tried running all the escape sequences into one big string and submitting all of them at once with puts(). That did not do the trick. I hoped to find an escape sequence that pauses the screen refresh, and another that un-pauses it. It seems like something that ought to be there, and I am not seeing it. Or, if we know for sure that no such escape sequences exist, then I can stop looking. There may be other ways of fixing this other than escape codes. However, the application is handiest when working remotely, so I want it to work on whatever terminal is running at the time, not just xterm. I can ignore the flickering if I have to. PS: I have a workaround. Xterm supports 24-bit Truecolor. I can use that and not change the 8-bit palette.
Is there an escape code to pause the terminal screen refresh?
Not what the title asks for, but enabling the alternative screen buffer is a fix. Print "\e[?1049h" when the program starts to enable it, and "\e[?1049l" to disable it. The Wikipedia entry says little more than that, so it is easy to ignore. Enabling the alternative screen buffer means the application works with a terminal-sized window of characters. This is probably what an interactive terminal program needs, rather than adding all the screen refreshes to the current scrolling terminal buffer. Disabling it restores the terminal to its previous state.
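A small sketch of how this might wrap the program (the escape sequences are the ones above, written with \x1b since \e is a compiler extension; the palette reset loop is taken from the question): #include <stdio.h> int main(void) { printf("\x1b[?1049h"); /* switch to the alternate screen buffer */ fflush(stdout); /* ... change the palette and draw the ANSI-art image here ... */ for (int N = 16; N < 256; ++N) printf("\x1b]104;%d\x07", N); /* reset palette entries, as in the question */ printf("\x1b[?1049l"); /* back to the normal buffer; the previous screen contents are restored */ fflush(stdout); return 0; }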
76391329
76395290
I am working on a NER problem—hence the BIO tagging—with a very small dataset, and I am manually splitting it into train, validation, and test data. Thus, to make the first of two splits, I need to sort lists of tuples into two lists based on the count of 'B' in data. I am shuffling data, so the output varies, but it typically yeilds what I provide below. data can be split such that a total count of 10 instances of 'B' is possible in bin_1. So it's not that data won't split this way given the way B is distributed through the lists of tuples. How do I get the split that I am after? For this example, and the desired split, I want the total count of 'B' in bin_1 to be 10, but it's always over. Assistance would be much appreciated. Data: data = [[('a', 'B'), ('b', 'I'), ('c', 'O'), ('d', 'B'), ('e', 'I'), ('f', 'O')], [('g', 'O'), ('h', 'O')], [('i', 'B'), ('j', 'I'), ('k', 'O')], [('l', 'B'), ('m', ''), ('n', 'B'), ('o', 'O')], [('p', 'O'), ('q', 'O'), ('r', 'O')], [('s', 'B'), ('t', 'O')], [('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O')], [('z', 'B')], [('a', 'B'), ('b', 'I'), ('c', 'O')], [('d', 'O')], [('e', 'O'), ('f', 'O')], [('g', 'O'), ('h', 'B')], [('i', 'B'), ('j', 'I')], [('k', 'O')], [('l', 'O'), ('m', 'O'), ('n', 'O'), ('o', 'O')], [('p', 'O'), ('q', 'O'), ('r', 'O'), ('s', 'B'), ('t', 'O')], [('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O'), ('z', 'B')]] Current code: split = 0.7 d = [] total_B = 0 bin_1 = [] bin_2 = [] counter = 0 random.shuffle(data) for f in data: cnt = {} for _, label in f: if label in cnt: cnt[label] += 1 else: cnt[label] = 1 d.append(cnt) for f in d: total_B += f.get('B', 0) for f,g in zip(d, data): if f.get('B') is not None: if counter <= round(total_B * split): counter += f.get('B') bin_1.append(g) else: bin_2.append(g) print(round(total_B * split)) print(sum(1 for sublist in bin_1 for tuple_item in sublist if tuple_item[1] == 'B')) print(sum(1 for sublist in bin_2 for tuple_item in sublist if tuple_item[1] == 'B')) Current output: Total count of 'B' in 'bin_1' should be: 10 Total count of 'B' in 'bin_1' is': 11 Total count of 'B' in 'bin_2' is': 3 bin_1, bin_2 >>> [[('a', 'B'), ('b', 'I'), ('c', 'O')], [('g', 'O'), ('h', 'B')], [('i', 'B'), ('j', 'I'), ('k', 'O')], [('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O'), ('z', 'B')], [('s', 'B'), ('t', 'O')], [('l', 'B'), ('m', ''), ('n', 'B'), ('o', 'O')], [('a', 'B'), ('b', 'I'), ('c', 'O'), ('d', 'B'), ('e', 'I'), ('f', 'O')], [('i', 'B'), ('j', 'I')]], [[('u', 'O'), ('v', 'B'), ('w', 'I'), ('x', 'O'), ('y', 'O')], [('z', 'B')], [('p', 'O'), ('q', 'O'), ('r', 'O'), ('s', 'B'), ('t', 'O')]] Desired output: Total count of 'B' in 'bin_1' should be: 10 Total count of 'B' in 'bin_1' is': 10 Total count of 'B' in 'bin_2' is': 4
How do you sort lists of tuples based on the count of a specific value?
One possible solution is to get the distribution of 'B' among the indexes of your data. Assuming data was shuffled already, make use of a dict comprehension (https://peps.python.org/pep-0274/) and enumerate (https://python-reference.readthedocs.io/en/latest/docs/functions/enumerate.html): def get_distribution(data): return {i: len([x for x in t if (x[1] == 'B')]) for i, t in enumerate(data) } For data you get: distribution = get_distribution(data) print(distribution) #=> {0: 2, 1: 0, 2: 1, 3: 2, 4: 0, 5: 1, 6: 1, 7: 1, 8: 1, 9: 0, 10: 0, 11: 1, 12: 1, 13: 0, 14: 0, 15: 1, 16: 2} Now, iterate over distribution and fill your bins. You can develop a more complex algorithm; this is the simplest: bin_1 = [] bin_2 = [] ratio = 0.7 count = 0 total = sum(distribution.values()) for k, v in distribution.items(): if count/total < ratio: bin_1.append(data[k]) count += v else: bin_2.append(data[k]) So, check: print(bin_1) print(bin_2) distr_bin_1 = get_distribution(bin_1) distr_bin_2 = get_distribution(bin_2) print(distr_bin_1) print(distr_bin_2) count_bin_1 = sum(distr_bin_1.values()) count_bin_2 = sum(distr_bin_2.values()) print(count_bin_1/(count_bin_1 + count_bin_2)) # actual ratio
76391771
76395312
I imported this table from Excel into Jupyter. I was using pandas for this. But now I want to place markers for the cities from the table below on my map, with popups showing the data from columns A1, A2, A3, and I don't know how to do this. Example: I press on a particular marker and after this a popup appears with the data from columns A1, A2, A3. Image 1: Image 2: Can you tell me the corresponding code for this operation?
How to add the data of this table in markers (pop-up) on map?
To display the pop-up in tabular form at each location, a loop process is performed on each row of the data frame, converting one row from a series to a data frame, and then transposing it. I also adjust the width of the data frame. import pandas as pd import io import folium data = ''' Name A1 A2 A3 LAT LON "Malibu Beach" 0.63 0.55 0.95 34.03194 -118.698387 Commerce 0.17 0.45 0.25 34.00031 -118.159770 "Long Beach" 0.19 0.21 0.09 33.77171 -118.181310 ''' df = pd.read_csv(io.StringIO(data), delim_whitespace=True) import folium m = folium.Map([df.LAT.mean(), df.LON.mean()], zoom_start=8) for i in range(len(df)): html = df.loc[i,['Name','A1','A2','A3']].to_frame().T.to_html( classes="table table-striped table-hover table-condensed table-responsive" ) popup = folium.Popup(html, max_width=500) folium.Marker([df.iloc[i]['LAT'], df.iloc[i]['LON']], popup=popup).add_to(m) m
76391309
76395320
Context I'm converting a PNG sequence into a video using FFMPEG. The images are semi-transparent portraits where the background has been removed digitally. Issue The edge pixels of the subject are stretched all the way to the frame border, creating a fully opaque video. Cause Analysis The process worked fine in the previous workflow using rembg from command line however, since I started using rembg via python script using alpha_matting to obtain higher quality results, the resulting video has these issues. The issue is present in both webm format (target) and mp4 (used for testing). Command Used Command used for webm is: ffmpeg -thread_queue_size 64 -framerate 30 -i <png sequence location> -c:v libvpx -b:v 0 -crf 18 -pix_fmt yuva420p -auto-alt-ref 0 -c:a libvorbis <png output> Throubleshooting Steps Taken PNG Visual inspection The PNG images have a fully transparent background as desired. PNG Alpha Measurement I have created a couple of python scripts to look at alpha level in pixels and confirmed that there is no subtle alpha level in the background pixels Exported MP4 with AE Using the native AE renderer the resulting MP4/H.265 has a black background, so not showing the stretched pixel issue Image of the Issue Sample PNG Image from sequence Code Context rembg call via API using alpha_matting seems to generate a premultiplied alpha which uses non black pixels for 0 alpha pixels. remove(input_data, alpha_matting=True, alpha_matting_foreground_threshold=250, alpha_matting_background_threshold=250, alpha_matting_erode_size=12) A test using a rough RGB reset of 0-alpha pixels confirms that the images are being played with their RGB value ignoring Alpha. def reset_alpha_pixels(img): # Open the image file # Process each pixel data = list(img.getdata()) new_data = [] for item in data: if item[3] == 0: new_data.append((0, 0, 0, 0)) else: new_data.append((item[0], item[1], item[2], item[3])) # Replace the alpha value but keep the RGB # Update the image data img.putdata(new_data) return img Updates Added python context to make the question more relevant within SO scope.
ffmpeg - stretched pixel issue
The issue is related to the video player. Most video players don't support transparency and ignore the alpha (transparency) channel. The video player displays the rgb content of the background even if the background is supposed to be hidden (background pixels are fully transparent according to their alpha value). Apparently, the rembg output background is not filled with solid black or white, but contains the stretched effect. When opening the PNG image, and when playing the video in the Chrome browser for example, the background is transparent (RGB values are hidden), and we can't see the "stretched effect". Solving the issue using FFMPEG is challenging. We had better fix the issue in the Python code after applying rembg. To fix the issue, we may select a solid background color like (200, 200, 200) gray background, and apply alpha compositing between RGB channels and the background. Extract RGB channels: foreground_rgb = image_after_rembg[:, :, 0:3] # Extract RGB channels. Extract alpha (transparency) channel and convert from range [0, 255] to [0, 1]: alpha = image_after_rembg[:, :, 3].astype(np.float32) / 255 # Extract alpha (transparency) channel and convert from range [0, 255] to [0, 1]. alpha = alpha[..., np.newaxis] # Add axis - new alpha shape is (1024, 1024, 1). We need it for scaling 3D rgb by 2D alpha channel. Set background RGB color to light gray color (for example): background_rgb = np.full_like(foreground_rgb, (200, 200, 200)) # Set background RGB color to light gray color (for example). Apply "alpha compositing" of rgb and background_rgb: composed_rgb = foreground_rgb.astype(np.float32) * alpha + background_rgb.astype(np.float32) * (1 -alpha) composed_rgb = composed_rgb.round().astype(np.uint8) # Convert to uint8 with rounding. Add the original alpha channel to composed_rgb: composed_rgba = np.dstack((composed_rgb, alpha_ch)) Complete Python code sample: from PIL import Image import numpy as np #from rembg import remove #image_file_before_rembg = 'input.png' image_file_after_rembg = 'frame-00001.png' # Assume code for removing background looks as follows: #image_before_rembg = Image.open(image_file_before_rembg) #image_after_rembg = remove(image_before_rembg) #image_after_rembg.save(image_file_after_rembg) image_after_rembg = Image.open(image_file_after_rembg) # Skip background removing, and read the result from a file. image_after_rembg = np.array(image_after_rembg) # Convert PIL to NumPy array. foreground_rgb = image_after_rembg[:, :, 0:3] # Extract RGB channels. alpha_ch = image_after_rembg[:, :, 3] # Extract alpha (transparency) channel alpha = alpha_ch.astype(np.float32) / 255 # Convert alpha from range [0, 255] to [0, 1]. alpha = alpha[..., np.newaxis] # Add axis - new alpha shape is (1024, 1024, 1). We need it for scaling 3D rgb by 2D alpha channel. background_rgb = np.full_like(foreground_rgb, (200, 200, 200)) # Set background RGB color to light gray color (for example). # Apply "alpha compositing" of rgb and background_rgb composed_rgb = foreground_rgb.astype(np.float32) * alpha + background_rgb.astype(np.float32) * (1 -alpha) composed_rgb = composed_rgb.round().astype(np.uint8) # Convert to uint8 with rounding.
composed_rgba = np.dstack((composed_rgb, alpha_ch)) # Add the original alpha channel to composed_rgb Image.fromarray(composed_rgba).save('new_frame-00001.png') # Save the RGBA output image to PNG file Executing FFmpeg: ffmpeg -y -framerate 30 -loop 1 -t 5 -i new_frame-00001.png -vf "format=rgba" -c:v libvpx -crf 18 -pix_fmt yuva420p -auto-alt-ref 0 out.webm When playing with Chrome browser, the background is transparent. When playing with VLC Player, the background is light gray: Using FFmpeg CLI, we have to use alphaextract, overlay and alphamerge filters. Example (5 seconds at 3fps for testing): ffmpeg -y -framerate 3 -loop 1 -i frame-00001.png -filter_complex "color=white:r=3[bg];[0:v]format=rgba,split=2[va][vb];[vb]alphaextract[alpha];[bg][va]scale2ref[bg0][v0];[bg0][v0]overlay=shortest=1,format=rgb24[rgb];[rgb][alpha]alphamerge" -c:v libvpx -crf 18 -pix_fmt yuva420p -auto-alt-ref 0 -t 5 out.webm
76389179
76395808
i have an excel sheet the has column A which the search value will be in , and it should retrieve results from column B,the code should whenever I enter a value in textbox (txtreg) get results in Listbox (txtledglist) which might be 1 result or more up to 6 . the code that I have is this: whenever I type the search value that has just 1 result brings it fine, but when it has multiple results it gets it but takes over 5 min and somtimes excel crashes which is really unsuall. or when I delete the search value to try and enter a new one it also craches , when I check the VBA I see that the code keeps running which is causing the excel to crash. any ideas to make the code simpler or what am I doing wrong? thanks. Private Sub txtreg_Change() Dim wb As Workbook Dim ws As Worksheet Dim lookupValue As String Dim results() As Variant Dim rng As Range Dim cell As Range Dim index As Long Dim count As Long Set wb = ThisWorkbook Set ws = wb.Sheets("L 403") lookupValue = txtreg.Value txtledglist.Clear Set rng = ws.Range("A:B") On Error Resume Next Set cell = rng.Columns(1).Find(What:=lookupValue, LookIn:=xlValues, LookAt:=xlWhole) On Error GoTo 0 If Not cell Is Nothing Then count = 0 Do count = count + 1 ' Find the next match Set cell = rng.Columns(1).FindNext(cell) Loop While Not cell Is Nothing And cell.Address <> rng.Columns(1).Find(What:=lookupValue, After:=cell, LookIn:=xlValues, LookAt:=xlWhole).Address ReDim results(1 To count) Set cell = rng.Columns(1).Find(What:=lookupValue, LookIn:=xlValues, LookAt:=xlWhole) index = 1 Do results(index) = rng.Columns(2).Cells(cell.Row - rng.Cells(1).Row + 1).Value ' Adjusting for header row index = index + 1 Set cell = rng.Columns(1).FindNext(cell) Loop While Not cell Is Nothing And cell.Address <> rng.Columns(1).Find(What:=lookupValue, After:=cell, LookIn:=xlValues, LookAt:=xlWhole).Address txtledglist.List = results End If End Sub
find values on a user form from text box and results shown on a List box
First of all, you should save the first cell you found, so that you don't have to call .Find again, which would restart the search. Also, you don't need to compare against Nothing in the loop condition. Second, I don't clearly understand the line with the comment "Adjusting for header row". Anyway, there is no adjustment necessary: the two cells are in the same row. The corrected code: Private Sub txtreg_Change() Dim lookupValue As String: lookupValue = txtreg.Value Dim wb As Workbook: Set wb = ThisWorkbook Dim ws As Worksheet: Set ws = wb.Sheets("L 403") txtledglist.Clear Dim rng As Range: Set rng = ws.Range("A:B") On Error Resume Next Dim firstCell As Range: Set firstCell = rng.Columns(1).Find(What:=lookupValue, LookIn:=xlValues, LookAt:=xlWhole) On Error GoTo 0 If Not firstCell Is Nothing Then Dim cell As Range: Set cell = firstCell Dim count As Long: count = 0 Do count = count + 1 Set cell = rng.Columns(1).FindNext(cell) Loop While cell.Address <> firstCell.Address Dim results() As Variant: ReDim results(1 To count) Set cell = rng.Columns(1).Find(What:=lookupValue, LookIn:=xlValues, LookAt:=xlWhole) Dim index As Long: index = 1 Do results(index) = rng.Cells(cell.Row, 2).Value index = index + 1 Set cell = rng.Columns(1).FindNext(cell) Loop While cell.Address <> firstCell.Address txtledglist.List = results End If End Sub I don't exactly know your whole application but may be it is worth avoiding .Find and .FindNext.
76391148
76395439
i want to create a navigation bar for my website that includes some links and a company logo upon it. The links should have customize spacing in them and means some at first and some at last of right edge . i also want to include an transformation that when the links are hovered the font size of the links increases without affecting or shifting the nearby links or content. i am having a problem as the font increases the neighbours links shifts themselves to maintain margin gap . What should i change or add in my code to do so ? i basically tried to use hover subclass of link 'a'. it worked but not perfectly it shift the neighbour links . i am using margin left and right for each link and assigning unique margin for each. when i hover on a link each of the other links shifts themselves to maintain margin respectivley . i think is it good to use margin property in such cases or i should use float if yes then how to align them at particular distances. i am providing an online editor link of my code at end . Or here is some part of my css code body { font-family: Arial, sans-serif; line-height: 1.5; background-color: #f7f7f7; margin: 0; } header { background-color: black; } .navigation-bar { width: 100%; height: 76px; display: flex; padding: 10px; align-items: center; } .logo img { position: relative; left: 20%; height: 80px; width: auto; } .navigation-links { display: flex; list-style: none; margin: 0; padding: 0; } .navigation-links li { display: inline-block; margin-left: 90px; margin-right: 0px; } .navigation-links li a { font-size: 25px; color: white; text-decoration: none; text-align: center; } .navigation-links li a:hover { font-size: 30px; } /* join now class */ .navigation-links li a.special1 { font-size: 25px; font-weight: bold; color: white; margin-left: 60px; margin-right: 0px; text-decoration: none; border: 2px solid rgb(226, 19, 54); border-radius: 50px; padding: 15px 10px; background-color: rgb(226, 19, 54); } /* login class */ .navigation-links li a.special2 { font-size: 25px; font-weight: bold; color: white; margin-left: 2px; margin-right: 0px; text-decoration: none; border: 2px solid white; border-radius: 50px; padding: 15px 40px; background-color: black; } /*resposive nature*/ @media screen and (max-width: 768px) { .navigation-links { display: none; } } <!DOCTYPE html> <html lang="en"> <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Lato"> <head> <title>Navigation Bar Example</title> <link rel="stylesheet" type="text/css" href="styles2.css"> </head> <body> <header> <div> <nav class="navigation-bar"> <div class="logo"> <img src="rocket-g9cbacc798_1280.png" alt="Company Logo" > </div> <ul class="navigation-links"> <li><a href="#">Home</a></li> <li><a href="#">Projects</a></li> <li><a href="#">Services</a></li> <li><a href="#">About</a></li> <li ><a class="special1"href="#">JOIN NOW</a></li> <li ><a class="special2" href="#">LOG IN</a></li> </ul> </nav> </div> </header> <!-- Rest of the content --> </body> </html> code link - https://codepen.io/Divyansh-Sharma-the-flexboxer/pen/JjmgJVy
How to avoid links shifting on hover while increasing font size in a navigation bar?
In the example below, there many ascetic changes which are optional. The following CSS is required: li { /* Start vertically and horizontally center of `<li>` when transforming */ transform-origin: center center; /* Original state is at normal size */ transform: scale(1.0); /* When state changes, stretch the duration by 0.7 seconds in a easing pattern */ transition: 0.7s ease /* ✥ */; } /* When the user hovers over a <li>... */ li:hover { /* ...take it out of the normal "flow" of the document... */ position: relative; /* ...give it a higher position on the z-axis... */ z-index: 1; /* ...increase it's size by 20% */ transform: scale(1.2) /* ✥ */; } /* ✥ Values can vary according to preference */ Initially each <li> is inert but has instructions when it is hovered over. When a <li> is hovered over it is out of the normal "flow" of the document and it's size will not interfere with any "static" elements (any element that doesn't have position: relative/absolute/fixed/sticky). Note: Please review the example in Full Page mode, viewing in the iframe doesn't render perfectly (links are too small). :root { margin: 0; font: 5vmin/1.15 Lato; } body { min-height: 100vh; margin: 0; background-color: #f7f7f7; } header { background-color: black; } nav { display: flex; align-items: center; width: 100%; height: clamp(3ex 80px 10vh); } nav img { display: inline-block; width: 20vw; height: auto; margin-right: 1rem; } menu { display: flex; flex-flow: row nowrap; align-items: center; list-style: none; margin: 0; padding: 0; } menu li { margin-right: 1.5rem; text-align: center; transform-origin: center center; transform: scale(1.0); transition: 0.7s ease; } menu li a { font-size: clamp(5rem 8vw 10rem); color: white; text-decoration: none; } menu li:hover { position: relative; z-index: 1; transform: scale(1.2); } .btn { min-width: 3rem; margin-right: 0.75rem; padding: 0.25rem 0.5rem; border: 2px solid rgb(226, 19, 54); border-radius: 50px; font-weight: bold; font-variant: small-caps; } .join { background-color: rgb(226, 19, 54); } .login { background-color: black; } @media screen and (max-width: 300px) { menu { display: none; } <!DOCTYPE html> <html lang="en"> <head> <title>Navigation Bar Example</title> <link href="https://fonts.googleapis.com/css?family=Lato" rel="stylesheet"> </head> <body> <header> <nav> <img src="https://www.clipartmax.com/png/middle/31-316935_universe-rocket-icon-svg.png " alt="Company Logo"> <menu> <li><a href="#">Home</a></li> <li><a href="#">Projects</a></li> <li><a href="#">Services</a></li> <li><a href="#">About</a></li> <li class="btn join"><a href="#">Join Now</a></li> <li class="btn login"><a href="#">Log In</a></li> </menu> </nav> </header> <!-- Rest of the content --> </body> </html>
76394497
76394646
This is my activity: SearchView on top, RecyclerView on middle, and a Button at bottom <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@drawable/app_background" tools:context=".ui.home.customer.CustomerListActivity"> <androidx.appcompat.widget.SearchView android:background="@color/white" android:layout_marginLeft="20dp" android:layout_marginRight="20dp" android:layout_marginTop="20dp" android:layout_width="match_parent" android:layout_height="wrap_content" android:id="@+id/searchCustomer" /> <androidx.recyclerview.widget.RecyclerView android:layout_marginLeft="20dp" android:clipToPadding="false" android:layout_marginRight="20dp" android:layout_marginTop="20dp" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@id/searchCustomer" tools:listitem="@layout/row_item_customer" android:id="@+id/rvCustomers" /> <com.google.android.material.button.MaterialButton android:id="@+id/btnAddCustomer" android:layout_marginLeft="20dp" android:layout_marginRight="20dp" android:layout_alignParentBottom="true" android:layout_marginBottom="20dp" android:textColor="@color/text_blue_1" app:backgroundTint="@color/white" android:layout_width="match_parent" android:layout_height="53dp" android:text="Add Customer" /> </RelativeLayout> Turns out it doesn't work as I expected, because the RecyclerView overlaps/crosses the Button. I want the RecyclerView to "stay" on top of the Botton. How to fix this?
RelativeLayout problem: how to set RecyclerView not to overlap a Button on bottom
Just add the following attribute to the RecyclerView: android:layout_above="@id/btnAddCustomer"
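For clarity, a sketch of how that attribute would sit in the layout (all other attributes are unchanged from the question): <androidx.recyclerview.widget.RecyclerView android:id="@+id/rvCustomers" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_below="@id/searchCustomer" android:layout_above="@id/btnAddCustomer" android:layout_marginLeft="20dp" android:layout_marginRight="20dp" android:layout_marginTop="20dp" android:clipToPadding="false" tools:listitem="@layout/row_item_customer" /> With layout_above set, the RecyclerView's bottom edge is anchored above the button instead of overlapping it; you may also want android:layout_height="match_parent" so the list fills the remaining space between the SearchView and the button.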
76395850
76395863
I am trying to save links to photos in a topic on an internet forum in a txt file. I tried many ways, but the links of one page are saved in a txt file, and when the loop goes to the next page of the topic, the previous links are deleted and new links are replaced! I want to have all the links together. This is my code: from bs4 import BeautifulSoup import requests def list_image_links(url): response = requests.get(url) soup = BeautifulSoup(response.content, "html.parser") # Separation of download links image_links = [] for link in soup.find_all('a'): href = link.get('href') if href is not None and 'attach' in href and href.endswith('image')==False: image_links.append(href) # Writing links in a txt file with open('my_file.txt', 'w') as my_file: my_file.write('image links:' + '\n') for branch in image_links: my_file.write(branch + '\n') print('File created') # Browse through different pages of the topic i = 0 while i <= 5175: list_image_links(f'https://forum.ubuntu.ir/index.php?topic=211.{i}') i = i+15 It is clear from the comments what each section does. Thank you in advance for your help.
Writing a large collection of lists to a txt file in Python
You need to append to the file. This can be achieved by using 'a' instead of 'w' as the mode argument to open(). When using 'w' a file will be created if it does not exist, and it will always be truncated first, meaning its contents are overwritten. With 'a', on the other hand, the file will also be created if it does not yet exist, but it won't be truncated; new data is instead appended to the end of the file if it already exists, so the existing content is not overwritten. See the Python docs. So for your example the line with open('my_file.txt', 'w') as my_file: would need to be changed to: with open('my_file.txt', 'a') as my_file:
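A small sketch of the adjusted writing block (everything except the mode is taken from the question; note that with append mode the 'image links:' header would be written once per page, so you may prefer to write that header once, before the loop over pages, instead): with open('my_file.txt', 'a') as my_file: for branch in image_links: my_file.write(branch + '\n') print('File created')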
76394508
76394648
The code I am developing is for mobile site and the animation I expect is as following: Firstly a bigger circle appears such that it covers the whole screen looking like a splash screen and afterwards the bigger circle will transition into smaller circle towards bottom left. Along with that image inside the bigger circle will transit towards bottom left. The Problem is circle is transitioning properly but the text is not going to bottom left properly it kind of goes to left and then goes down. Below is the code that I tried. setTimeout(function() { let i = document.getElementById("test"); let d = document.getElementById("icon-img"); i.classList.add("active"); d.classList.add("active"); }, 2000); .test { position: fixed; left: 0; bottom: 0; width: 40px; display: flex; align-items: center; justify-content: center; height: 40px; transition: all 3s ease; background: gray; transform: scale(100); border-radius: 30px; left: 20px; bottom: 20px; } .test.active { transform: scale(1); transition: all 2s ease; left: 20px; bottom: 20px; } .wrapper { position: relative; display: flex; align-items: center; justify-content: center; height: 100vh; } .myclass { width: 100%; height: 100%; display: flex; align-items: center; justify-content: center; } .before { display: flex; align-items: center; justify-content: center; transition: all 2s ease-in-out; width: 50%; height: 50%; position: fixed; font-size: 50px; } .before.active { left: 20px; bottom: 20px; width: 40px; height: 40px; font-size: 15px; position: fixed; transform: translate(0, 0); } <div class="wrapper"> <div id="test" class="test"></div> <div class="myclass"> <img src="./logo.svg" id="icon-img" class="before"></img> </div> </div>
How to transition text along with a circle in CSS animation?
A problem is that some properties do not have an initial value set so there is nothing for them to transition from. So you get a sort of jump effect. This snippet removes the flex used for centering the image and instead uses left and bottom in conjunction with translation to center it initially. <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <title></title> <style type="text/css"> .test { position: fixed; left: 0; bottom: 0; width: 40px; display: flex; align-items: center; justify-content: center; height: 40px; transition: all 3s ease; background: gray; transform: scale(100); border-radius: 30px; left: 20px; bottom: 20px; } .test.active { transform: scale(1); transition: all 2s ease; left: 20px; bottom: 20px; } .wrapper { position: relative; display: flex; align-items: center; justify-content: center; height: 100vh; } .myclass { width: 100%; height: 100%; } .before { display: flex; align-items: center; justify-content: center; transition: all 2s ease-in-out; width: 50%; height: 50%; position: fixed; font-size: 50px; left: 50%; bottom: 50%; transform: translate(-50%, 50%); background: pink; } .before.active { left: 20px; bottom: 20px; width: 40px; height: 40px; font-size: 15px; position: fixed; transform: translate(0, 0); } </style> </head> <body> <div class="wrapper"> <div id="test" class="test"></div> <div class="myclass"> <img src="./logo.svg" id="icon-img" class="before"></img> </div> </div> <script> setTimeout(function() { let i = document.getElementById("test"); let d = document.getElementById("icon-img"); i.classList.add("active"); d.classList.add("active"); }, 2000); </script> </body> </html> Note: the image is given a background of pink as no actual image was supplied, just so we can see its position and size. Two transition times are used in the question - 2s and 3s. This means the image and text arrive at different times. I've kept that in the snippet but maybe the same time was intended?
76383756
76395526
I want to change database dynamically based on request origin. I create a globalMiddleware which is called on every routes. // middlwares/global.middleware.js import DBController from "../controllers/db.controller.js"; import { db1, db2 } from "../prisma/prismaClient.js"; export default (req, res, next) => { const dbcontroller = DBController(); const domain = req.get("origin"); switch (domain) { case "http://localhost:3000": dbcontroller.setDB(db1); break; case "http://localhost:3001": dbcontroller.setDB(db2); break; } next(); }; but when i set the db inside DBController by calling dbcontroller.setDB() method and finally calling this.DB it is undefined. // controller/db.controller.js import autoBind from "auto-bind"; class DBController { constructor() { this.DB; autoBind(this); } setDB(prismaClient) { this.DB = prismaClient; } } export default DBController; // conrtoller/controller.js import { generateResponse } from "./../util/public.util.js"; import DBController from "./db.controller.js"; import autoBind from "auto-bind"; import createError from "http-errors"; class Controller extends DBController { constructor() { super(); this.generateResponse = generateResponse; this.createError = createError; autoBind(this); } } export default Controller; // controller/article.controller.js import Controller from "./controller.js"; class ArticleController extends Controller { async get(req, res, next) { try { const articles = await this.DB.article.findMany(); //this.DB is undefined const response = this.generateResponse("success", articles); res.send(response); } catch (error) { next(error); } } } export default new ArticleController(); I don't know how should i set a global DB inside a top-level controller which can be used every where. I also try js global.db vars and express app.set("db",db1) but i think these are not a good solution for this work.
change database based on request origin using expressjs & prisma
Finally, I modified the global.middleware.js file and set the database on the request object instead of setting it in a high-level controller: import { prisma_aramgostar, prisma_karen } from "../prisma/prismaClient.js"; export default async(req, res, next) => { const domain = await req.get("x-forwarded-host"); switch (domain) { case "localhost:3000": req.DB = prisma_aramgostar; console.log("db: aramgostar"); break; case "127.0.0.1:3001": req.DB = prisma_karen; console.log("db: karen"); break; } next(); };
76388569
76394654
My problem is that I have a modal and the input contains the value from the database. When I press the update button, the unique name validation check if the name already exists in the database. However, if I don't change it, it also checks the unique name validation, which I don't want to happen. Here is the form. <form action="<?= base_url('users/createRole'); ?> " method="post"> <div class="modal-body"> <input type="hidden" name="roleID" id="roleID"> <div class="mb-3"> <label for="inputRoleName" class="form-label">Rolle hinzufügen</label> <input type="text" class="form-control" id="inputRoleName" name="inputRoleName" placeholder="Role Name"> </div> </div> <div class="modal-footer"> <button type="submit" class="btn btn-primary">Rolle speichern</button> <button type="button" class="btn btn-light" data-bs-dismiss="modal">Schließen</button> </div> </form> $(".btnEditRole").click(function() { const roleID = $(this).data('id'); const inputRoleName = $(this).data('role'); $('#modalTitle').html('Update Data Role'); $('.modal-footer button[type=submit]').html('Update rolle'); $('.modal-content form').attr('action', '<?= base_url('users/updateRole') ?>'); $('#roleID').val(roleID); $('#inputRoleName').val(inputRoleName); $('.modal').on('hidden.bs.modal', function() { location.reload(); // Refresh the page when modal is closed }); }); Here is the Function. public function updateRole() { if (!$this->validate(['inputRoleName' => ['rules' => 'is_unique[user_role.role_name]']])) { session()->setFlashdata('notif_error', '<b>Das Hinzufügen eines neuen Benutzers ist fehlgeschlagen</b> Der Benutzer existiert bereits! '); return redirect()->to(base_url('users')); } $updateRole = $this->userModel->updateRole($this->request->getPost(null, FILTER_UNSAFE_RAW)); if ($updateRole) { session()->setFlashdata('notif_success', '<b>Benutzerdaten erfolgreich aktualisieren</b> '); return redirect()->to(base_url('users')); } else { session()->setFlashdata('notif_error', '<b>Benutzerdaten konnten nicht aktualisiert werden</b> '); return redirect()->to(base_url('users')); } }
Codeigniter 4 unique name validation
When using unique values, you check them in this way (this assumes your auto-increment/primary key is roleID). For updating values change this: ['rules' => 'is_unique[user_role.role_name]']])) { to this: ['rules' => 'is_unique[user_role.role_name,roleID,{roleID}]']])) { More information on placeholders can be found here: https://codeigniter.com/user_guide/libraries/validation.html?highlight=is_unique#validation-placeholders
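In context, the check in updateRole() could then look roughly like the sketch below (a sketch only — it assumes roleID is posted by the hidden form field from the question; note that in recent CodeIgniter 4 versions the placeholder field also needs its own rule, e.g. is_natural_no_zero, for the {roleID} placeholder to be substituted): if (!$this->validate([ 'roleID' => ['rules' => 'is_natural_no_zero'], 'inputRoleName' => ['rules' => 'is_unique[user_role.role_name,roleID,{roleID}]'], ])) { session()->setFlashdata('notif_error', '<b>Das Hinzufügen eines neuen Benutzers ist fehlgeschlagen</b> Der Benutzer existiert bereits! '); return redirect()->to(base_url('users')); }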
76395784
76395864
Trying to reverse the array. And then print it I am trying to reverse the array by indenting from last to first element of array and copy it to another array. arra will be the reversed array : Here's my code : #include <stdio.h> #include <stdlib.h> int main() { int num, *arr, i; scanf("%d", &num); arr = (int *)malloc(num * sizeof(int)); for (i = 0; i < num; i++) { scanf("%d", arr + i); } int arra[] = {}; /* Write the logic to reverse the array. */ arra[0] = *(arr + num); for (int k = 1; k == num; k++) { int j = 1; arra[j] = *(arr + num -k); j++; } for (i = 0; i < num; i++) printf("%d ", *(arra + i)); return 0; }
Reversing array in C with pointers
For starters, this declaration int arra[] = {}; is invalid in C and C++. You should define the array by also allocating memory for it dynamically, as for the first array, or in the worst case you could define a variable length array (provided that the compiler supports VLAs) like int arra[num]; This for loop for(int k = 1; k == num; k++){ will iterate only once, and only when num is set to 1; otherwise it will not run at all. Also, within the body of the loop the variable j is created anew in each iteration with the value 1 int j = 1; arra[j] = *(arr + num -k); j++; Thus this statement arra[j] = *(arr + num -k); in each iteration of the loop is equivalent to arra[1] = *(arr + num -k); which does not make sense. To copy one array into another array in reverse using pointers, the for loop can look for example the following way for ( int *src = arr, *dsn = arra + n; src != arr + n; ++src ) { *--dsn = *src; }
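Putting it together, a minimal sketch of the whole program (variable names follow the question; most error checking of scanf/malloc is omitted for brevity): #include <stdio.h> #include <stdlib.h> int main(void) { int num; if (scanf("%d", &num) != 1 || num <= 0) return 1; int *arr = malloc(num * sizeof *arr); int *arra = malloc(num * sizeof *arra); for (int i = 0; i < num; i++) scanf("%d", arr + i); /* copy in reverse: walk arr forwards and fill arra from the end */ for (int *src = arr, *dsn = arra + num; src != arr + num; ++src) *--dsn = *src; for (int i = 0; i < num; i++) printf("%d ", *(arra + i)); free(arr); free(arra); return 0; }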
76391514
76395561
I have to write algorithm that will generate all possible combinations of different binary strings but under some conditions. The combinations are created by: Replacing binary "1" with "00" Other conditions: Input binary string, if contains 0, they are in pairs always, so "00" The output also can contain 0 only in pairs Example: Input: 11 Output: 001 100 0000 11 In above example, there is no 010, because as mentioned earlier, the "0" must have a pair (another "0") Note that if given binary string contains "00", we don't change them to 1. In other words, the algorithm should determine how many different binary strings can be created by replacing "1" with "00" (but under the conditions present above), for given binary string and returns all the possible combinations. I tried O(n^2) algorithm, recursion but can't achieve my goal :/ That's my code: void get_combinations(const std::string& bin, std::set<std::string>& result) { result.insert(bin); for (int i = 0; i < bin.length(); i++) { std::string local_combination = bin; for (int j = i; j < bin.length(); j++) { if (local_combination[j] == '1') { local_combination[j] = '0'; local_combination.insert(j, "0"); result.insert(local_combination); } } } } It works e.g. for simple input 10, 01. But for input 11, the output doesn't contain 0000. For "longer" inputs, like 1111 it gives completely bad output.
Generate all possible combinations basing on binary string and under some conditions
Fundamentally your combinations are built up like a tree. The units are either (0) (1) / \ or / \ 0 1 00 where (.) signifies what was in the original binary string and the strings at the bottom are what you would add as a result. So, like any binary search tree you can either do the equivalent of BFS (breadth-first-search): deal with all the possibilities at one level before moving to the next, or DFS (depth-first-search): recursively work down each branch to the bottom to insert a new combination string. The two approaches are illustrated for your problem in the code below. #include <iostream> #include <string> #include <set> using namespace std; //====================================================================== set<string> BFS( const string &str ) { set<string> result; if ( str.length() == 0 ) return result; result.insert( str.substr( 0, 1 ) ); if ( str[0] == '1' ) result.insert( "00" ); for ( int i = 1; i < str.length(); i++ ) { auto last = result; result.clear(); for ( const string &s : last ) { result.insert( s + str[i] ); if ( str[i] == '1' ) result.insert( s + "00" ); } } return result; } //====================================================================== void DFS( const string &left, const string &right, set<string> &result ) { if ( right.length() == 0 ) { result.insert( left ); } else { DFS( left + right[0], right.substr( 1 ), result ); if ( right[0] == '1' ) DFS( left + "00", right.substr( 1 ), result ); } } //====================================================================== int main() { string str; cout << "Enter a binary string: "; cin >> str; cout << "BFS:\n"; for ( const string &s : BFS( str ) ) cout << s << '\n'; cout << "\nDFS:\n"; set<string> result; DFS( "", str, result ); for ( string s : result ) cout << s << '\n'; } Output for 1111 BFS: 00000000 0000001 0000100 000011 0010000 001001 001100 00111 1000000 100001 100100 10011 110000 11001 11100 1111 DFS: 00000000 0000001 0000100 000011 0010000 001001 001100 00111 1000000 100001 100100 10011 110000 11001 11100 1111
76394580
76394668
Duplicate buttons appear in my xml file, fragment_employee2. I'm only creating one xml file to show the whole list in one layout. Here is the code in my gist: https://gist.github.com/Umen14/8d1a205f016fa970369d37f37d4bf15d And here are the images (screenshots attached). I tried removing things here and there, but the result stays the same, and there is also a problem with the onClick listener in my EmployeeFragment.java (see the second screenshot).
Duplicate Buttons in xml file using android studio java
This shows duplicates because you are using the same xml for the RecyclerView item as well as for the activity layout, which is R.layout.fragment_employee2 in your case. You need to define a different layout, without the button, for the item in EmployeeAdapter. This will resolve your issue.
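For illustration only (the layout name item_employee and the ViewHolder type are assumptions, not taken from the gist), the adapter would inflate its own row layout rather than the fragment layout:

    // hypothetical sketch inside EmployeeAdapter
    @Override
    public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        // item_employee.xml contains only the views for a single row,
        // not the buttons that belong to fragment_employee2.xml
        View itemView = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.item_employee, parent, false);
        return new ViewHolder(itemView);
    }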
76389676
76395916
I am trying to get a WebSocket connection going in my WASM application. I have followed the MSDN tutorial and enabled WebSockets in my Program.cs: app.UseWebSockets(); After that, I added a new controller like this: [AllowAnonymous] [ApiController] [Route("[controller]")] internal class ShellyPlusDataController : ControllerBase { [HttpGet] [Route("[controller]/OnDataReceived")] public async Task OnDataReceived() { if (HttpContext.WebSockets.IsWebSocketRequest) { using var webSocket = await HttpContext.WebSockets.AcceptWebSocketAsync(); var buffer = new byte[1024 * 4]; WebSocketReceiveResult result = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None); while (!result.CloseStatus.HasValue) { string raw = Encoding.UTF8.GetString(buffer, 0, result.Count); result = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None); } await webSocket.CloseAsync(result.CloseStatus.Value, result.CloseStatusDescription, CancellationToken.None); } else { HttpContext.Response.StatusCode = StatusCodes.Status400BadRequest; } } } Opening a connection to 'wss://localhost:7220/ShellyPlusData/OnDataReceived' with PostMan doesn't work. The error displayed is: Error: Unexpected server response: 200 I have placed a breakpoint at the start of OnDataReceived(); it is never hit. I have also tried changing the URL to ws:// or omitting the method name, but no success. The Microsoft tutorial also suggests this in Program.cs: app.Use(async (context, next) => { if (context.Request.Path == "/ws") { if (context.WebSockets.IsWebSocketRequest) { using var webSocket = await context.WebSockets.AcceptWebSocketAsync(); await Echo(webSocket); } else { context.Response.StatusCode = StatusCodes.Status400BadRequest; } } else { await next(context); } }); Doesn't work either. Any suggestions?
Cannot connect to WebSocket controller in .Net Core Blazor WASM application
So I had to figure it out on my own. There were 2 things that needed to be fixed: Local WebSocket connection doesn't work. I don't know what needs to be configured to enable WebSockets in local debug mode, but when I deploy the application to a remote IIS server, I can establish a connection. My controller class was internal and routing seems to work differently when working with WS connections. Here's the working class: public class ShellyPlusDataConnectionController : ControllerBase { [HttpGet("/ShellyPlusDataConnection/OnDataReceived")] public async Task OnDataReceived() { if (HttpContext.WebSockets.IsWebSocketRequest) { using var webSocket = await HttpContext.WebSockets.AcceptWebSocketAsync(); var buffer = new byte[1024 * 4]; WebSocketReceiveResult result = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None); while (!result.CloseStatus.HasValue) { string raw = Encoding.UTF8.GetString(buffer, 0, result.Count); result = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None); } await webSocket.CloseAsync(result.CloseStatus.Value, result.CloseStatusDescription, CancellationToken.None); } else { HttpContext.Response.StatusCode = StatusCodes.Status400BadRequest; } } }
76395827
76395933
Why the button event is not avalible with pysimplegui? This is my Code. import os import threading import PySimpleGUI as gui from rsa_controller import decryptwithPrivatekey, loadPublicKey, loadPrivateKey id = 0 target_id = 0 prikey = None def popout(title): gui.popup(title) def read_keys(): print("Opening key files: " + os.getcwd() + "\\keys\\") pubkey = loadPublicKey(os.getcwd() + "\\keys\\public.pem") prikey = loadPrivateKey(os.getcwd() + "\\keys\\private.pem") def recv_msg(): global target_id from main import s while True: data = s.recv(1024) if not data: break decoded_data = data.decode('utf-8') if decoded_data == 'target_connected_Success': print("Received message:", decoded_data) elif decoded_data.startswith("!+@"): target_id = decoded_data[3:] window2() elif decoded_data == 'target_connect_denied': gui.popup('Connection request denied') else: msg_to_recv = decryptwithPrivatekey(decoded_data, prikey) print("Received message:", msg_to_recv) def window2(): from main import s global target_id layout2 = [ [gui.Text('Connecting with'), gui.Text(str(target_id), key='target_id'), gui.Text("Establishing contact")], [gui.Button('Accept and share my public key', key='accept', enable_events=True), gui.Button('Deny connection invitation', key='denied', enable_events=True)] ] window = gui.Window("Connection Request", layout2, finalize=True) while True: event2, values2 = window.read() if event2 == gui.WINDOW_CLOSED: break if event2 == 'Deny connection invitation': print("Connection denied") s.send('!x!{}'.format(target_id).encode('utf-8')) window.close() if event2 == 'Accept and share my public key': print("Accepting and sharing public key") # Handle the logic for accepting the connection window.close() def start_GUI_progress(id): from main import s read_keys() layout = [ [gui.Text('Your identification code'), gui.Text(id)], [gui.Text('Hint: Please enter the identification code of the person you want to connect to in the input box below and click the Connect button')], [gui.Input(key='target_id'), gui.Button('Connect', key='connect')] ] window = gui.Window("RSA Encrypted Chat Software", layout) host = "localhost" port = 23333 s.connect((host, port)) print(s.recv(1024)) t_recv = threading.Thread(target=recv_msg) t_recv.start() s.send(b"__!" 
+ str(id).encode('utf-8')) while True: event, values = window.read() if event is None: break if event == 'connect': print("Client is attempting to connect to: {}".format(values['target_id'])) message = "_!?{}".format(values['target_id']) s.send(message.encode('utf-8')) window.close() I found that the first window is intractivable,but after the window2 successfully displayed,i press buttons on it and nothing happend,what's more,when the window2 displaying, there is an error: Exception in thread Thread-1 (recv_msg): Traceback (most recent call last): File "C:\Users\bao\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner self.run() File "C:\Users\bao\AppData\Local\Programs\Python\Python310\lib\threading.py", line 946, in run self._target(*self._args, **self._kwargs) File "C:\Users\bao\PycharmProjects\RSAEncryptedChatSoftware\GUIDisplay.py", line 32, in recv_msg window2() File "C:\Users\bao\PycharmProjects\RSAEncryptedChatSoftware\GUIDisplay.py", line 46, in window2 window = gui.Window("Connecting with", layout2, finalize=True) File "C:\Users\bao\PycharmProjects\RSAEncryptedChatSoftware\venv\lib\site-packages\PySimpleGUI\PySimpleGUI.py", line 9614, in init self.Finalize() File "C:\Users\bao\PycharmProjects\RSAEncryptedChatSoftware\venv\lib\site-packages\PySimpleGUI\PySimpleGUI.py", line 10300, in finalize self.Read(timeout=1) File "C:\Users\bao\PycharmProjects\RSAEncryptedChatSoftware\venv\lib\site-packages\PySimpleGUI\PySimpleGUI.py", line 10075, in read results = self._read(timeout=timeout, timeout_key=timeout_key) File "C:\Users\bao\PycharmProjects\RSAEncryptedChatSoftware\venv\lib\site-packages\PySimpleGUI\PySimpleGUI.py", line 10146, in _read self._Show() File "C:\Users\bao\PycharmProjects\RSAEncryptedChatSoftware\venv\lib\site-packages\PySimpleGUI\PySimpleGUI.py", line 9886, in Show StartupTK(self) File "C:\Users\bao\PycharmProjects\RSAEncryptedChatSoftware\venv\lib\site-packages\PySimpleGUI\PySimpleGUI.py", line 16935, in StartupTK window.TKroot.mainloop() File "C:\Users\bao\AppData\Local\Programs\Python\Python310\lib\tkinter_init.py", line 1458, in mainloop self.tk.mainloop(n) RuntimeError: Calling Tcl from different apartment Process finished with exit code 0
PySimpleGUI button event not working in Python code - why?
layout2 = [ [gui.Text('Connecting with'), gui.Text(str(target_id), key='target_id'), gui.Text("Establishing contact")], [gui.Button('Accept and share my public key', key='accept', enable_events=True), gui.Button('Deny connection invitation', key='denied', enable_events=True)] ] if event2 == 'Deny connection invitation': print("Connection denied") s.send('!x!{}'.format(target_id).encode('utf-8')) window.close() if event2 == 'Accept and share my public key': print("Accepting and sharing public key") # Handle the logic for accepting the connection The keys defined for buttons are different from the events used in the event loop. keys defined for buttons:'accept' and 'denied'. The events: 'Accept and share my public key' and 'Deny connection invitation'. t_recv = threading.Thread(target=recv_msg) def recv_msg(): # ... wrong, try to call `window.write_event_value` to generate an event to call `window2` in main thread. window2() # ... wrong, try to call `window.write_event_value` to generate an event to call `gui.popup` in main thread. gui.popup('Connection request denied') # ... GUI should be run under main thread !
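A minimal sketch of that pattern (an illustration, not taken from the question's code): the worker thread only posts an event with window.write_event_value, and the main-thread event loop reacts to it, for example by opening the second window or a popup. The event key '-CONNECT-REQUEST-' and the sample target id are assumptions.

    import threading
    import PySimpleGUI as gui

    def recv_msg(window):
        # ... receive data on the socket here ...
        # do NOT build windows or popups in this thread;
        # just hand the data to the main thread as an event
        window.write_event_value('-CONNECT-REQUEST-', '12345')  # '12345' = example target_id

    layout = [[gui.Text('Waiting for connection requests')]]
    window = gui.Window('RSA Encrypted Chat Software', layout, finalize=True)
    threading.Thread(target=recv_msg, args=(window,), daemon=True).start()

    while True:
        event, values = window.read()
        if event == gui.WINDOW_CLOSED:
            break
        if event == '-CONNECT-REQUEST-':
            target_id = values[event]      # the value passed from the thread
            # safe to create window2 or popups here, in the main thread
            gui.popup(f'Connection request from {target_id}')
    window.close()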
76391959
76395790
my splash code this class App extends StatelessWidget { const App(); @override Widget build(BuildContext context) { Get.put(SplashController()); Get.put(ThemeController()); Get.put(HomeController()); Get.put(LocaleController()); var localeController = Get.find<LocaleController>(); print('amirrrrrrrrrrr${localeController.locale}'); return GetMaterialApp( debugShowCheckedModeBanner: false, translations: LocaleString(), locale: localeController.locale, initialBinding: MyBindings(), home: Splash(), ); } } after update locale in controller class LocaleController extends GetxController { Locale locale = const Locale('fa', 'FA'); Future<void> saveLocale(Locale newLocale) async { SharedPreferences prefs = await SharedPreferences.getInstance(); await prefs.setString('languageCode', newLocale.languageCode); await prefs.setString('countryCode', newLocale.countryCode.toString()); locale = newLocale; update(); print('amirrrrrrr$locale'); } Future<Locale> loadLocale() async { SharedPreferences prefs = await SharedPreferences.getInstance(); String? languageCode = prefs.getString('languageCode'); String? countryCode = prefs.getString('countryCode'); Locale? locale; if (languageCode != null && countryCode != null) { locale = Locale(languageCode, countryCode); this.locale = locale; } update(); print('amirrrrrrr$locale'); return locale!; } } this code not updated var localeController = Get.find<LocaleController>(); print('amirrrrrrrrrrr${localeController.locale}'); how to fix ? It is updated in the LocaleController controller, but in the app class, it always returns fa_FA and does not show the updated locale.
Flutter: How to update locale in GetxController
As the documentation mentions, you need to call: Get.changeLocale(Locale("pt")); You need to call this method with the expected Locale. Additionally, the current locale can be checked with Get.locale;
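Applied to the controller from the question, a possible sketch (just an illustration of where the call would go):

    Future<void> saveLocale(Locale newLocale) async {
      SharedPreferences prefs = await SharedPreferences.getInstance();
      await prefs.setString('languageCode', newLocale.languageCode);
      await prefs.setString('countryCode', newLocale.countryCode.toString());
      locale = newLocale;
      // tell GetX to rebuild GetMaterialApp with the new locale
      Get.changeLocale(newLocale);
      update();
    }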
76395847
76395972
I have code like the snippet below. var t = [1, 2]; t[2] = t; This creates a circular array. What is the algorithm applied to create this circular array in JavaScript?
What is the algorithm applied to create a circular array in javascript?
There's no algorithm involved. It's just that your array refers to itself. When a variable or property refers to an object (arrays are objects), what's held in the variable is an object reference, which is a value that tells the JavaScript engine where that object is elsewhere in memory. You can think of it as a number that uniquely identifies a memory location where the object is. (That isn't what it is, but it's a handy way to think of it.) Let's look at the code. You start with: var t = [1, 2]; That creates an array and stores an object reference for it in t. That creates something somewhat like this in memory (various details omitted for clarity): +−−−−−−−−−−−−−−−+ t:{Ref18465}−−−−−>| (array) | +−−−−−−−−−−−−−−−+ | length: 2 | | 0: 1 | | 1: 2 | +−−−−−−−−−−−−−−−+ The Ref18465 I've shown is a stand-in for the object reference, which we never directly see in code. We have an array, and a variable containing an object reference saying where the array is. Then your code does this: t[2] = t; That adds a new element to the array containing the object reference of the array, making it refer to itself: +−−−−−−−−−−−−−−−−−−−−−−+ | | \ +−−−−−−−−−−−−−−−+ | t:{Ref18465}−−−+−>| (array) | | +−−−−−−−−−−−−−−−+ | | length: 3 | | | 0: 1 | | | 1: 2 | | | 2: {Ref18465} |>−/ +−−−−−−−−−−−−−−−+ Now, both t and t[2] contain the object reference to the array. As you say, the array refers to itself. There's no particular algorithm involved, it's just a circular data structure.
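A small demonstration of what that self-reference means in practice (purely illustrative):

    var t = [1, 2];
    t[2] = t;

    console.log(t[2] === t);        // true - same object reference
    console.log(t[2][2][2] === t);  // true - you can follow the cycle as far as you like
    // JSON.stringify(t);           // would throw: circular structures cannot be serialized to JSON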
76394621
76394679
I am writing code to upload file on file server along with two other string variables as part of HTTP post request. Idea to use multi interface here is to upload multiple file in future. Libcurl version being used here is: 7.44 Here is my program: #include <iostream> #include <string> #include <curl/curl.h> const auto TimeoutInMS = 1000; const auto FileDescriptorZero = 0; bool HTTPPostSuccessful(long httpResponseCode) { bool httpRequestSuccessful = false; if (httpResponseCode == 200) { httpRequestSuccessful = true; } return httpRequestSuccessful; } size_t WriteCallback(void * buffer, size_t size, size_t count, void * userp) { size_t numBytes = size * count; static_cast<std::string*>(userp)->append(static_cast<char*>(buffer), numBytes); return numBytes; } void HTTPPost(const std::string& value1, const std::string& value2, const std::string& filePath) { struct curl_slist *pHTTPRequestHeaders = nullptr; struct curl_httppost* pFormpost = nullptr; struct curl_httppost* pLastptr = nullptr; uint16_t httpResponseCode = 0; int stillSendingFile = 0; CURL* pCurlEasyHandle = curl_easy_init(); CURLM *pCurlMultiHandle = curl_multi_init(); std::string responseData{}; if (pCurlEasyHandle && pCurlMultiHandle) { pHTTPRequestHeaders = curl_slist_append(pHTTPRequestHeaders, "Content-Type: multipart/form-data"); curl_easy_setopt(pCurlEasyHandle, CURLOPT_HTTPHEADER, pHTTPRequestHeaders); curl_easy_setopt(pCurlEasyHandle, CURLOPT_URL, "https://http_.org/logs/readers"); curl_easy_setopt(pCurlEasyHandle, CURLOPT_VERBOSE, 1L); curl_formadd(&pFormpost, &pLastptr, CURLFORM_COPYNAME, "fileName", CURLFORM_FILE, filePath.c_str(), CURLFORM_CONTENTTYPE, "text/csv", CURLFORM_END); curl_formadd(&pFormpost, &pLastptr, CURLFORM_COPYNAME, "Value1", CURLFORM_COPYCONTENTS, value1.c_str(), CURLFORM_END); curl_formadd(&pFormpost, &pLastptr, CURLFORM_COPYNAME, "Value2", CURLFORM_COPYCONTENTS, value2.c_str(), CURLFORM_END); curl_easy_setopt(pCurlEasyHandle, CURLOPT_HTTPPOST, pFormpost); curl_easy_setopt(pCurlEasyHandle, CURLOPT_WRITEFUNCTION, WriteCallback); curl_easy_setopt(pCurlEasyHandle, CURLOPT_WRITEDATA, &responseData); curl_multi_add_handle(pCurlMultiHandle, pCurlEasyHandle); do { curl_multi_perform(pCurlMultiHandle, &stillSendingFile); if (stillSendingFile) { curl_multi_wait(pCurlMultiHandle, nullptr, FileDescriptorZero, TimeoutInMS, nullptr); } } while(stillSendingFile); CURLcode res = curl_easy_getinfo(pCurlEasyHandle, CURLINFO_RESPONSE_CODE, &httpResponseCode); if (HTTPPostSuccessful(httpResponseCode) && res == CURLE_OK) { std::cout << "File sent Successfully, HTTP response code: " << httpResponseCode << ", ResponseData: "<< responseData<< std::endl; } else { std::cerr << "Error during request: " << curl_easy_strerror(res) << ", Failure HTTP response code: " << httpResponseCode << std::endl; } if (pCurlMultiHandle && pCurlEasyHandle) { std::cout << "Clean up for curl_multi_remove_handle " << std::endl; curl_multi_remove_handle(pCurlMultiHandle, pCurlEasyHandle); } if (pCurlEasyHandle) { std::cout << "Clean up for pCurlEasyHandle " << std::endl; curl_easy_cleanup(pCurlEasyHandle); pCurlEasyHandle = nullptr; } if (pCurlMultiHandle) { std::cout << "Clean up for pCurlMultiHandle " << std::endl; curl_multi_cleanup(pCurlMultiHandle); pCurlMultiHandle = nullptr; } if (pHTTPRequestHeaders) { std::cout << "Clean up pHTTPRequestHeaders " << std::endl; curl_slist_free_all(pHTTPRequestHeaders); pHTTPRequestHeaders = nullptr; } if (pFormpost) { std::cout << "Clean up pFormpost " << std::endl; curl_formfree(pFormpost); pFormpost = 
nullptr; pLastptr = nullptr; } } } bool UploadFile(const std::string& value1, const std::string& value2, const std::string& filePath) { curl_global_init(CURL_GLOBAL_ALL); HTTPPost(value1, value2, filePath); curl_global_cleanup(); return true; } int main () { UploadFile("1", "1", "/tmp/UploadFIle/testDoc.txt"); } above code works fine in my machine, when I commit it on Jenkins build server, I get following error: 7: HTTPClient Initialization is successful. 7: * Could not resolve host: http_.org 7: * Closing connection 0 7: Error during request: No error, Failure HTTP response code: 0 7: Clean up for curl_multi_remove_handle 7: Clean up for pCurlEasyHandle 7: Clean up for pCurlMultiHandle 7: Clean up pHTTPRequestHeaders 7: Clean up pFormpost 7/30 Test #7: **** ....................................***Exception: SegFault 0.29 sec In Few cases also seeing following error: 8: HTTPClient Initialization is successful. 8: * Could not resolve host: http_.org 8: * Closing connection 0 8: Error during request: No error, Failure HTTP response code: 0 8: Clean up for curl_multi_remove_handle 8: Clean up for pCurlEasyHandle 8: Clean up for pCurlMultiHandle 8: Clean up pHTTPRequestHeaders 8: Clean up pFormpost 8: ==11267== Invalid read of size 8 8: ==11267== at 0x4E48018: curl_formfree (in /usr/lib/x86_64-linux-gnu/libcurl.so.4.4.0) Can somebody please inform me what is wrong here, new to Curl.
curl_formfree not working properly return error: SegFault
The first issue resulting in UB is in WriteCallback: static_cast<std::string*>(userp)->append(static_cast<char*>(buffer), 0, numBytes); You have chosen the overloaded member function basic_string& append( const basic_string& str, size_type pos, size_type count ) that creates std::string from a not null-terminated data in the buffer. You should use the other overloaded member function basic_string& append( const CharT* s, size_type count ) therefore the correct call is static_cast<std::string*>(userp)->append(static_cast<char*>(buffer), numBytes); The second issue is uint16_t httpResponseCode, whereas CURLINFO_RESPONSE_CODE requires a pointer to a long value, you pass &httpResponseCode, a pointer to a short value, this is yet another UB. Particularly it corrupts data in the local variables, writes zeros in 2 or 6 bytes outside the httpResponseCode storage, probably in pFormpost bytes. It should be long httpResponseCode = 0;
76391583
76395798
I've a rest controller and one of the endpoint looks like this: @PostMapping(value = "/myapi/{id}", produces = APPLICATION_JSON_VALUE, consumes = APPLICATION_JSON_VALUE) public ResponseEntity<MyEntity> myApi( @Valid @PathVariable("id") @NotBlank String id, @Valid @RequestBody MyRequestPayload myRequestPayload) throws Exception) { LOGGER.info("Id is {}",id); ............... ......................... ............................. } For some reason, when I call the API with an empty or null path variable and a request payload, the path variable is not failing the validation and the control comes inside the method block. What am I doing wrong? Kindly advise.
Spring Rest Controller not able to validate path variable when request body is also passed in addition to path variable
@Valid validates complex objects, containing fields annotated with constraint annotations. For this case, you need to use @Validated: The @Validated annotation is a class-level annotation that we can use to tell Spring to validate parameters that are passed into a method of the annotated class. So mark your controller class as @Validated, which would trigger the validation of the id path variable. @RestController @RequestMapping("/my") @Validated public class MyController { @PostMapping(value = "/myapi/{id}", produces = APPLICATION_JSON_VALUE, consumes = APPLICATION_JSON_VALUE) public ResponseEntity<MyEntity> myApi( @PathVariable("id") @NotBlank String id, @Valid @RequestBody MyRequestPayload myRequestPayload) throws Exception { LOGGER.info("Id is {}", id); } } Reference: Validation with Spring Boot
76394660
76394688
I have the following algebraic data type: data Tree a = Empty | Node a (Tree a) (Tree a) deriving (Show, Eq) Also, I have this code snippet: fromJust :: Maybe a -> a fromJust (Just val) = val fromJust Nothing = error "Cannot unpack Nothing." getTreeMinimum :: Ord a => Tree a -> Maybe a getTreeMaximum :: Ord a => Tree a -> Maybe a getTreeMaximum Empty = Nothing getTreeMaximum (Node value l r) = if l == Empty && r == Empty then Just value else if l == Empty && r /= Empty then if value < fromJust (getTreeMinimum r) then (getTreeMaximum r) else if l /= Empty && r == Empty then if fromJust (getTreeMaximum l) < value then Just (value) else if l /= Empty && r /= Empty then if fromJust (getTreeMaximum l) < value && value < fromJust (getTreeMinimum r) then (getTreeMaximum r) else Nothing getTreeMinimum Empty = Nothing getTreeMinimum (Node value l r) = if l == Empty && r == Empty then Just value else if l == Empty && r /= Empty then if value < fromJust (getTreeMinimum r) then Just (value) else if l /= Empty && r == Empty then if fromJust (getTreeMaximum l) < value then (getTreeMinimum l) if l /= Empty && r /= Empty then if fromJust (getTreeMaximum l) < value && value < fromJust (getTreeMinimum r) then (getTreeMinimum l) else Nothing isOrderedHelper :: Ord a => Tree a -> Bool isOrderedHelper Empty = True isOrderedHelper (Node nodeValue leftChild Empty) = if isOrderedHelper leftChild == False then False else (fromJust (getTreeMaximum leftChild)) < nodeValue isOrderedHelper (Node nodeValue Empty rightChild) = if isOrderedHelper rightChild == False then False else nodeValue < fromJust ((getTreeMinimum rightChild)) isOrderedHelper (Node nodeValue leftChild rightChild) = if isOrderedHelper leftChild == False || isOrderedHelper rightChild == False then False else fromJust (getTreeMaximum leftChild) < nodeValue && nodeValue < fromJust (getTreeMinimum rightChild) isOrdered :: Ord a => Tree a -> Bool isOrdered Empty = True isOrdered tree = isOrderedHelper tree The above gives me: error: parse error on input 'getTreeMinimum' getTreeMinimum Empty = Nothing ^^^^^^^^^^^^^^ Failed, no modules loaded. I have two questions (the second one is optional): How to fix the compile time error? Is it possible to improve the efficiency of the function in question?
Fixing a function checking whether the input binary tree is ordered
Since if is an expression in Haskell each if has to have exatly one then and one else but this getTreeMaximum (Node value l r) = if l == Empty && r == Empty then Just value else if l == Empty && r /= Empty then if value < fromJust (getTreeMinimum r) then (getTreeMaximum r) else if l /= Empty && r == Empty then if fromJust (getTreeMaximum l) < value then Just (value) else if l /= Empty && r /= Empty then if fromJust (getTreeMaximum l) < value && value < fromJust (getTreeMinimum r) then (getTreeMaximum r) else Nothing has 7 if and 7 then but only 4 else I would probably write that using pattern matching to avoid such a deeply nested if tree: getTreeMaximum (Node value Empty Empty) = Just value getTreeMaximum (Node value Empty r) = if value < fromJust (getTreeMinimum r) then (getTreeMaximum r) else Nothing getTreeMaximum (Node value l Empty) = if fromJust (getTreeMaximum l) < value then Just (value) else Nothing getTreeMaximum (Node value l r) = if fromJust (getTreeMaximum l) < value && value < fromJust (getTreeMinimum r) then (getTreeMaximum r) else Nothing But I don't think the clauses other than the getTreeMaximum Empty have any reason to return Nothing at all since there's always a value, so you'll have to adjust these.
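Regarding the optional efficiency question: one possible alternative (not part of the original answer) avoids recomputing minima and maxima entirely by threading lower and upper bounds down the tree, so each node is visited once:

    -- a sketch; assumes the same  data Tree a = Empty | Node a (Tree a) (Tree a)
    isOrdered :: Ord a => Tree a -> Bool
    isOrdered = go Nothing Nothing
      where
        go _ _ Empty = True
        go lo hi (Node v l r) =
             maybe True (< v) lo      -- v must be above the lower bound, if any
          && maybe True (v <) hi      -- v must be below the upper bound, if any
          && go lo (Just v) l         -- left subtree: v becomes the upper bound
          && go (Just v) hi r         -- right subtree: v becomes the lower bound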
76395953
76395988
I'm trying to parse a To email header with a regex. If there are no <> characters then I want the whole string otherwise I want what is inside the <> pair. import re re_destinatario = re.compile(r'^.*?<?(?P<to>.*)>?') addresses = [ 'XKYDF/ABC (Caixa Corporativa)', 'Fulano de Tal | Atlantica Beans <[email protected]>' ] for address in addresses: m = re_destinatario.search(address) print(m.groups()) print(m.group('to')) But the regex is wrong: ('XKYDF/ABC (Caixa Corporativa)',) XKYDF/ABC (Caixa Corporativa) ('Fulano de Tal | Atlantica Beans <[email protected]>',) Fulano de Tal | Atlantica Beans <[email protected]> What am I missing?
Regex to catch email addresses in email header
You may use this regex: <?(?P<to>[^<>]+)>?$ RegEx Demo RegEx Demo: <?: Match an optional < (?P<to>[^<>]+): Named capture group to to match 1+ of any characters that are not < and > >?: Match an optional > $: End Code Demo Code: import re re_destinatario = re.compile(r'<?(?P<to>[^<>]+)>?$') addresses = [ 'XKYDF/ABC (Caixa Corporativa)', 'Fulano de Tal | Atlantica Beans <[email protected]>' ] for address in addresses: m = re_destinatario.search(address) print(m.group('to')) Output: XKYDF/ABC (Caixa Corporativa) [email protected]
76396013
76396051
I have two tables in MySql, i.e. subjects and photos, and I wish to count the number of photos for each subject. SELECT a.id, a.name, count(a.id) as `refcount`, FROM `subjects` a LEFT JOIN `photos` b ON (a.id = b.subject_id) GROUP by a.id ORDER BY a.name"; returns 1 even when rowcount() = 0. How do I fix it? I tried various MySql syntax, including count(field), but in vain.
MySql count with GROUP BY returns 1 even when count is 0
You will need to count photos.id (b.id). If no photos are found for the given subject, the left join produces null for b.id, and count(null) = 0. SELECT a.id, a.name, count(b.id) as `refcount` FROM `subjects` a LEFT JOIN `photos` b ON a.id = b.subject_id GROUP by a.id, a.name ORDER BY a.name;
76383009
76396573
I would like to construct a type from this object: const isSynchronized: Record<SynchronizableField, boolean> = { /* synchronized */ surveyMacroEnvironments: true, coordinateReferenceSystemCrs: true, transactionType: true, epsgTransformation: true, startingAgreementDate: true, expirationAgreementDate: true, transactionTypeNotes: true, surveyDataType: true, /* not synchronized */ surveyName: false, validationStateCd: false, legacy: false, notifyOnCreate: false, notifyOnValidate: false, finalReportLink: false, // timestamp fields creationDate: false, lastUpdate: false, // continent and country are handled differently continent: false, country: false, }; where the type needs to have only the keys with values equal to true, could you please help me or give me any suggestions? Thanks
Building a Type from an object with properties of type Boolean?
As a first step we have to remove that type annotation on isSynchronized; we need the compiler to infer its type and then use that inferred type to compute the key set you're looking for. You could use the satisfies operator instead to make sure the property types are checked against and constrained to boolean: const isSynchronized = { surveyMacroEnvironments: true, coordinateReferenceSystemCrs: true, transactionType: true, epsgTransformation: true, startingAgreementDate: true, // ✂ ⋯ ✂ lastUpdate: false, continent: false, country: false, } satisfies Record<string, boolean>; type IsSynchronized = typeof isSynchronized; Now you can inspect IsSynchronized to get the desired type. You're looking for an application of a type function I call KeysMatching<T, V>, as requested in microsoft/TypeScript#48992 and as discussed in In TypeScript, how to get the keys of an object type whose values are of a given type?. The idea is that KeysMatching<T, V> would evaluate to the union of property keys of T where the property values at those keys are assignable to V. Specifically it looks like you want KeysMatching<IsSynchronized, true>. There's no native KeysMatching provided by the language, but there are a number of ways to implement it yourself, with various issues and edge cases. One approach is a distributive object type where we map over all the properties of T and then index into the result with all the keys to end up with the union of the computed property types. Like this: type KeysMatching<T, V> = { [K in keyof T]: T[K] extends V ? K : never }[keyof T] And let's use it: type SynchronizedKeys = KeysMatching<IsSynchronized, true>; // type SynchronizedKeys = "surveyMacroEnvironments" | "coordinateReferenceSystemCrs" | // "transactionType" | "epsgTransformation" | "startingAgreementDate" | // "expirationAgreementDate" | "transactionTypeNotes" | "surveyDataType" Looks good. If you don't want to keep KeysMatching around, you can inline the definition to compute SynchronizedKeys directly: type SynchronizedKeys = { [K in keyof IsSynchronized]: IsSynchronized[K] extends true ? K : never }[keyof IsSynchronized]; Playground link to code
76394387
76394754
I have a string like this: $str = '[{"action": "verify_with_source","created_at": "2023-05-30T01:39:54+05:30","status": "in_progress","type": "license"}] {"address":null,"badge_details":null,"card_serial_no":null,"city":null,"cov_details":[{"category":"NT","cov":"MCWG","issue_date":"2021-03-30"},{"category":"NT","cov":"LMV","issue_date":"2021-03-30"}],"date_of_issue":"2021-03-30","date_of_last_transaction":"2021-03-30","dl_status":"Active","dob":"1996-10-09","face_image":null,"gender":null,"hazardous_valid_till":null,"hill_valid_till":null,"id_number":"DL1234567890","issuing_rto_name":"MY CITY","last_transacted_at":"MY CITY","name":"MY NAME","nt_validity_from":"2021-03-30","nt_validity_to":"2036-10-08","relatives_name":null,"source":"SOURCE","status":"id_found","t_validity_from":null,"t_validity_to":null}' What I want to split the string in 2 parts - [{"action": "verify_with_source",..."type": "license"}] and {"category":"NT","cov":"LMV","issue_date":"2021-03-30"}],...,"t_validity_to":null"}. I removed [ and ] with - $raw = str_replace(['[', ']'], '', $raw); Then, I have tried- $str = preg_replace('^/} {/', '}{', $raw); and $str = preg_replace('^/}\s{/', '}{', $raw); and $str = str_replace('} {', '{}', $raw); Then I intend to split string $str into array of 2 strings with statement - $arr = explode('}{', $str); The string is not being splitted and above statement returns whole string in first element of array. What is wrong with my script?
Need help in string replacement in PHP
I would suggest making it a bit easier instead of replacing all the brackets first. explode can handle that for you and split the string at the place you want. So just split it at the position ] { where the license array ends and the object starts, and add the missing brackets back to those strings afterwards. <?php $str = "<your long string>"; $jsonStrings = explode('] {', $str, 2); $jsonStrings[0] .= ']'; $jsonStrings[1] = '{' . $jsonStrings[1]; Be aware that there is no error handling: if there is no ] { in your string, explode will not create an array with two strings and the rest of the code will fail.
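As a follow-up sketch (not part of the original answer), you could decode both pieces and treat a null result as the error case:

    $parts = explode('] {', $str, 2);
    if (count($parts) !== 2) {
        // the separator was not found - handle the error
    } else {
        $licenseActions = json_decode($parts[0] . ']', true);
        $licenseDetails = json_decode('{' . $parts[1], true);
        if ($licenseActions === null || $licenseDetails === null) {
            // one of the pieces was not valid JSON
        }
    }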
76395778
76396057
await testCollection.insertMany(testArray, { ordered: false }); I have this code. I found that putting { ordered: false } will prevent getting the E11000 error code, but it looks like it does not. Is there any way that I can avoid this error? I want to skip, and not insert, a doc that has the same _id.
MongoDB insertMany skip the same _id field to avoid 'code: 11000,'
You can't avoid error, but you can proceed with inserting. See here. For example: MongoDB Enterprise replset:PRIMARY> db.products.insert( ... [ ... { _id: 20, item: "lamp", qty: 50, type: "desk" }, ... { _id: 21, item: "lamp", qty: 20, type: "floor" }, ... { _id: 21, item: "lamp", qty: 20, type: "floor" }, ... { _id: 22, item: "bulk", qty: 100 } ... ], ... { ordered: false } ... ) BulkWriteResult({ "writeErrors" : [ { "index" : 2, "code" : 11000, "errmsg" : "E11000 duplicate key error collection: newdb1.products index: _id_ dup key: { _id: 21.0 }", "op" : { "_id" : 21, "item" : "lamp", "qty" : 20, "type" : "floor" } } ], "writeConcernErrors" : [ ], "nInserted" : 3, "nUpserted" : 0, "nMatched" : 0, "nModified" : 0, "nRemoved" : 0, "upserted" : [ ] }) MongoDB Enterprise replset:PRIMARY> db.products.find() { "_id" : 20, "item" : "lamp", "qty" : 50, "type" : "desk" } { "_id" : 21, "item" : "lamp", "qty" : 20, "type" : "floor" } { "_id" : 22, "item" : "bulk", "qty" : 100 } you may see, that if you remove { ordered: false }, the only inserted records will be records before first error accured: MongoDB Enterprise replset:PRIMARY> db.products.find() { "_id" : 20, "item" : "lamp", "qty" : 50, "type" : "desk" } { "_id" : 21, "item" : "lamp", "qty" : 20, "type" : "floor" }
76391858
76397188
Thanks a lot in advance. I'm having trouble generating the relationship between two enums. %------------------------------- enum weeks = {w1,w2,w3,w4}; array[weeks] of 1..10 : weekShiftQty = [4,5,7,4]; % total shift = 20 enum shift = _(1..20); array[shift] of weeks : shiftWeek = [ % how to generate shiftWeek relationship % % just like following : calculate shift belog to which week base on weekShiftQty % shiftWeek[1] = w1 , ... , shiftWeek[4] = w1 , % shiftWeek[5] = w2 ,... , shiftWeek[9] = w2 , % shiftWeek[10] = w3 ,... , shiftWeek[16] = w3 , % shiftWeek[17] = w4 ,... , shiftWeek[20] = w4 , | s in shift ]; %-------------------------------
how to generate relationship between two enumerate with double cycle
Here's a solution, i.e. using [ w | w in weeks, _ in 1..weekShiftQty[w]] to generate the shiftWeek array: enum weeks = {w1,w2,w3,w4}; array[weeks] of 1..10 : weekShiftQty = [4,5,7,4]; % total shift = 20 enum shift = _(1..20); array[shift] of weeks: shiftWeek = [ w | w in weeks, _ in 1..weekShiftQty[w] ]; output [ "shiftWeek: \(shiftWeek)\n" ]; The output is shiftWeek: [w1, w1, w1, w1, w2, w2, w2, w2, w2, w3, w3, w3, w3, w3, w3, w3, w4, w4, w4, w4]
76394627
76394768
I am attaching my txt file, graph, and code. Can you please tell me what to change in this code, or why this third straight line is coming in my graph, because I only need two curve lines. In other software like xmgrace it's showing two curves only. import numpy as np import matplotlib.pyplot as plt deformation, potential = np.loadtxt("poten-Rf259.txt", unpack=True) plt.subplot(1,3,1) plt.plot(deformation, potential, linewidth=1, color='b') plt.xlabel("$deformation$") plt.ylabel("potential") plt.yscale("log") plt.show() 0.60 1996.95397779 0.61 1995.35525840 0.62 1993.86701437 0.63 1992.48491231 0.64 1991.19969171 0.65 1990.00485364 0.66 1988.89605689 0.67 1987.86875188 0.68 1986.91599808 0.69 1986.03420343 0.70 1985.21922699 0.71 1984.46631470 0.72 1983.77171151 0.73 1983.13117131 0.74 1982.53983312 0.75 1981.99532235 0.76 1981.49445904 0.77 1981.03536392 0.78 1980.61563713 0.79 1980.23282155 0.80 1979.88357050 0.81 1979.56605195 0.82 1979.27790062 0.83 1979.01687947 0.84 1978.78288516 0.85 1978.57209950 0.86 1978.38399957 0.87 1978.21704688 0.88 1978.06886192 0.89 1977.93793974 0.90 1977.82377479 0.91 1977.72453886 0.92 1977.64028757 0.93 1977.56966186 0.94 1977.51247702 0.95 1977.46734438 0.96 1977.43305218 0.97 1977.40866283 0.98 1977.39347411 0.99 1977.38486567 1.00 1977.38198044 1.01 1977.38578333 1.02 1977.39435293 1.03 1977.40721066 1.04 1977.42425865 1.05 1977.44721979 1.06 1977.47357918 1.07 1977.50320577 1.08 1977.53535737 1.09 1977.56925776 1.10 1977.60496583 1.11 1977.64155671 1.12 1977.67899730 1.13 1977.71674843 1.14 1977.75457845 1.15 1977.79221495 1.16 1977.82838072 1.17 1977.86261243 1.18 1977.89473051 1.19 1977.92436517 1.20 1977.95062327 1.21 1977.97352605 1.22 1977.99387754 1.23 1978.01047267 1.24 1978.02233261 1.25 1978.03018886 1.26 1978.03434229 1.27 1978.03431686 1.28 1978.02782602 1.29 1978.01489388 1.30 1977.99620759 1.31 1977.97164846 1.32 1977.94011930 1.33 1977.90108165 1.34 1977.85474975 1.35 1977.80116069 1.36 1977.73920528 1.37 1977.66702371 1.38 1977.58440415 1.39 1977.49291347 1.40 1977.39203011 1.41 1977.28170103 1.42 1977.16108588 1.43 1977.02866014 1.44 1976.88410316 1.45 1976.72847328 1.46 1976.56259216 1.47 1976.38386074 1.48 1976.20047863 1.49 1976.00349668 1.50 1975.79062208 1.51 1975.56427272 1.52 1975.32369916 1.53 1975.06525907 1.54 1974.78917861 1.55 1974.49730653 1.56 1974.19357061 1.57 1973.87528322 1.58 1973.54134915 1.59 1973.19101633 1.60 1972.82834582 1.61 1972.45110208 1.62 1972.05553210 1.63 1971.64144442 1.64 1971.20783885 1.65 1970.75735680 1.66 1970.29140464 1.67 1969.81219620 1.68 1969.31582059 1.69 1968.80021418 1.70 1968.26283644 1.71 1967.70510563 1.72 1967.13020137 1.73 1966.53988789 1.74 1965.93070810 1.75 1965.30035432 1.76 1964.64993511 1.77 1963.98214335 1.78 1963.29607120 1.79 1962.58677705 1.80 1961.85278252 1.81 1961.10290054 1.82 1960.33687553 1.83 1959.54927874 1.84 1958.74001445 1.85 1957.90788391 1.86 1957.05397043 1.87 1956.17614396 1.88 1955.26999324 1.89 1954.33375080 1.90 1953.37236582 1.91 1952.38212458 1.92 1951.35983354 1.93 1950.30220457 1.94 1949.20603907 1.95 1948.05714844 1.96 1946.84779027 1.97 1945.58008527 1.98 1944.23420931 1.99 1942.79805826 2.00 1941.26241606 2.01 1939.59635092 2.02 1937.76411055 2.03 1935.72007499 2.04 1933.39927526 2.05 1930.72795563 2.06 1927.60977402 2.07 1923.90784077 2.08 1919.45804354 2.09 1914.31285484 0.60 1998.69655342 0.61 1997.10205960 0.62 1995.61774169 0.63 1994.23972037 0.64 1992.95788603 0.65 1991.76641764 0.66 1990.66090368 0.67 1989.63677721 0.68 1988.68674207 0.69 
1987.80810777 0.70 1986.99582463 0.71 1986.24553783 0.72 1985.55344269 0.73 1984.91507367 0.74 1984.32536754 0.75 1983.78293400 0.76 1983.28383613 0.77 1982.82671380 0.78 1982.40862045 0.79 1982.02768397 0.80 1981.67975264 0.81 1981.36364770 0.82 1981.07656995 0.83 1980.81699835 0.84 1980.58410256 0.85 1980.37438858 0.86 1980.18712107 0.87 1980.02126097 0.88 1979.87363292 0.89 1979.74345272 0.90 1979.62995824 0.91 1979.53135472 0.92 1979.44772023 0.93 1979.37757311 0.94 1979.32104058 0.95 1979.27599816 0.96 1979.24203421 0.97 1979.21798198 0.98 1979.20293129 0.99 1979.19417215 1.00 1979.19165696 1.01 1979.19541560 1.02 1979.20395824 1.03 1979.21615707 1.04 1979.23320721 1.05 1979.25599428 1.06 1979.28171220 1.07 1979.31101088 1.08 1979.34264374 1.09 1979.37618654 1.10 1979.41145970 1.11 1979.44782160 1.12 1979.48476237 1.13 1979.52196027 1.14 1979.55904133 1.15 1979.59588083 1.16 1979.63104564 1.17 1979.66430513 1.18 1979.69566099 1.19 1979.72388637 1.20 1979.74966849 1.21 1979.77150493 1.22 1979.79086274 1.23 1979.80610500 1.24 1979.81670006 1.25 1979.82369958 1.26 1979.82686535 1.27 1979.82547672 1.28 1979.81769448 1.29 1979.80330511 1.30 1979.78339939 1.31 1979.75736288 1.32 1979.72409901 1.33 1979.68352105 1.34 1979.63566920 1.35 1979.58069993 1.36 1979.51670517 1.37 1979.44233796 1.38 1979.35779846 1.39 1979.26468767 1.40 1979.16177621 1.41 1979.04976449 1.42 1978.92676326 1.43 1978.79213218 1.44 1978.64555037 1.45 1978.48837015 1.46 1978.32011092 1.47 1978.14017085 1.48 1977.95500440 1.49 1977.75460116 1.50 1977.53929775 1.51 1977.31056663 1.52 1977.06737359 1.53 1976.80606527 1.54 1976.52721955 1.55 1976.23286883 1.56 1975.92682410 1.57 1975.60574409 1.58 1975.26801315 1.59 1974.91469758 1.60 1974.54947451 1.61 1974.16923372 1.62 1973.77070376 1.63 1973.35355306 1.64 1972.91665268 1.65 1972.46280943 1.66 1971.99372187 1.67 1971.51115360 1.68 1971.01142423 1.69 1970.49285180 1.70 1969.95210065 1.71 1969.39103370 1.72 1968.81236301 1.73 1968.21808006 1.74 1967.60511707 1.75 1966.97077175 1.76 1966.31642577 1.77 1965.64473808 1.78 1964.95488565 1.79 1964.24169727 1.80 1963.50348643 1.81 1962.74968760 1.82 1961.97913235 1.83 1961.18709132 1.84 1960.37331514 1.85 1959.53686374 1.86 1958.67882118 1.87 1957.79696651 1.88 1956.88661558 1.89 1955.94582437 1.90 1954.97949954 1.91 1953.98399101 1.92 1952.95610229 1.93 1951.89377598 1.94 1950.79298520 1.95 1949.63969262 1.96 1948.42603486 1.97 1947.15347382 1.98 1945.80171537 1.99 1944.35916231 2.00 1942.81707081 2.01 1941.14446913 2.02 1939.30723411 2.03 1937.25876976 2.04 1934.93414618 2.05 1932.25762526 2.06 1929.13222293 2.07 1925.42169053 2.08 1920.96503220 2.09 1915.81720928
Why are the curves of the data samples connected by a line?
It's simply because you are plotting one line for one set of x,y data, not two lines; hence the one line has to join all the points. Try plt.scatter(deformation, potential, s=2, color='b') and see your data as it should be shown. As these are data points, this is more appropriate anyway.
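If two separate line curves are really wanted, one possible approach (an addition, not part of the original answer) is to split the arrays wherever the deformation values jump backwards, i.e. where the second data block begins in the file, and plot each block on its own:

    import numpy as np
    import matplotlib.pyplot as plt

    deformation, potential = np.loadtxt("poten-Rf259.txt", unpack=True)

    # indices where the x-values restart (start of the next block)
    breaks = np.where(np.diff(deformation) < 0)[0] + 1

    for seg_x, seg_y in zip(np.split(deformation, breaks), np.split(potential, breaks)):
        plt.plot(seg_x, seg_y, linewidth=1, color='b')

    plt.xlabel("deformation")
    plt.ylabel("potential")
    plt.show()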
76396037
76396113
User input scores of subjects and I need to check if these scores are valid (0->10, step is 0.1 because for exp: 5.25 or 5.1 is acceptable). Below is my code: def Task_22(): mathematics = float(input("Input mathematics score: ")) literature = float(input("Input literature score: ")) english = float(input("Input english score: ")) # check valid score if all(i in range(0,11) for i in [mathematics, literature, english]): print("OK") else: print("NOK") but when input as below, the result is not as my expected: Input mathematics score: 2 Input literature score: 5 Input english score: 7.25 NOK
Check in range of multiple float variables
The range() function creates a sequence of integers you can loop over. When you use i in range(0, 11), you're essentially asking "will range(0, 11) eventually produce i", not "is i within the lower and upper bounds of range(0, 11)". Because range() only works with integers, a float will never be produced by range, and thus a float will never be in a range. What you really want to do is check whether the number is greater than or equal to a lower bound and less than or equal to an upper bound, using operators like >= and <=. def Task_22(): mathematics = float(input("Input mathematics score: ")) literature = float(input("Input literature score: ")) english = float(input("Input english score: ")) # check valid score if all(0 <= i <= 10 for i in [mathematics, literature, english]): print("OK") else: print("NOK")
76396082
76396144
I'm trying to do a search by title and by content, but it gives an error Typeorm select p.*, from post p left join vote v on p.id = v.post_id and v.user_id = $1 where p.is_published = true AND p.title ilike OR p.context::text ilike $2 limit 5 offset $3 const posts = await AppDataSource.query(` select p.*, from post p left join vote v on p.id = v.post_id and v.user_id = $1 where p.is_published = true AND p.title ilike OR p.context::text ilike $2 limit 5 offset $3 `, [req.user.id, `%${req.query.q}%`, req.query.skip] )
I'm trying to do a search by title and by content, but it gives an error syntax error at or near \"OR\
There's a matching pattern missing: p.title ilike OR p.context::text ilike $2 ^ HERE
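A possible corrected query, as a sketch (it also drops the stray comma after p.* from the original query, reuses the same $2 placeholder for both columns, and parenthesizes the OR so it combines correctly with the AND):

    const posts = await AppDataSource.query(`
        select p.*
        from post p
        left join vote v on p.id = v.post_id and v.user_id = $1
        where p.is_published = true
          and (p.title ilike $2 or p.context::text ilike $2)
        limit 5 offset $3
        `,
        [req.user.id, `%${req.query.q}%`, req.query.skip]
    );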
76390635
76397231
I am testing out scipy.interpolate.RectBivariateSpline for a project where I want to upscale some data to achieve better resolution. My attempt at using both scipy.interpolate.RectBivariateSpline and scipy.interpolate.interp2d results in no interpolation actually happening to the data; I just end up with a bigger matrix filled with more zeros. I have looked at some examples as well, but I am unable to see what I have done differently from them. And I would also expect my original data to be centered. Any help is appreciated. Code: n = 10 smile = np.zeros((n,n)) a = 0.5 smile[2,2] = a smile[3,2] = a smile[2,7] = a smile[3,7] = a smile[6,2] = a smile[6,7] = a smile[7,3:7] = a plt.imshow(smile) plt.show() #RECTBIVARIATESPLINE #making interpolation function x = np.arange(n) y = x z = smile interpolation_funk = scipy.interpolate.RectBivariateSpline(x,y,z) #using interpolation x_new = np.arange(2*n) y_new = x_new Z_new = interpolation_funk(x_new,y_new) #plotting new function plt.imshow(Z_new)
resampling/"upscaling" using scipy.interpolate.RectBivariateSpline no diffrence
Note x_new = np.arange(2*n) : you are evaluating the interpolant outside of the original data range ([0, 9] x [0, 9]) and you get all zeros for the extrapolation. Use x_new = np.arange(2*n) / 2 or some such to actually interpolate between the data points.
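For instance, a small sketch of the corrected evaluation (np.linspace is just one way to stay inside the data range while doubling the resolution):

    # evaluate on a finer grid that stays within [0, n-1]
    x_new = np.linspace(0, n - 1, 2 * n)
    y_new = x_new
    Z_new = interpolation_funk(x_new, y_new)

    plt.imshow(Z_new)
    plt.show()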
76396174
76396198
I have a Service method to update a user from the database: const updateUser = async (user) => { const {firstName, lastName, email, password, phone, dob, countryid, gender, address, role, id} = user; const sql = `UPDATE user set user_firstName = ?, user_lastName = ?, user_email = ?, user_password = ?, user_phoneNumber = ?, user_dob= ?, user_countryId = ?, user_gender = ?, user_address = ?, user_role= ? WHERE client_id = ?`; try { await db.query(sql, [firstName, lastName, email, password, phone, dob, countryid, gender, address, role, id]); return { message: "records updated successfully." } } catch (error) { return { message: "Failed to updated" } } } whenever I log user alone it can read it, but it's not reading any other properties ( firstName etc...) and getting this error: Bind parameters must not contain undefined. To pass SQL NULL specify JS null. preview of console.log(user): my route is working perfectly, the same function has been used before without any issues
Why is my SQL query not updating user properties other than 'user' in Node.js and MySQL?
The user argument you're printing out isn't an object representing a user, it's an array with a single such element. In addition, the object itself doesn't have the firstName and lastName properties you're trying to use, it has firstname and lastname (notice the lowercase ns). Since they aren't there, they get undefined values. Assuming you intended to pass an array to the function, you should probably iterate over all the users and update each of them (a for...of loop lets you keep using await). Regarding the properties, if you replace firstName and lastName with firstname and lastname, respectively, you should be OK: const updateUser = async (users) => { for (const user of users) { const {firstname, lastname, email, password, phone, dob, countryid, gender, address, role, id} = user; // Here const sql = `UPDATE user set user_firstName = ?, user_lastName = ?, user_email = ?, user_password = ?, user_phoneNumber = ?, user_dob= ?, user_countryId = ?, user_gender = ?, user_address = ?, user_role= ? WHERE client_id = ?`; try { await db.query(sql, [firstname, lastname, email, password, phone, dob, countryid, gender, address, role, id]); // And here! } catch (error) { return { message: "Failed to updated" } } } return { message: "records updated successfully." } };
76394699
76394770
**I have a mongo document as below. ** { "metadata": { "docId": "7b96a" }, "items": { "content": "abcd", "contentWithInfo": "content with additional info" } } I want to project content field based on the condition whether contentWithInfo field is present or not. If contentWithInfo is present, it's value should be projected as content field value and contentWithInfo should be empty. Otherwise content should be projected as is. Is it possible? I tried the following shell query: db.collection1.aggregate([ { "$match": { "metadata.docId": { "$in": [ "7b96a" ] } } }, { "$unwind": "$items" }, { "$project": { "metadata": 1, "items.content": { "$cond": { "if": { "$eq": [ "$items.contentWithInfo", null ] }, "then": "$items.content", "else": "$items.contentWithInfo" } } } } ]) If contentWithInfo is present, it is returning the following: { "metadata": { "docId": "7b96a" }, "items": { "content": "content with additional info" } } If contentWithInfo is not present, it is returning the following: { "metadata": { "docId": "7b96a" }, "items": {} } whereas I expect it to return { "metadata": { "docId": "7b96a" }, "items": { "content": "abcd" } }
Conditionally project a field value in mongodb
Approach 1 Instead of checking items.contentWithInfo is null, check whether the items.contentWithInfo is missing with $type operator. db.collection.aggregate([ { "$match": { "metadata.docId": { "$in": [ "7b96a" ] } } }, { "$unwind": "$items" }, { "$project": { "metadata": 1, "items.content": { "$cond": { "if": { "$eq": [ { $type: "$items.contentWithInfo" }, "missing" ] }, "then": "$items.content", "else": "$items.contentWithInfo" } } } } ]) Demo Approach 1 @ Mongo Playground Another approach, if you want items.content as default value if items.contentWithInfo is missing or null, you can use $ifNull operator. db.collection.aggregate([ { "$match": { "metadata.docId": { "$in": [ "7b96a" ] } } }, { "$unwind": "$items" }, { "$project": { "metadata": 1, "items.content": { $ifNull: [ "$items.contentWithInfo", "$items.content" ] } } } ]) Demo Approach 2 @ Mongo Playground
76396065
76396319
I have this error when launching a g4dn.xlarge instance: You have requested more vCPU capacity than your current vCPU limit of 0 allows for the instance bucket that the specified instance type belongs to. Please visit http://aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit. However, my Service Quota looks like this: "All G and VT Spot Instance Requests" is 8. And I am not launching another G instance at the same time. Where should I check? Or is this relevant? (EC2 -> Limits shows the Limits page deactivated, although I can jump to the Service Quotas page from the link on that page.)
"You have requested more vCPU capacity", when launching G instance
Ensure you are requesting a spot instance when you configure the launch of your vm. There is a check box under the Advanced section of the launch wizard.
76390618
76397387
I am building an Azure DevOps Release pipeline and I have a WIX config file in my source code with the following structure: <?xml version="1.0" encoding="utf-8"?> <Include> <?define Key ="MyValue"?> </Include> I would like to replace "MyValue" with "MyNewValue" using the Replace Tokens package (link here). Following the answer given here, I have <Include> as my token prefix and </Include> as my token suffix, and I've added the corresponding pipeline variable (screenshot attached). When I build my pipeline it finds the file correctly, but doesn't find any variables to replace. Am I missing something to link the Replace Tokens step to the pipeline variables? Alternatively, I was thinking the '<?' tags could be throwing it off, or perhaps it's just not compatible with the WIX xml files?
How to perform XML Element substitution in config.wxi using Replace Tokens?
Why not just pass the preprocessor variable via the .wixproj or command-line (whichever way you are building). That way you avoid the include file completely and have a much simpler solution. I cover this technique in Episode 12 of the Deployment Dojo - All the Ways to Change. Variables and Variables. Directories and Properties: <Project Sdk="WixToolset.Sdk/4.0.0"> <PropertyGroup> <DefineConstants>Key=MyValue</DefineConstants> </PropertyGroup> </Project> Then you can easily override the value with msbuild -p:Key=MyNewValue.
76394516
76394776
I am attempting to build a function that processes data and subsets across two combinations of dimensions, grouping on a status label and sums on price creating a single row dataframe with the different combinations of subsets of the summed prices as output. edit to clarify, what I'm looking for is to subset on two different combinations of dimensions; a time delta and an association label. I'm then looking to group on a different status label (which is different from the association label) and sum those on price. Combinations of subsets: the association labels are in the "Association Label" column and the three of interest are ["SDAR", "NSDCAR", "PSAR"] there are others in the column/data but they can be ignored the time interval are [7, 30, 60, 90, 120, None] and are in the "Status Date" column What's being grouped and summed as per those combination of subsets: The Status Labelled are transaction statuses which are to be grouped on as per the different combinations of the above subsets from time deltas and association labels. They include ["Active","Pending","Sold",Withdrawn","Contingent","Unknown"] (this is not an exhaustive list but just an example) And finally ['List Price (H)'] which is to be summed per each of those status labelled and as per each combination of the fist two subsets. So example columns of desired output would be something like PSAR_7_Contingent_price or SDAR_60_Withdrawn_price This builds off of this question and answer which worked fantastic for value counts, but I'm having difficulty modifying it for summing on a price variable. The code I used to build off of is def crossubsets(df): labels = ["SDAR", "NSDCAR", "PSAR"] time_intervals = [7, 30, 60, 90, 120, None] group_dfs = df.loc[ df["Association Label"].isin(labels) ].groupby("Association Label") data = [] for l, g in group_dfs: for ti in time_intervals: s = ( g[g["Status Date"] > (pd.Timestamp.now() - pd.Timedelta(ti, "d"))] if ti is not None else g ) data.append(s["Status Labelled"].value_counts().rename(f"counts_{l}_{ti}")) return pd.concat(data, axis=1) #with optional .T to have 18 rows instead of cols # additional code to flatten the output to a (1, 180) dataframe counts_processeed = counts_processeed.unstack().to_frame().sort_index(level=1).T counts_processeed .columns = counts_processeed.columns.map('_'.join) This worked great for the value_counts per Status Labelled, but now I'm looking to sum the associated price per those that Status Labelled, and across those dimensions of subsets. I naively attempted to modify the above function with: def crossubsetsprice(df): labels = ["SDAR", "NSDCAR", "PSAR"] time_intervals = [7, 30, 60, 90, 120, None] group_dfs = df.loc[ df["Association Label"].isin(labels) ].groupby("Association Label") data = [] for l, g in group_dfs: for ti in time_intervals: s = ( g[g["Status Date"] > (pd.Timestamp.now() - pd.Timedelta(ti, "d"))] if ti is not None else g ) data.append(s['List Price (H)'].sum().rename(f"price_{l}_{ti}")) return pd.concat(data, axis=1) #with optional .T to have 18 rows instead of cols But that throws and error AttributeError: 'numpy.float64' object has no attribute 'rename' and I don't think makes much sense or would get the desired output anyway. The alternative I want to avoid, but I know would work, is creating 18 distinct functions for each of combination of subsets then concatinating the output. 
An example would be: def price_PSAR_90(df): subset_90 = df[df['Status Date'] > (datetime.now() - pd.to_timedelta("90day"))] subset_90_PSAR= subset_90[subset_90['Association Label']=="PSAR"] grouped_90_PSAR = subset_90_PSAR.groupby(['Status Labelled']) price_summed_90_PSAR = (pd.DataFrame(grouped_90_PSAR['List Price (H)'].sum())) price_summed_90_PSAR.columns = ['Price'] price_summed_90_PSAR = price_summed_90_PSAR.reset_index() price_summed_90_PSAR = price_summed_90_PSAR.T price_summed_90_PSAR = price_summed_90_PSAR.reset_index() price_summed_90_PSAR.drop(price_summed_90_PSAR.columns[[0]], axis=1, inplace=True) price_summed_90_PSAR_header = price_summed_90_PSAR.iloc[0] #grab the first row for the header price_summed_90_PSAR = price_summed_90_PSAR[1:] #take the data less the header row price_summed_90_PSAR.columns = price_summed_90_PSAR_header return price_summed_90_PSAR The last code snippet works, but without looping it would need to be repeated with the time delta and association label being changed for each combination, and then relabelling the output columns and concatenating them together, which I want to avoid if possible.
creating a function looping through multiple subsets and then grouping and summing those combinations of subsets
Maybe you can try to use a dict for data instead of a list. Something like: def crossubsetsprice(df): labels = ["SDAR", "NSDCAR", "PSAR"] time_intervals = [7, 30, 60, 90, 120, None] group_dfs = df.loc[ df["Association Label"].isin(labels) ].groupby(["Association Label", 'Status Labelled']) data = {} # HERE for (l1, l2), g in group_dfs: for ti in time_intervals: s = ( g[g["Status Date"] > (pd.Timestamp.now() - pd.Timedelta(ti, "d"))] if ti is not None else g ) data[(l1, l2, ti)] = s['List Price (H)'].sum() # HERE names = ['Association Label', 'Status Labelled', 'Time Interval'] return pd.Series(data, name='Price').rename_axis(names) # HERE Output: >>> crossubsetsprice(df) Association Label Status Labelled Time Interval NSDCAR Active 7.0 1393 30.0 6090 60.0 11397 90.0 16540 120.0 21660 ... SDAR Withdrawn 30.0 3167 60.0 8897 90.0 15768 120.0 21806 NaN 28379 Name: Price, Length: 108, dtype: int64 Minimal Reproducible Example: import pandas as pd import numpy as np N = 1000 rng = np.random.default_rng(42) labels = rng.choice(["SDAR", "NSDCAR", "PSAR"], N) status = rng.choice(["Active", "Pending", "Sold", "Withdrawn", "Contingent", "Unknown"], N) today = pd.Timestamp.today() start = pd.Timestamp('2023-01-01 00:00:00') offsets = rng.integers(0, (today - start).total_seconds(), N) dates = start + pd.to_timedelta(offsets, unit='S') prices = rng.integers(1, 1001, N) df = pd.DataFrame({'Association Label': labels, 'Status Date': dates, 'Status Labelled': status, 'List Price (H)': prices})
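If you still want the single-row output with columns named like PSAR_7_Contingent_price (as in the original counts version), one possible follow-up is to flatten the Series returned by crossubsetsprice. This is only a sketch; the label_interval_status_price column-name pattern is my assumption based on the examples in the question:

prices = crossubsetsprice(df)  # Series indexed by (Association Label, Status Labelled, Time Interval)
flat = prices.to_frame().T     # one row, MultiIndex columns
# build flat column names such as "PSAR_7_Contingent_price"; a None interval becomes "None"
flat.columns = [
    f"{label}_{'None' if pd.isna(ti) else int(ti)}_{status}_price"
    for label, status, ti in flat.columns
]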
76397512
76397558
I'm running ggplot2 v3.4.1. I created this two-legend plot, which by default places the year2 size legend below the cty color legend. However, I would like the size legend to be on top. library(tidyverse) mpg$year2 = factor(mpg$year) values = c(2,4); names(values) = c("1999", "2008") p = mpg %>% ggplot(aes(x = cty, y = hwy, color = cty, size = year2)) + geom_point() + scale_size_manual(name = "year2", values = values) p Therefore, I used guides() to specify the legend ordering, but it changes the continuous color legend cty to discrete: p + guides(size = guide_legend(order = 1), color = guide_legend(order = 2)) I saw this post, ggplot guide_legend argument changes continuous legend to discrete, but am unable to figure out how to use guide_colorbar() when you have 2 or more legends. How do I change my code to keep the cty legend as continuous? Thanks
ggplot - ordering legends with guides changes continuous legend to discrete
It's simply p + guides(size = guide_legend(order = 1), color = guide_colorbar(order = 2)). Using guide_colorbar() keeps the continuous cty scale as a colorbar while guide_legend() stays on the discrete size scale; the order arguments then control which guide is drawn first, so the size legend appears on top.
76396169
76396390
Is there a more efficient method to convert an integer, in the range 0..255 (in C uint8), to one byte? x = 100 x.to_bytes(1, "big")
python3, converting integer to bytes: which are the alternatives to using to_bytes() for small integers?
More efficient method for a single integer? Probably not. Here's a comparison of the common ones: python -m timeit -s "import struct" "struct.pack('<B', 100)" 2000000 loops, best of 5: 101 nsec per loop python -m timeit "(100).to_bytes(1)" 5000000 loops, best of 5: 81.1 nsec per loop python -m timeit "bytes([100])" 2000000 loops, best of 5: 199 nsec per loop If you're talking about multiple, bytes([integers]) will probably be the most efficient. For some premature optimization you can cache the function, which will net you a few nanoseconds: python -m timeit -s "tb = int.to_bytes" "tb(100,1)" 5000000 loops, best of 5: 78.6 nsec per loop And if you want the most efficient possible, it would probably be using a tuple: python -m timeit -s "b=tuple(map(int.to_bytes, range(256)))" "b[100]" 10000000 loops, best of 5: 20.9 nsec per loop But personally I find it disgusting.
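If the goal is readable code rather than micro-benchmarking, here is a small sketch of the two approaches above (the cached lookup table for single values, and bytes([...]) for several values at once); the names are my own:

# precompute all 256 one-byte values once, then index into the tuple
BYTE_TABLE = tuple(i.to_bytes(1, "big") for i in range(256))

def int_to_byte(x: int) -> bytes:
    return BYTE_TABLE[x]

print(int_to_byte(100))        # b'd'
print(bytes([100, 101, 102]))  # b'def' -- best when converting several ints at once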
76394463
76394795
Let t be the time tick, i.e. 1, 2, 3, 4, 5, ... I want to calculate and plot a cumulative decaying function f(inits[], peaks[], peak-ticks, zero-ticks), preferably in Python. Where: - inits[] is a list of points at time/tick t where a new 'signal' is introduced - peaks[] is a list of values which must be reached after peak-ticks (corresponding to inits) - peak-ticks is how many ticks it takes to reach the next peak value - zero-ticks is how many ticks it takes to reach zero from the peak For example: f(inits=[10,15,18], peaks=[1,1,1], peak-ticks=1, zero-ticks=10) in this case decay takes 10 ticks, i.e. 0.1 per tick. at tick: 10! result is 0 11. = 1 12. = 0.9 ..... 15! = 0.6 + 0 = 0.6 16. = 0.5 + 1 = 1.5 17. = 0.4 + 0.9 = 1.3 18! = 0.3 + 0.8 + 0 = 1.1 19. = 0.2 + 0.7 + 1 = 1.9 20. = 0.1 + 0.6 + 0.9 = 1.6 ..... PS> As a complication, what if the decay is exponential, like 1/x?
Simulate decaying function
For the base case you mentioned, it is actually pretty simple: you just need to define a triangular function that returns the contribution of a specific signal at the current tick t. Then, just sum the contributions of all signals at tick t; that is your answer. In the code below, I implemented the decaying function as an infinite generator, so I have to use islice to define how many ticks to compute (or maybe the start and end ticks). You could also implement it as a normal function, you'd just have to pass in the start and end ticks. from itertools import count, islice def fdecay(inits, peaks, ptks, ztks): for t in count(): yield sum(triang(p, i, i+ptks, i+ptks+ztks, t) for i, p in zip(inits, peaks)) def triang(ymax, xa, xb, xc, x): if x < xa: return 0 if x < xb: return ymax * (x-xa) / (xb-xa) if x < xc: return ymax * (xc-x) / (xc-xb) return 0 x = list(islice(fdecay([10,15,18], [1,1,1], 1, 10), 30)) print(x) # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0.9, 0.8, 0.7, 0.6, 1.5, 1.3, 1.1, 1.9, 1.6, 1.3, 1.1, 0.9, 0.7, 0.5, 0.3, 0.2, 0.1, 0] If you want exponential decay, just switch the triangular function to the exponential function at time t (with the appropriate params).
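For the exponential complication at the end of the question, a rough sketch of one possible kernel follows. The exact 1/x shape isn't specified, so the form chosen here (ramp up to the peak, then decay as peak / (1 + ticks since the peak)) is an assumption of mine, as are the function names:

from itertools import count  # already imported in the snippet above

def expdecay(ymax, xa, xb, x):
    # ramp up linearly from xa to the peak at xb, then decay like 1/x afterwards
    if x < xa:
        return 0
    if x < xb:
        return ymax * (x - xa) / (xb - xa)
    return ymax / (1 + (x - xb))

def fdecay_exp(inits, peaks, ptks):
    # zero-ticks no longer applies: a 1/x decay only approaches zero
    for t in count():
        yield sum(expdecay(p, i, i + ptks, t) for i, p in zip(inits, peaks))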
76396125
76396459
I have a Rails 7 project where I've got tables Pipelines class Pipeline < ApplicationRecord has_many :states, inverse_of: :pipeline, dependent: :destroy end States class State < ApplicationRecord belongs_to :pipeline, inverse_of: :states has_many :items, inverse_of: :state end Items class Item < ApplicationRecord belongs_to :state, inverse_of: :items end I wrote a query to limit only specific items on state: @pipeline = Pipeline.joins(states: :items) .where(states: { items: { name: "Lorem" } }) .find_by(id: '123') query: Pipeline Load (0.7ms) SELECT "pipelines".* FROM "pipelines" INNER JOIN "states" ON "states"."pipeline_id" = "pipelines"."id" INNER JOIN "items" ON "items"."state_id" = "states"."id" WHERE "items"."name" = $1 AND "pipelines"."id" = $2 LIMIT $3 [["name", "Lotem"], ["id", "123"], ["LIMIT", 1]] but once I call states from pipeline it generates a new query without the limitation: puts @pipeline.states.inspect State Load (0.4ms) SELECT "states".* FROM "states" WHERE "states"."pipeline_id" = $1 /* loading for inspect */ LIMIT $2 [["pipeline_id", "123"], ["LIMIT", 11]] I'm expecting that it should grab the states from the collection it has already loaded; am I missing something?
Rails 7 - Accessing joined model generates a new query
"but once I call states from pipeline it generates a new query without limitation" - yeah, that's how joins work; it doesn't do more than that, so the docs don't even bother adding more information. If you want to access the states within a single query (really it'll end up being 2 queries) you could use includes instead; ActiveRecord::QueryMethods#includes: "Specify relationships to be included in the result set. It allows you to access the address attribute of the User model without firing an additional query." Your query would change slightly to: Pipeline .includes(states: :items) .where(items: { name: "Lorem" }) .find('123') The fact that invoking states on a Pipeline instance makes a new query is because you're invoking an instance method on that object, which by definition makes a new query. Active Record won't know that you're referring to something that was already loaded by the query unless it is specified otherwise (by using a custom select clause or similar). Notice Pipeline.joins(states: :items).where(items: { name: "Lorem" }) is the same as Pipeline.joins(states: :items).where(states: { items: { name: "Lorem" } }); you don't need to nest items under states.
76397544
76397591
I wanted to know: when you inject your DbContext with DI into a class, should you also use a using scope within the methods that call that context? Or does DI handle disposal after the method has been executed, and/or is the context transient by default so that it is safe?
When injecting the DbContext using DI and using its private field in a query, should you still use the "using" scope?
When you inject a DbContext using dependency injection (DI), you generally don't need to use the using scope explicitly within the methods calling that context. AddDbContext registers the context with a scoped lifetime by default, so in ASP.NET Core one instance is created per request and the DI container disposes of it automatically when that scope ends; with a transient registration, a new instance is created each time the context is resolved and the container likewise disposes of it. Either way, you don't need to manually dispose of the context using the using statement. Microsoft Docs
76395882
76396461
I am making a web service with SvelteKit and Firebase. When users save images to Firebase Storage and other users try to use these images, I want to create a signed URL and show it on the page to prevent hotlinking. I searched and found that there is a function called getSignedUrl that generates a signed URL, but there is no official page describing it in the Firebase documentation. Where can I get some example functions or information related to this?
How can I generate signed URLs for accessing Firebase Storage images?
The Firebase SDK for Cloud Storage uses a different type of URL, called a download URL. You can generate a download URL by calling getDownloadURL with/on a reference to the file, as shown in the documentation on downloading data through a URL.
76396176
76396467
<image-cropper [imageChangedEvent]="imageChangedEvent" [maintainAspectRatio]="true" [aspectRatio]="4 / 4" format="jpg" (imageCropped)="imageCropped($event)" roundCropper = "true"> </image-cropper> [screenshot attached for your reference] I used roundCropper = "True", but it's not working and throws the error: Type 'string' is not assignable to type 'boolean'. If I try to run the same code on StackBlitz, it works. I have also tried it with roundCropper = true, but it gives the same error. I want to use the round cropper in my ngx-image-cropper.
ngx-image-cropper, roundCropper = "true" not working, it's showing this error - Type 'string' is not assignable to type 'boolean'
Try surrounding roundCropper with square brackets so the value is bound as a boolean expression rather than a string: [roundCropper] = "true"
76387496
76394801
I have error in the fifth line when asp.net try to connect db. 2023-06-02 09:36:59 warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60] 2023-06-02 09:36:59 Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed. 2023-06-02 09:37:00 info: Microsoft.EntityFrameworkCore.Infrastructure[10403] 2023-06-02 09:37:00 Entity Framework Core 6.0.8 initialized 'AppIdentityDbContext' using provider 'Pomelo.EntityFrameworkCore.MySql:6.0.2' with options: ServerVersion 0.0-mysql 2023-06-02 09:37:00 fail: Microsoft.EntityFrameworkCore.Database.Connection[20004] 2023-06-02 09:37:00 An error occurred using the connection to database '' on server 'db2'. 2023-06-02 09:37:00 Unhandled exception. System.InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseMySql' call. 2023-06-02 09:37:00 ---> MySqlConnector.MySqlException (0x80004005): Unable to connect to any of the specified MySQL hosts. 2023-06-02 09:37:00 at MySqlConnector.Core.ServerSession.ConnectAsync(ConnectionSettings cs, MySqlConnection connection, Int32 startTickCount, ILoadBalancer loadBalancer, IOBehavior ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/Core/ServerSession.cs:line 433 2023-06-02 09:37:00 at MySqlConnector.MySqlConnection.CreateSessionAsync(ConnectionPool pool, Int32 startTickCount, Nullable`1 ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlConnection.cs:line 926 2023-06-02 09:37:00 at MySqlConnector.MySqlConnection.OpenAsync(Nullable`1 ioBehavior, CancellationToken cancellationToken) in /_/src/MySqlConnector/MySqlConnection.cs:line 406 2023-06-02 09:37:00 at MySqlConnector.MySqlConnection.Open() in /_/src/MySqlConnector/MySqlConnection.cs:line 369 2023-06-02 09:37:00 at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.OpenDbConnection(Boolean errorsExpected) 2023-06-02 09:37:00 at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.OpenInternal(Boolean errorsExpected) 2023-06-02 09:37:00 at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.Open(Boolean errorsExpected) 2023-06-02 09:37:00 at Pomelo.EntityFrameworkCore.MySql.Storage.Internal.MySqlRelationalConnection.Open(Boolean errorsExpected) 2023-06-02 09:37:00 at Pomelo.EntityFrameworkCore.MySql.Storage.Internal.MySqlDatabaseCreator.<>c__DisplayClass18_0.<Exists>b__0(DateTime giveUp) 2023-06-02 09:37:00 at Microsoft.EntityFrameworkCore.ExecutionStrategyExtensions.<>c__DisplayClass12_0`2.<Execute>b__0(DbContext c, TState s) 2023-06-02 09:37:00 at Pomelo.EntityFrameworkCore.MySql.Storage.Internal.MySqlExecutionStrategy.Execute[TState,TResult](TState state, Func`3 operation, Func`3 verifySucceeded) 2023-06-02 09:37:00 --- End of inner exception stack trace --- 2023-06-02 09:37:00 at Pomelo.EntityFrameworkCore.MySql.Storage.Internal.MySqlExecutionStrategy.Execute[TState,TResult](TState state, Func`3 operation, Func`3 verifySucceeded) 2023-06-02 09:37:00 at Microsoft.EntityFrameworkCore.ExecutionStrategyExtensions.Execute[TState,TResult](IExecutionStrategy strategy, TState state, Func`2 operation, Func`2 verifySucceeded) 2023-06-02 09:37:00 at Microsoft.EntityFrameworkCore.ExecutionStrategyExtensions.Execute[TState,TResult](IExecutionStrategy strategy, TState state, Func`2 operation) 2023-06-02 09:37:00 at 
Pomelo.EntityFrameworkCore.MySql.Storage.Internal.MySqlDatabaseCreator.Exists(Boolean retryOnNotExists) 2023-06-02 09:37:00 at Pomelo.EntityFrameworkCore.MySql.Storage.Internal.MySqlDatabaseCreator.Exists() 2023-06-02 09:37:00 at Microsoft.EntityFrameworkCore.Migrations.HistoryRepository.Exists() 2023-06-02 09:37:00 at Microsoft.EntityFrameworkCore.Migrations.HistoryRepository.GetAppliedMigrations() 2023-06-02 09:37:00 at Microsoft.EntityFrameworkCore.RelationalDatabaseFacadeExtensions.GetAppliedMigrations(DatabaseFacade databaseFacade) 2023-06-02 09:37:00 at Microsoft.EntityFrameworkCore.RelationalDatabaseFacadeExtensions.GetPendingMigrations(DatabaseFacade databaseFacade) 2023-06-02 09:37:00 at ClothingShop.Models.IdentitySeedData.EnsurePopulated(IApplicationBuilder app) in /src/Models/IdentitySeedData.cs:line 16 2023-06-02 09:37:00 at System.Threading.Tasks.Task.<>c.<ThrowAsync>b__128_1(Object state) 2023-06-02 09:37:00 at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute() 2023-06-02 09:37:00 at System.Threading.ThreadPoolWorkQueue.Dispatch() 2023-06-02 09:37:00 at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart() 2023-06-02 09:37:00 at System.Threading.Thread.StartCallback() I have an asp.net core project and two databases. I am creating a docker compose file. version: '3.8' services: web: build: . ports: - "8000:80" depends_on: - db1 - db2 db1: image: mysql:8.0.30 environment: MYSQL_ROOT_PASSWORD: root MYSQL_DATABASE: clothingshop1 volumes: - ./db1:/docker-entrypoint-initdb.d ports: - "3307:3306" db2: image: mysql:8.0.30 environment: MYSQL_ROOT_PASSWORD: root MYSQL_DATABASE: identity volumes: - ./db2:/docker-entrypoint-initdb.d ports: - "3308:3306" I am using generated file docker #See https://aka.ms/customizecontainer to learn how to customize your debug container and how Visual Studio uses this Dockerfile to build your images for faster debugging. FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base WORKDIR /app EXPOSE 80 EXPOSE 443 FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build WORKDIR /src COPY ["ClothingShop.csproj", "."] RUN dotnet restore "./ClothingShop.csproj" COPY . . WORKDIR "/src/." RUN dotnet build "ClothingShop.csproj" -c Release -o /app/build FROM build AS publish RUN dotnet publish "ClothingShop.csproj" -c Release -o /app/publish /p:UseAppHost=false FROM base AS final WORKDIR /app COPY --from=publish /app/publish . ENTRYPOINT ["dotnet", "ClothingShop.dll"] In programm.cs i have: builder.Services.AddDbContext<StoreDbContext>(opts => { opts.UseMySql(builder.Configuration.GetConnectionString("ClothingShopConnection"), new MySqlServerVersion(new Version()), b => b.EnableRetryOnFailure()); }); and builder.Services.AddDbContext<AppIdentityDbContext>(options => options.UseMySql(builder.Configuration.GetConnectionString("IdentityConnection"), new MySqlServerVersion(new Version()))); In appsettings.json i have: "AllowedHosts": "*", "ConnectionStrings": { "ClothingShopConnection": "server=db1;port=3307;user=root;password=root;Pooling=true;Max Pool Size=200;database=clothingshop1", "IdentityConnection": "server=db2;port=3308;user=root;password=root;database=identity" }, I run docker-compose up -d in cmd. And i have this result Created container but asp won't start. 
I tried editing appsettings.json to: "ClothingShopConnection": "server=localhost;port=3306;user=root;password=root;Pooling=true;Max Pool Size=200;database=clothingshop1", "IdentityConnection": "server=localhost;port=3306;user=root;password=root;database=identity" "ClothingShopConnection": "server=localhost;port=3307;user=root;password=root;Pooling=true;Max Pool Size=200;database=clothingshop1", "IdentityConnection": "server=localhost;port=3308;user=root;password=root;database=identity" "ClothingShopConnection": "server=db-1;port=3307;user=root;password=root;Pooling=true;Max Pool Size=200;database=clothingshop1", "IdentityConnection": "server=db-2;port=3308;user=root;password=root;database=identity" but to no avail. An interesting fact is that the MySQL Workbench connector sees these databases and makes it possible to connect to them through localhost:3307 and :3308, respectively. The connection succeeds and the databases are created correctly. I've looked at similar threads but haven't found an answer.
ASP.Net Core can't connect to MySql database in docker
I solved my problem by configuring the connection string like this: "ConnectionStrings": { "ClothingShopConnection": "server=dbClothingShop;port=3306;user=root;password=root;Pooling=true;Max Pool Size=200;database=clothingshop1", "IdentityConnection": "server=dbIdentity;port=3306;user=root;password=root;database=identity" } And apparently part of the problem was due to incorrect reading of tables from the database. Table names with capital letters were expected, but in the database they are lowercase (apparently something broke when importing and exporting the database). I solved it with a DataAnnotations attribute: [Table("size")] public class Size { //SizeID... } Thanks a lot for the great questions @QiangFu
76396262
76396484
I am creating a bar graph using Plotly Express inside a Dash application. The graph is displayed, but I am having an issue with the height. Currently I am using the default height and width. For example: if the dataframe's field column contains 3 entries, the graph looks OK; if it contains 10 entries, the bar width is automatically reduced while the height remains the same, and the graph looks congested and hard to read. figure = ( px.bar( data_frame=dataframe, x="size", y="field", title="Memory Usage", text="size", # width=400, # height=400, orientation="h", labels={"size": "size in byte(s)"}, template=template, ).update_traces(width=0.4) .update_layout(autosize=True) ) dcc.Graph(id="memory_bar", figure=figure, className="dbc") Is it possible for the height to be auto-resized depending on the number of entries? Also, I am using the orientation as horizontal. I tried autosize=True but it had no effect on the height; it remains the same.
Plotly: auto resize height
It is possible to define a dynamic width and height based on the dataframe: The number of categories can be found with dataframe['field'].nunique() (assuming you are using pandas). It will impact the height of the figure (since the bar chart is horizontal). The number of entries can be found with dataframe.shape[0] and will impact the width of the figure. You could be more precise if you use dataframe.groupby("field").count()["size"].max() instead; it returns the maximum number of entries per category. Then, we can define two methods for computing the height and width of the figure: def num_fields_based_height(num_fields: int) -> int: padding = 150 # arbitrary value depending on legends row_size = 100 # arbitrary value return padding + row_size * num_fields def num_entries_based_width(num_entries: int) -> int: padding = 150 # arbitrary value depending on legends entry_size = 100 # arbitrary value return padding + entry_size * num_entries Then call these methods when declaring the figure: figure = ( px.bar( data_frame=dataframe, x="size", y="field", title="Memory Usage", text="size", width=num_entries_based_width(dataframe.shape[0]), height=num_fields_based_height(dataframe['field'].nunique()), orientation="h", labels={"size": "size in byte(s)"}, ) ) Now you need to find the right parameters (padding, entry_size, row_size) for your scenario.
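As a quick sanity check of the sizing functions above (using the arbitrary padding/row/entry values as given, which you would tune for your own data), a hypothetical dataframe with 5 distinct field values and at most 12 entries per field would yield height = 150 + 100 * 5 = 650 and width = 150 + 100 * 12 = 1350:

num_fields = dataframe["field"].nunique()
max_entries = dataframe.groupby("field").count()["size"].max()
print(num_fields_based_height(num_fields), num_entries_based_width(int(max_entries)))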
76397539
76397621
I am trying to render a polygon using the GL_TRIANGLES mode for pyglet.graphics.draw but have been running into issues. I have been attempting to render it like I've seen people do in many other places def draw(self): pyglet.graphics.draw( size=int(len(self.coords) / 2), mode=pyglet.gl.GL_TRIANGLES, position=('v2f', self.coords), ) but have been running into the following error: File "C:\Python311\Lib\site-packages\pyglet\graphics\__init__.py", line 52, in draw gl_type = vertexdomain._gl_types[fmt[0]] ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^ KeyError: 'v' Did the usage change? Not sure exactly what I'm doing wrong here.
Rendering a Polygon in Pyglet using GL_TRIANGLES - KeyError 'v' Error
v2f is the old format specification. In Pyglet Version 2.0, this has changed. See the documentation of draw(size, mode, **data). The first parameter is the number of vertices and each vertex has to have 3 components. The format is just f for float. e.g.: self.coords = [x0, y0, z0, x1, y1, z1, ...] pyglet.graphics.draw( size = len(self.coords) // 3, mode = pyglet.gl.GL_TRIANGLES, position = ('f', self.coords) ) Note that this will draw only black triangles. Set the colors attribute to draw a colorful shape. Minimal example: import pyglet window = pyglet.window.Window(800, 600) vertices = [100, 100, 0, 300, 100, 0, 200, 300, 0] colors = [255, 0, 0, 255, 0, 255, 0, 255, 0, 0, 255, 255] @window.event def on_draw(): window.clear() pyglet.graphics.draw( size = 3, mode = pyglet.gl.GL_TRIANGLES, position = ('f', vertices), colors = ('Bn', colors)) pyglet.app.run()
76394514
76394813
Everything here works just fine but I haven't found a way to make it so that when I press the start button it withdraws the Main Menu screen and displays the next one. It already creates the second screen when I run the code and it withdraws the MainMenu Screen before I even click the start button. https://replit.com/@AbdullahKaran/ConfusedSophisticatedFolders#main.py
Switching between Tkinter screens with withdraw() method on button click
Please work on how you ask questions, as low-quality questions can lead to your account being blocked; I've seen this happen before, so it's good advice. Now, you can do this by using the tkraise() method of the Frame widget; a minimal sketch is below. To learn more, please visit this website: https://www.pythontutorial.net/tkinter/tkraise/
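A minimal sketch of the tkraise() approach (the frame and button names here are hypothetical, since the original code is only available on the linked Replit):

import tkinter as tk

root = tk.Tk()

# two frames stacked in the same grid cell; tkraise() brings one to the front
main_menu = tk.Frame(root)
next_screen = tk.Frame(root)
for frame in (main_menu, next_screen):
    frame.grid(row=0, column=0, sticky="nsew")

tk.Label(main_menu, text="Main Menu").pack()
tk.Button(main_menu, text="Start", command=next_screen.tkraise).pack()

tk.Label(next_screen, text="Next screen").pack()
tk.Button(next_screen, text="Back", command=main_menu.tkraise).pack()

main_menu.tkraise()  # show the main menu first
root.mainloop()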
76396417
76396495
In the following example: template<typename T> struct MyList { typedef std::list<T> type1; }; template<typename T> class MyList2 { typename MyList<T>::type1 type2; }; I thought that both type1 and type2 are dependent names, since both of their types depend on the template parameter. However, why is the first one considered non-dependent, given that I can use typedef with it?
Dependent names vs non-dependent names
A dependent name is, in C++ terms, a name whose grammatical properties are dependent on a template parameter. Names can mean a lot of different things in C++, and the grammatical location where a name can be used depends on certain properties of that name. In order to understand what X x; is trying to do, the compiler needs to know what X is at this point in the program. If X names a type, then this is declaring a variable named x. The compiler doesn't need to know everything about X, but grammatically, in order to even begin to make sense of this text, the compiler does need to know that X here names a type. std::list is the name of a class template; the compiler can easily see that because that's how it is declared. It doesn't need to have the body of the declaration; template<typename T> class list; is sufficient. std::list<T> is an instantiation of a class template, so just from the declaration, the compiler knows that this is the name of a type. However, MyList<T>::type1 now requires more than just the declaration of the MyList template. It requires actually instantiating that template. And if T is itself a template parameter of the current code (as is the case for MyList2), then instantiation at the point of initially compiling the code is impossible. That instantiation must be delayed until MyList2 is itself given a concrete T to work with. But the compiler still has to make sense of this code: typename MyList<T>::type1 type2;. And to do that, it needs some idea of what MyList<T>::type1 actually is. Without instantiating MyList<T>. Which is why you have to tell it that it is a typename.
76397594
76397628
I am currently working on a C++ project that interacts with the JVM. More specifically, I have a block of code that will run before some Java function (let's call it funcABC) runs; within this block of code I am able to read/write the registers and stack of the JVM. I am able to get a JavaThread* ptr out of a register, and so far I have successfully been able to get important data from that JavaThread instance, such as the JNIEnv and the thread state. This is done by reconstructing the JavaThread structure in ReClass.NET, allowing me to access the variables stored within the structure. I would like to also get the parameters that are being passed to funcABC. I am told that they are stored somewhere within the JavaThread structure, but so far I have not been able to find them, and I don't see anything within the JDK sources that would suggest where they might be. Does anyone know how and where they are stored in the JavaThread? As an alternative I tried getting the parameters directly from the stack, only to find that there is no consistent layout: the parameters will be in a random order, sometimes on the stack and sometimes within registers like rdi, r9, and r8, or both, and their positions/order also change during runtime. By printing all the registers and a large chunk of the stack I was able to find them, they are all there, but using them in this current state is beyond impractical unless there is some sort of field somewhere that specifies where everything is located in that particular call, but I can't find anything like that.
Get parameters from JavaThread* before a call takes place
In the JVM, the method parameters are typically passed on the stack or stored in registers, depending on the platform's calling convention. However, the exact layout and location of these parameters can be complex and implementation-dependent. As you have observed, the layout may change during runtime, making it challenging to reliably extract the parameters directly from the stack or registers. To access the parameters passed to funcABC, you may need to dig deeper into the JVM implementation and examine the bytecode interpretation and method invocation process. The JVM uses a bytecode instruction set, and the parameters are usually pushed onto the stack before the method is invoked. The JVM then retrieves the parameters from the stack and places them in appropriate locations, such as registers or stack frames, for the method to access. Since you are already working with the JavaThread structure and have access to the JNIEnv and thread state, you might consider examining the thread's stack frames. The stack frames store local variables, including method parameters, and their layout can be platform-specific. You may need to navigate through the stack frames to locate the specific frame corresponding to funcABC and extract the parameters from there. However, keep in mind that accessing stack frames directly can be challenging and error-prone. Another option is to leverage JVM debugging interfaces, such as the Java Debug Interface (JDI), which provides a higher-level abstraction for inspecting and manipulating JVM internals. The JDI allows you to programmatically interact with the JVM, inspect variables, and retrieve method arguments using a more convenient API. I would recommend consulting JVM-specific documentation, such as the OpenJDK source code or documentation for the particular JVM implementation you are working with. Additionally, exploring JVM debugging and profiling tools might provide insights into how to access method parameters at runtime. Keep in mind that manipulating JVM internals at such a low level can be fragile and might lead to unpredictable behavior. Exercise caution and thoroughly test your code to ensure it works reliably across different JVM versions and configurations.
76394498
76394825
I am using Angular 14 and making a post request to my json server using the pattern like this: myApiCall(data:any): Observable<any> { return this.http.post<any>(url, data).pipe(catchError(this.handleError)); } ( as specified in https://angular.io/guide/http) And then wanted to add 401 (unauthorized) handling to handleError so I added this block if (error.status===401){//UnAthorized this.router.navigate(['login']); } to my handleError like so: private handleError(error: HttpErrorResponse) { if (error.status===401){//UnAthorized this.router.navigate(['login']); } ... return throwError( () => new Error('Something bad happened; please try again later.') ); } I can see in debugger that this.router.navigate(['login']); is hit, but it does not navigate to the login screen. same code does work in another place. The error in chrome console is : ERROR TypeError: Cannot read properties of (has file and line see below for details) undefined (reading 'router') at handleError (line where this.router.navigate(['login']); is) at catchError.js:10:39 at OperatorSubscriber._error (OperatorSubscriber.js:23:21) ... The top in call stack in the error is the .subscribe after myApiCall... So what happens to handleError when I call this.router.navigate(['login']); Does it cause a return? And if so what gets returned ? UPDATE After Igor's suggestion of adding return of([]) AND declaring handleError with ()=>{} syntax instead of normal, it worked. I wonder why the second part is needed. I still suspect it has something to do with how pipe works. This works: private handleError = (error: HttpErrorResponse) => { if (error.status===401)//UnAthorized { this.router.navigate(['login']); return of([]); } ... return throwError( () => new Error('Something bad happened; please try again later.') ); } This does not private handleError(error: HttpErrorResponse){ if (error.status===401)//UnAthorized { this.router.navigate(['login']); return of([]); } ... return throwError( () => new Error('Something bad happened; please try again later.') ); } After this.router.navigate instead of hitting return, control is transferred to the caller that results in an error that mentions router. Why? SECOND UPDATE: Actually the first part (return of...) is not necessary to make it work, only the second (making it a lambda expression) is enough.
handling 401 in angular, how does pipe work really?
Try this: myApiCall(data:any): Observable<any> { return this.http.post<any>(url, data).pipe( catchError((err) => { if (err.status === 401) { // Redirect if unauthorized this.router.navigate(['login']); } ... }) ); } Note the lambda syntax of the error handler declaration. It is required in order to pass in the context (this). Without it the 'router' is not known, hence the error. More details here
76396165
76396510
How do I create a grid of 7*7 sq km cells using latitude and longitude values? These values should be the centroid of a single square in the grid. I am not sure if I am doing it the right way. I tried st_make_grid from the sf (Simple Features) library, but that shows me an empty plot. MyGrid <- st_make_grid(DF, cellsize = c(0.07, 0.07), square = TRUE, crs = 4326) Below is my example DF DF <- structure(list(lat = c(43.25724, 43.25724, 43.25724, 43.25616, 43.25616, 43.25616), lon = c(-96.01955, -95.98172, -95.92336, -96.40973, -96.25733, -96.17735)), class = "data.frame", row.names = c(NA, 6L)) ## > DF ## lat lon ## 1 43.25724 -96.01955 ## 2 43.25724 -95.98172 ## 3 43.25724 -95.92336 ## 4 43.25616 -96.40973 ## 5 43.25616 -96.25733 ## 6 43.25616 -96.17735 Thanks
How to create a spatial gridlines using Latitude and Longitude in R
from the documentation of st_make_grid: "Create a square or hexagonal grid covering the bounding box of the geometry of an sf or sfc object" so you need to convert your dataframe of point coordinates to an sf object "the_points" (and reproject to a projection accepting metric length units): library(sf) the_points <- st_sf(geometry = DF[c('lon', 'lat')] |> as.matrix() |> st_multipoint() |> st_sfc() |> st_cast('POINT'), crs = 4326 ## geographic data (in degrees) ) |> ## convert to projected coordinates (to specify dimensions in m) ## take Google Mercator as first guess (EPSG code 3857) st_transform(3857) Create the grid (note that your points have only about a 100 m latitudinal range): the_grid <- st_make_grid(the_points, n = c(10, 1), cellsize = 7e3) ## cell size in metres (7 km) Inspect the result: plot(the_grid) plot(the_points, add = TRUE)
76397550
76397649
Basically I'm creating a flashcards kind of application, where you can either go through the flashcards (which are an array) or you can edit them. In the editing phase, 2 input boxes get rendered with a button next to them where you can edit the text or delete that entire index. The problem lies with deletion: once the delete button is clicked, the correct index is deleted in the array, which I verified using console logs, but in the UI the bottom index gets removed. So let's say I have an array with 4 indexes and I delete the 2nd; the UI will display 1 2 3, but the array itself will have 1 3 4. Not really sure how to fix it, but here is my code: const [flashcardsState, setFlashcardsState] = useState(flashcards); const handleDeleteClick = (index: number) => { const updatedFlashcards = [...flashcardsState]; updatedFlashcards.splice(index, 1); setFlashcardsState(updatedFlashcards); } const handleCardChange = (cardIndex: number, fieldIndex: number, newValue: string) => { const updatedFlashcards = [...flashcardsState]; updatedFlashcards[cardIndex][fieldIndex] = newValue; setFlashcardsState(updatedFlashcards); }; const renderEdit = () =>{ return ( <div className="card-container"> <div className="inputcardstext"> <h4>Front</h4> <h4>Back</h4> </div> {flashcardsState.map((flashcardsState, index) => ( <div key={index}> <div className="inputcards"> <input type="text" className='front-back' defaultValue={flashcardsState[0]} onChange={(e) => handleCardChange(index, 0, e.target.value)} /> <input type="text" className='front-back' defaultValue={flashcardsState[1]} onChange={(e) => handleCardChange(index, 1, e.target.value)} /> <button onClick={() => handleDeleteClick(index)}></button> </div> </div> ))} </div> ); }
incorrect array index being rendered using react tsx upon deletion of an index
It's deleting the right element; your issue is that you're using the array index as the key in React (<div key={index}>), which is a no-no. You're seeing this issue because of a combination of using the array index as the key and defaultValue. TL;DR: choose a different key, like const flashcards = [ ["key", "a", "1"], ["key", "b", "2"], ["key", "c", "3"] ]; const [flashcardsState, setFlashcardsState] = useState<string[][]>( flashcards ); //... {flashcardsState.map((flashcardsState, index) => ( <div key={flashcardsState[2]}> ... </div> ))} See this example on Codepen. Or, you can see your removal is working correctly if you change defaultValue to value (I explain why below). You should also install ESLint, which will warn you against using an array index as a key. What's happening is you've told React each element's unique identifier is the array index. The first element in your array has index 0. React says "ok, I'll render element with id 0, and I'll remember what I rendered, to compare it later." Then you remove the element at index 0, but your array still has an element at index 0. So you give your new array back to React. React then says: "Ok, I have the same element here with id 0, so I only need to change it if the new render of id 0 is different from what I already have." React re-renders the element with "id" 0, and even though it's a different element in the array, the render output is identical. This is because defaultValue is only set on the first render, not on subsequent ones. React says "ok, nothing in id 0 has changed, I'll move on and not update the DOM." React gets to "id" 2, the last element in the array, and sees "ah, there is no longer an id of 2, I shall remove the last component in the list for you."
76395825
76396539
I have several matrices and I would like to apply something like class(matrix) <- "numeric" to all of them at once, i.e. the class of all matrices should be changed to numeric. Do you know how to do this? dput(matrix[1:3,]) results in structure(c(285.789361223578, 282.564165145159, 273.633228540421, 256.789452806115, 260.808130130172, 241.718192100525, 266.765343174338, 267.881099879742, 250.710165724158, 284.365977942944, 281.670583188534, 268.735618144274, 264.118778035045, 262.856532484293, 254.31867428124, 286.250801086426, 284.585711210966, 268.984649181366, 286.17267370224, 284.429456442595, 267.478255555034, 275.10055847466, 274.141056537628, 259.477523118258, 246.454664766788, 252.470473349094, 232.699362188578, 284.998321458697, 283.73363442719, 269.555955678225, 0, 0, 0), dim = c(3L, 11L), dimnames = list(NULL, c("", "", "", "", "", "", "", "", "", "", "vec")))
Efficient way to change the class of several matrices in R
In this example, all matrix variables in the current environment are converted to numeric. See the warning in the case where a matrix cannot be converted to numeric. var1 <- matrix(1:10, 5, 2) var2 <- matrix(as.character(5:13), 3,3) var3 <- letters[1:5] var4 <- matrix(letters[1]) print(sapply(mget(ls()), typeof)) #> var1 var2 var3 var4 #> "integer" "character" "character" "character" for (i in ls()[sapply(mget(ls()), is.matrix)]) assign(i, as.numeric(get(i))) #> Warning in assign(i, as.numeric(get(i))): NAs introduced by coercion print(sapply(mget(ls()), typeof)) #> i var1 var2 var3 var4 #> "character" "double" "double" "character" "double" Created on 2023-06-03 with reprex v2.0.2