Dataset columns: QuestionId (string, length 8), AnswerId (string, length 8), QuestionBody (string, length 91 to 22.3k), QuestionTitle (string, length 17 to 149), AnswerBody (string, length 48 to 20.9k).
76387405
76387433
Let's imagine a situation where we have a queue positioned before the seize block, and this queue has the timeout property activated. In such cases, it is important to note that the seize queue cannot be reduced to zero: the seize queue always has a minimum capacity of 1. Because of this, one agent is taken out of the timeout property of the original queue and placed directly into the seize queue. This behavior is generally not desirable, particularly when dealing with a large number of agents or when running the simulation for extended periods, such as years, to collect statistics, especially for the timeout property of the original queue. So how can I avoid having an agent in the seize queue, or is there another way to achieve this?
Why can't the seize queue be zero in AnyLogic
You can manually avoid having agents in the Seize queue by using a Hold block upstream and only releasing agents into the Seize block if its queue is empty.
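A minimal sketch of that idea in AnyLogic's Java actions, assuming a Hold block named hold sits between the timeout queue and a Seize block named seize (the block names and the exact wiring are assumptions about your flowchart):

// "On enter" action of the Seize block: close the gate again,
// so the next agent keeps waiting (and timing out) in the original queue
hold.setBlocked(true);

// "On exit" action of the Seize block: the seize queue has drained,
// so let exactly one waiting agent through
if (seize.queueSize() == 0) hold.setBlocked(false);

With the Hold block starting blocked, at most one agent at a time ever leaves the timeout queue for the Seize block.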
76382756
76385488
I'm looking to find an efficient method of matching all values of vector x in vector y rather than just the first position, as is returned by match(). What I'm after essentially is the default behavior of pmatch() but without partial matching: x <- c(3L, 1L, 2L, 3L, 3L, 2L) y <- c(3L, 3L, 3L, 3L, 1L, 3L) Expected output: pmatch(x, y) [1] 1 5 NA 2 3 NA One way is to use ave(); however, this becomes slow and very memory inefficient as the number of groups increases: ave(x, x, FUN = \(v) which(y == v[1])[1:length(v)]) [1] 1 5 NA 2 3 NA Can anyone recommend an efficient way to achieve this, preferably (though not necessarily) in base R? Larger dataset for benchmarking: set.seed(5) x <- sample(5e3, 1e5, replace = TRUE) y <- sample(x, replace = TRUE)
Efficiently match all values of a vector in another vector
A variant in base R using split(): split the indices of both vectors by their values. Subset the second list with the names of the first, so that both have the same order. Change NULL to NA and bring the lengths of the second list up to those of the first. Reorder the indices of the second list by those of the first. x <- c(3L, 1L, 2L, 3L, 3L, 2L) y <- c(3L, 3L, 3L, 3L, 1L, 3L) a <- split(seq_along(x), x) b <- split(seq_along(y), y)[names(a)] b[lengths(b)==0] <- NA b <- unlist(Map(`length<-`, b, lengths(a)), FALSE, FALSE) `[<-`(b, unlist(a, FALSE, FALSE), b) #[1] 1 5 NA 2 3 NA I tried to exchange the part b <- split(seq_along(y), y)[names(a)] b[lengths(b)==0] <- NA with b <- list2env(split(seq_along(y), y)) b <- mget(names(a), b, ifnotfound = NA) but it was not faster. An Rcpp version: store the indices of the second vector in a queue for each unique value in an unordered_map. Iterate over all values of the first vector and take the indices from the queue. Rcpp::sourceCpp(code=r"( #include <Rcpp.h> #include <unordered_map> #include <queue> using namespace Rcpp; // [[Rcpp::export]] IntegerVector pm(const std::vector<int>& a, const std::vector<int>& b) { IntegerVector idx(no_init(a.size())); std::unordered_map<int, std::queue<int> > lut; for(int i = 0; i < b.size(); ++i) lut[b[i]].push(i); for(int i = 0; i < idx.size(); ++i) { auto search = lut.find(a[i]); if(search != lut.end() && search->second.size() > 0) { idx[i] = search->second.front() + 1; search->second.pop(); } else {idx[i] = NA_INTEGER;} } return idx; } )") pm(x, y) #[1] 1 5 NA 2 3 NA An Rcpp version specialized for this case: create a vector of the length of the maximum value of the first vector and count how many times each value is present. Create another vector of queues of the same length and store there the indices of the values of the second vector until the count from the first vector is reached. Iterate over all values of the first vector and take the indices from the queue. 
Rcpp::sourceCpp(code=r"( #include <Rcpp.h> #include <vector> #include <array> #include <queue> #include <algorithm> using namespace Rcpp; // [[Rcpp::export]] IntegerVector pm2(const std::vector<int>& a, const std::vector<int>& b) { IntegerVector idx(no_init(a.size())); int max = 1 + *std::max_element(a.begin(), a.end()); std::vector<int> n(max); for(int i = 0; i < a.size(); ++i) ++n[a[i]]; std::vector<std::queue<int> > lut(max); for(int i = 0; i < b.size(); ++i) { if(b[i] < max && n[b[i]] > 0) { --n[b[i]]; lut[b[i]].push(i); } } for(int i = 0; i < idx.size(); ++i) { auto & P = lut[a[i]]; if(P.size() > 0) { idx[i] = P.front() + 1; P.pop(); } else {idx[i] = NA_INTEGER;} } return idx; } )") pm2(x,y) #[1] 1 5 NA 2 3 NA Benchmark set.seed(5) x <- sample(5e3, 1e5, replace = TRUE) y <- sample(x, replace = TRUE) library(data.table) matchall <- function(x, y) { data.table(y, rowid(y))[ data.table(x, rowid(x)), on = .(y = x, V2), which = TRUE ] } rmatch <- function(x, y) { xp <- cbind(seq_along(x), x)[order(x),] yp <- cbind(seq_along(y), y)[order(y),] result <- numeric(length(x)) xi <- yi <- 1 Nx <- length(x) Ny <- length(y) while (xi <= Nx) { if (yi > Ny) { result[xp[xi,1]] <- NA xi <- xi + 1 } else if (xp[xi,2] == yp[yi,2]) { result[xp[xi,1]] = yp[yi,1] xi <- xi + 1 yi <- yi + 1 } else if (xp[xi,2] < yp[yi,2]) { result[xp[xi,1]] <- NA xi <- xi + 1 } else if (xp[xi,2] > yp[yi,2]) { yi <- yi + 1 } } result } bench::mark( ave = ave(x, x, FUN = \(v) which(y == v[1])[1:length(v)]), rmatch = rmatch(x, y), make.name = match(make.names(x, TRUE), make.names(y, TRUE)), paste = do.call(match, lapply(list(x, y), \(v) paste(v, ave(v, v, FUN = seq_along)))), make.unique = match(make.unique(as.character(x)), make.unique(as.character(y))), split = {a <- split(seq_along(x), x) b <- split(seq_along(y), y)[names(a)] b[lengths(b)==0] <- NA b <- unlist(Map(`length<-`, b, lengths(a)), FALSE, FALSE) `[<-`(b, unlist(a, FALSE, FALSE), b)}, data.table = matchall(x, y), RCPP = pm(x, y), RCPP2 = pm2(x, y) ) Result expression min median `itr/sec` mem_alloc `gc/sec` n_itr n_gc <bch:expr> <bch:tm> <bch:tm> <dbl> <bch:byt> <dbl> <int> <dbl> 1 ave 1.66s 1.66s 0.603 3.73GB 68.7 1 114 2 rmatch 258.29ms 259.35ms 3.86 5.34MB 30.8 2 16 3 make.name 155.69ms 156.82ms 6.37 14.06MB 1.59 4 1 4 paste 93.8ms 102.06ms 9.74 18.13MB 7.79 5 4 5 make.unique 81.67ms 92.8ms 10.4 9.49MB 5.22 6 3 6 split 12.66ms 13.16ms 65.8 7.18MB 16.0 33 8 7 data.table 6.22ms 6.89ms 114. 5.13MB 28.0 57 14 8 RCPP 3.06ms 3.2ms 301. 393.16KB 3.98 151 2 9 RCPP2 1.64ms 1.82ms 514. 393.16KB 8.00 257 4 In this case the C++ version is the fastest and allocates the lowest amount of memory. In case using base the splitB variant is the fastest and rmatch allocates the lowest amount of memory.
76378680
76385544
I'm trying to extract a couple of elements from XML files into an R dataframe but the parent nodes are all named the same, so I don't know how to associate child elements. I'm very new to xml (about 3 hours) so apologies if I use the wrong terminology. I did not find any R-based solutions. This is the general structure of the xml files: <Annotations> <Version>1.0.0.0</Version> <Annotation> <MicronLength>14.1593438418</MicronLength> <MicronHeight>0.0000000000</MicronHeight> <ObjIndex>1</ObjIndex> </Annotation> <Annotation> <MicronLength>5.7578076896</MicronLength> <MicronHeight>0.0000000000</MicronHeight> <ObjIndex>2</ObjIndex> </Annotation> </Annotations> There are many "Annotation" nodes. There are also several other child node names in there but they don't matter as I'm just trying to extract MicronLength and ObjIndex into a dataframe. So I need to either: Associate and get both elements from within each "Annotation" node OR Rename each "Annotation" based on the ObjIndex within (e.g. "Annotation 1", "Annotation 2", etc.) and then get parent name and child element into the df. I also have several xml files so I want to iterate over each one to eventually create a DF like the example below.
| filename | ObjIndex | MicronLength |
| ------------------ | -------- | ------------- |
| examplefile1(.xml) | 1 | 14.1593438418 |
| examplefile1 | 2 | 5.7578076896 |
| examplefile2 | 1 | 12.6345661343 |
The filenames (with or without extension) will then be str_split into some more columns but I can do that myself. Much appreciated!
R - How to rename xml parent node based on child element (or associate child elements)?
I have previously used xml_find_all() for this kind of simple conversion. This works as long as each Annotation node always has exactly one ObjIndex and MicronLength child node: library(xml2) xml <- read_xml(" <Annotations> <Version>1.0.0.0</Version> <Annotation> <MicronLength>14.1593438418</MicronLength> <MicronHeight>0.0000000000</MicronHeight> <ObjIndex>1</ObjIndex> </Annotation> <Annotation> <MicronLength>5.7578076896</MicronLength> <MicronHeight>0.0000000000</MicronHeight> <ObjIndex>2</ObjIndex> </Annotation> </Annotations> ") data.frame( ObjIndex = xml_integer(xml_find_all(xml, "Annotation/ObjIndex")), MicronLength = xml_double(xml_find_all(xml, "Annotation/MicronLength")) ) #> ObjIndex MicronLength #> 1 1 14.159344 #> 2 2 5.757808
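The question also asks about iterating over several files into one data frame. A minimal sketch in base R building on the same extraction, assuming the files sit in the working directory (the pattern and path handling are illustrative):

files <- list.files(pattern = "\\.xml$")
result <- do.call(rbind, lapply(files, function(f) {
  xml <- read_xml(f)
  data.frame(
    filename = tools::file_path_sans_ext(basename(f)),  # drop ".xml" if preferred
    ObjIndex = xml_integer(xml_find_all(xml, "Annotation/ObjIndex")),
    MicronLength = xml_double(xml_find_all(xml, "Annotation/MicronLength"))
  )
}))

Each per-file data frame recycles its single filename across its rows, which gives exactly the layout shown in the question.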
76387362
76387470
How to set UID and GID for the container when Python SDK is used to spin up the container?
How to set UID and GID for the container using python sdk?
As the documentation you've linked says, pass in user and group_add to run(): client.containers.run('alpine', 'echo hello world', user='foo', group_add=[123]) Both accept both IDs and names, but group_add needs to be a list.
76387121
76387471
I want to center text vertically such that the highest point of the text and the lowest point of the text are at an equal distance from the edges of the div it is enclosed in. I have the following css code: padding-left: 30px; font-family: 'Playfair Display', serif; font-weight: 100; width: 140%; line-height: 68px; color: #fff; font-size: 46px; border-style: solid; border-color: #1faf2d; background-color: #1faf2d; margin-bottom: 0 !important; margin-top: 38px; This is what gives the colour, height, etc. to the font, and it looks like the image below.
How to align text vertically according to the highest point of the text and lowest point?
This can only be done by adding padding at the bottom or top, and the amount may vary between fonts. My question is: why do you want to do this? Centering it your way would further disturb the balance. Also, the positioning of certain letters, such as "p," "g," "y," and "q," lower than the center line is due to their design and the rules of typography. These letters have descenders, which are the parts of the letterforms that extend below the baseline. Descenders are common in lowercase letters and serve to maintain the overall balance and legibility of the text. They allow for differentiation between similar letterforms and help to create a visually pleasing text block. When aligning text vertically, the descenders are taken into account to ensure proper spacing and balance. Aligning the baseline of the text, which includes the descenders, maintains the overall visual harmony of the text and prevents the descenders from interfering with the line spacing or overlapping other elements. In the case of vertically aligning text to the highest point and lowest point, the descenders will naturally make the text appear lower as they extend below the baseline. This is considered the correct alignment for maintaining legibility and preserving the intended design of the typeface. It's worth noting that the alignment of text can vary depending on the context and the specific design choices made by the typographer or designer. Different fonts may have slightly different alignments, and some artistic or decorative fonts may intentionally deviate from traditional alignment principles for creative purposes.
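If you do still want to trim the gap, a minimal sketch of the padding approach (the class name and the numbers are illustrative; the exact values depend on the font's metrics and must be tuned by eye):

.banner-text {
  line-height: 1;        /* remove the extra leading around the glyphs */
  padding-top: 0.1em;    /* tune per font until top and bottom look equal */
  padding-bottom: 0.1em;
}

Using em units keeps the compensation proportional if the font-size later changes.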
76388879
76388972
Whenever I change the minutes on the timer it works perfectly, but when I change the seconds on the timer, no matter what, it instantly stops. I'm not sure what I'm doing wrong. This program is a part of a codepen exercise. When I had the timer on a countdown setting it worked perfectly, but when I changed it to count up it stopped working for the seconds. window.addEventListener('DOMContentLoaded', documentLoaded, false); var startTime; var limit; var timer; function documentLoaded() { "use strict"; var timerElement = document.getElementById("timer"); timerElement.addEventListener("keydown", function (event) { if (event.key === 'Enter') { event.preventDefault(); startTimer(); timerElement.blur(); } }); } function startTimer() { startTime = new Date(); limit = parseInt(document.getElementById("timer").innerHTML); clearInterval(timer); timer = setInterval(updateTime, 1000); } function updateTime() { var currentTime = new Date(); var elapsed = (currentTime.getTime() - startTime.getTime()) / 1000; var minutes = Math.floor(elapsed / 60); var seconds = Math.floor(elapsed % 60); if (minutes < 10) { minutes = "0" + minutes; } if (seconds < 10) { seconds = "0" + seconds; } document.getElementById("timer").innerHTML = minutes + ":" + seconds; var totalSeconds = minutes * 60 + seconds; if (totalSeconds >= limit * 60) { document.getElementById("timer").classList.add("red"); clearInterval(timer); // Stop the timer } else { document.getElementById("timer").classList.remove("red"); } } body { display: flex; justify-content: center; align-items: center; } #timer-container { width: 200px; height: 200px; border-radius: 50%; background-color: #0781D4; display: flex; justify-content: center; align-items: center; } #timer { font-size: 36px; text-align: center; } .red { background-color: red; } <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Ejercio No. 3</title> <link rel="stylesheet" href="style.css"> </head> <body> <div id="timer-container"> <div id="timer" contenteditable="true">00:00</div> </div> <script src="script.js"></script> </body> </html>
Timer isn't dealing with seconds properly
You don't set the limit correctly - you set it directly to the content of the timer, which includes non numeric characters such as ":". When using parseInt on something that isn't all digits, anything after the first non numeric character is discarded. Therefore, by using anything under a minute to test with, limit will be set to 0, as "00:30" would result in 0 from parseInt, as only text from before the colon is used. To fix this, split the text at the colon and convert the minutes into seconds, as shown in the snippet let time = document.getElementById("timer").innerHTML.split(":"); //if time = "01:30", limit = 1 + 30/60 = 1.5 minutes limit = parseInt(time[0]) + parseInt(time[1])/60; window.addEventListener('DOMContentLoaded', documentLoaded, false); var startTime; var limit; var timer; function documentLoaded() { "use strict"; var timerElement = document.getElementById("timer"); timerElement.addEventListener("keydown", function (event) { if (event.key === 'Enter') { event.preventDefault(); startTimer(); timerElement.blur(); } }); } function startTimer() { startTime = new Date(); let time = document.getElementById("timer").innerHTML.split(":"); limit = parseInt(time[0]) + parseInt(time[1])/60; clearInterval(timer); timer = setInterval(updateTime, 1000); } function updateTime() { var currentTime = new Date(); var elapsed = (currentTime.getTime() - startTime.getTime()) / 1000; var minutes = Math.floor(elapsed / 60); var seconds = Math.floor(elapsed % 60); if (minutes < 10) { minutes = "0" + minutes; } if (seconds < 10) { seconds = "0" + seconds; } document.getElementById("timer").innerHTML = minutes + ":" + seconds; var totalSeconds = minutes * 60 + seconds; if (totalSeconds >= limit * 60) { document.getElementById("timer").classList.add("red"); clearInterval(timer); // Stop the timer } else { document.getElementById("timer").classList.remove("red"); } } body { display: flex; justify-content: center; align-items: center; } #timer-container { width: 200px; height: 200px; border-radius: 50%; background-color: #0781D4; display: flex; justify-content: center; align-items: center; } #timer { font-size: 36px; text-align: center; } .red { background-color: red; } <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Ejercio No. 3</title> <link rel="stylesheet" href="style.css"> </head> <body> <div id="timer-container"> <div id="timer" contenteditable="true">00:00</div> </div> <script src="script.js"></script> </body> </html>
76383134
76385590
The program runs multiple commands that require sudo privileges (such as sudo dnf update). Since the program should be installed using the go install command, it can't be run as sudo itself without configuration done by the user (afaik). The program doesn't show the output to the user to keep the output clean. To show that a process is running, it uses a spinner from the spinner library. Is it possible to do any of these things? Obtain sudo privileges from within the program Make the program runnable as sudo, even when installed using go install Show the output of the sudo command (including the password request) without it being overwritten by the spinner Here is a shortened version of what I would like my code to do: package main import ( "fmt" "os" "os/exec" "time" "github.com/briandowns/spinner" ) func main() { // Spinner to show that it's running s := spinner.New(spinner.CharSets[14], time.Millisecond*100) s.Start() // Somehow execute this with sudo _, err := exec.Command(os.Getenv("SHELL"), "-c", "dnf update -y").Output() // Stop the spinner and handle any error if err != nil { fmt.Printf("Err: %s", err) os.Exit(1) } s.Stop() // Finish fmt.Println("Success") }
Is it possible to run a sudo command in Go without running the program itself as sudo
_, err := exec.Command(os.Getenv("SHELL"), "-c", "sudo dnf update -y").Output() In this example, adding sudo before the command you want to run with elevated privileges means the program will ask for the sudo password when it runs. If you apply this to your example code you can't see the password request message, because the spinner graphics will overwrite it, but if you try this without the spinner graphics you can see it. Even if you don't see the message, if you type your correct password and press enter, your commands will run as sudo. With this, you don't need to run your application as sudo. I have run similar commands this way and they have worked.
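A minimal sketch of keeping the prompt visible, assuming the spinner variable s from the question: attach the command to the terminal and pause the spinner while sudo asks for the password.

// Pause the spinner so it doesn't overwrite sudo's prompt
s.Stop()

cmd := exec.Command("sudo", "dnf", "update", "-y")
cmd.Stdin = os.Stdin   // lets sudo read the password from the terminal
cmd.Stdout = os.Stdout // shows the prompt and any command output
cmd.Stderr = os.Stderr
err := cmd.Run()

// Resume once sudo has cached the credentials
s.Start()

Alternatively, running exec.Command("sudo", "-v") the same way once at startup caches the credentials, so later sudo calls made while the spinner is running won't prompt at all.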
76387535
76387552
When trying to install the sui binaries using cargo install --locked --git https://github.com/MystenLabs/sui.git --branch devnet sui as suggested by the official docs, I get the below error: Updating git repository `https://github.com/MystenLabs/sui.git` error: could not find `sui` in https://github.com/MystenLabs/sui.git?branch=devnet with version `*` What could be the possible reason?
error: could not find `sui` in https://github.com/MystenLabs/sui.git?branch=devnet with version `*`
I used the below command, which includes the tag, to install it: cargo install --locked --git https://github.com/MystenLabs/sui.git --branch devnet --tag devnet-<version> sui where you can replace the version as required (e.g. v1.3.0)
76387474
76387579
I am observing this weird behavior when I am raising an integer to a negative power using an np.array. Specifically, I am doing import numpy as np a = 10**(np.arange(-1, -8, -1)) and it results in the following error. ValueError: Integers to negative integer powers are not allowed. This is strange as the code 10**(-1) works fine. However the following workaround (where 10 is a float instead of an integer) works fine. import numpy as np a = 10.**(np.arange(-1, -8, -1)) print(a) # Prints array([1.e-01, 1.e-02, 1.e-03, 1.e-04, 1.e-05, 1.e-06, 1.e-07]) Why is it not valid for integers? Any explanation is appreciated.
Weird behavior when raising integers to negative powers in Python
This is happening because the base 10 is an integer. In 10**(np.arange(-1, -8, -1)) both the base and the exponent array are integers, and NumPy's integer power has to produce integers or nothing, so negative integer exponents are rejected rather than silently truncated to zero. On the contrary, a = 10.**(np.arange(-1, -8, -1)) gives results happily, as 10.0 is a float. Edit: found an answer to back my point; voted for a duplicate: Why can't I raise to a negative power in numpy?
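Casting either side to float avoids the error; a couple of equivalent sketches:

import numpy as np

a = 10 ** np.arange(-1, -8, -1).astype(float)   # float exponents
b = np.float_power(10, np.arange(-1, -8, -1))   # float_power always returns floats

Both produce array([1.e-01, 1.e-02, ..., 1.e-07]) without changing the 10 literal itself.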
76383413
76385687
I have two classes in my database in Django class Test(models.Model): sequenzaTest = models.ForeignKey("SequenzaMovimento", null=True, on_delete=models.SET_NULL) class SequenzaMovimento(models.Model): nomeSequenza = models.CharField(max_length=50, blank=False, null=False) serieDiMovimenti = models.TextField(blank=False, null=False, default="") Now, every Test object created can be associated with just one SequenzaMovimento object. Different Test objects can have the same SequenzaMovimento. Now, I know the primary key of my Test. How do I find the serieDiMovimenti inside the SequenzaMovimento object which is linked to the Test object? I can get the sequenzaTest from the Test object with testo_sequenza = Test.objects.get(pk=idObj) testo_sequenza.sequenzaTest but I can't figure out how to access serieDiMovimenti
Get value of a field from a ForeignKey in Django Models
This should work: try: testo_sequenza = Test.objects.get(pk=idObj) except Test.DoesNotExist: # do here what you need the program to do if not found. maybe a return if in a function or a continue/break if you're in a loop print("Testo Sequenza not found") sequenza_test = testo_sequenza.sequenzaTest serie_di_movimenti = sequenza_test.serieDiMovimenti Also, you should check serieDiMovimenti = models.TextField(blank=False, null=False, default="") because if you want it to be an "optional" field then it should be blank=True
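As a side note, a sketch of fetching both rows in one query with select_related (the field name matches the model above; remember sequenzaTest can be None because of null=True and SET_NULL):

testo_sequenza = Test.objects.select_related("sequenzaTest").get(pk=idObj)
if testo_sequenza.sequenzaTest is not None:  # guard against the SET_NULL case
    serie_di_movimenti = testo_sequenza.sequenzaTest.serieDiMovimenti

This joins SequenzaMovimento in the same SQL query instead of issuing a second query when the relation is first accessed.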
76388965
76389014
I am looking to programmatically apply a typically infixed operation (e.g. +, -, *, /) on two integers, where the operation is specified via a string. I have had success accessing the method itself using .method This is a pattern that works, 1.+(2) which correctly resolves to 3 By extension, I'd like to define a way that could take a variable string for the operator, like so: 1 + 2 as 1.method('+')(2) The above causes a syntax error, though everything up to retrieving the method does work; I'm not sure what the syntax needs to be to then pass the second integer argument. e.g: 1.method('+') # <Method: Integer#+(_)> 1.method('+') 2 # syntax error, unexpected integer literal, expecting end-of-input 1.method('+')(2) # syntax error, unexpected '(', expecting end-of-input What is the right syntax to perform an operation 1 + 2 in this way? I am using: ruby 3.1.2p20 (2022-04-12 revision 4491bb740a) [x86_64-linux]
How to use integer methods using `method`, not their infix form, in Ruby
The Method class has several instance methods of its own. The one you're looking for here is call. It is also aliased to [] and === 1.method('+').call(2) #=> 3 1.method('+')[2] #=> 3 1.method('+') === 2 #=> 3
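If you don't need a reusable Method object, public_send does the lookup and the call in one step:

1.public_send(:+, 2)  #=> 3
1.public_send('+', 2) #=> 3

send works too, but public_send is safer when the operator string comes from outside, since it refuses to call private methods.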
76387189
76387609
I am attempting to install an R package named 'infercna', the github repository to which is linked here. The install process attempts to load another package named 'scalop', which is linked here. Specifically, this command: devtools::install_github("jlaffy/infercna") returns Downloading GitHub repo jlaffy/infercna@HEAD ── R CMD build ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── βœ” checking for file β€˜/private/var/folders/hj/1wvjfb692c3gswybcg8xdcwm0000gn/T/RtmpqQIEYL/remotes7d75586a9ac5/jlaffy-infercna-98a8db8/DESCRIPTION’ (343ms) ─ preparing β€˜infercna’: βœ” checking DESCRIPTION meta-information ... ─ checking for LF line-endings in source and make files and shell scripts ─ checking for empty or unneeded directories NB: this package now depends on R (>= 3.5.0) WARNING: Added dependency on R >= 3.5.0 because serialized objects in serialize/load version 3 cannot be read in older versions of R. File(s) containing such objects: β€˜infercna/data-raw/genes.rda’ ─ building β€˜infercna_1.0.0.tar.gz’ * installing *source* package β€˜infercna’ ... ** using staged installation ** R ** data *** moving datasets to lazyload DB ** byte-compile and prepare package for lazy loading Error in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]) : there is no package called β€˜scalop’ Calls: <Anonymous> ... loadNamespace -> withRestarts -> withOneRestart -> doWithOneRestart Execution halted ERROR: lazy loading failed for package β€˜infercna’ * removing β€˜/Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/library/infercna’ As such, I backtracked and attempted to install scalop, like so: remotes::install_github("jlaffy/scalop") This is where things start to really get hairy. To install, scalop requires 95 dependencies. Upon successful installation of all 95, the installation for scalop will eventually still fail, like so: checking for file β€˜/private/var/folders/hj/1wvjfb692c3gswybcg8xdcwm0000gn/T/RtmpqQIEYL/remotes7d757fe15404/jlaffy-scalop-021999d/DESCRIPTION’ ... ─ preparing β€˜scalop’: (385ms) βœ” checking DESCRIPTION meta-information ... ─ cleaning src ─ checking for LF line-endings in source and make files and shell scripts ─ checking for empty or unneeded directories ─ building β€˜scalop_1.1.0.tar.gz’ * installing *source* package β€˜scalop’ ... ** using staged installation ** libs using C compiler: β€˜Apple clang version 11.0.3 (clang-1103.0.32.62)’ using SDK: β€˜MacOSX10.15.sdk’ clang -arch x86_64 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I'/Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/library/Rcpp/include' -I/opt/R/x86_64/include -fPIC -falign-functions=64 -Wall -g -O2 -c init.c -o init.o clang -arch x86_64 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I'/Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/library/Rcpp/include' -I/opt/R/x86_64/include -fPIC -falign-functions=64 -Wall -g -O2 -c nd.c -o nd.o nd.c:24:10: fatal error: 'S.h' file not found #include "S.h" ^~~~~ 1 error generated. make: *** [nd.o] Error 1 ERROR: compilation failed for package β€˜scalop’ * removing β€˜/Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/library/scalop’ I am writing to ask if anyone knows enough about this output to know what to do to fix the "fatal error, 'S.h' file not found" error, which ultimately kills the download. 
Several people have reached out to the author, as per the issues posted on scalop; specifically issues 4 and 5, but no reply. Additionally, posting the error message into google does not return useful hits, so far as I can see. Finally, I am happy to provide any and all necessary info; e.g. sessionInfo(), R version (4.3) Mac OS (11.7) etc. Help me Stack Overflow-Kenobi, you're my only hope.
Fatal error relating to "include S.h" when installing R 'scalop' package
The "S.h" headers file is from the "S" language (the precursor to R); replacing "S.h" with "R.h" fixes the 'cant find S.h' error, but causes other issues. Clearly this package is not being maintained :( I've forked the repository and made a couple of changes to the source code (commits fe15cf9 and ab9fe5c). I successfully installed both the scalop and infercna packages via Bioconductor, but there are a lot of warnings when they compile. I used gcc to compile them, rather than Apple Clang, with these flags: cat ~/.R/Makevars LOC=/usr/local/gfortran CC=$(LOC)/bin/gcc -fopenmp CXX=$(LOC)/bin/g++ -fopenmp CXX11=$(LOC)/bin/g++ -fopenmp CFLAGS=-g -O3 -Wall -pedantic -std=gnu99 -mtune=native -pipe CXXFLAGS=-g -O3 -Wall -pedantic -std=c++11 -mtune=native -pipe LDFLAGS=-L$(LOC)/lib -Wl,-rpath,$(LOC)/lib,-L/usr/local/lib CPPFLAGS=-I$(LOC)/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/include -I/usr/local/include FLIBS=-L/usr/local/gfortran/lib/gcc/x86_64-apple-darwin19/10.2.0 -L/usr/local/gfortran/lib -lgfortran -lquadmath -lm CXX1X=/usr/local/gfortran/bin/g++ CXX98=/usr/local/gfortran/bin/g++ CXX11=/usr/local/gfortran/bin/g++ CXX14=/usr/local/gfortran/bin/g++ CXX17=/usr/local/gfortran/bin/g++ If you have problems installing the scalop package from source using Apple Clang, and you have an intel processor, my instructions for compiling R packages from source are here: https://stackoverflow.com/a/65334247/12957340 If you have an Apple silicon processor, you can try the instructions here: https://stackoverflow.com/a/68275558/12957340 This is how I installed the packages: install.packages("BiocManager") library(BiocManager) BiocManager::install("Homo.sapiens") BiocManager::install("jpmam1/scalop") # my forked copy BiocManager::install("jlaffy/infercna") The example from the vignette runs, but some of the functions no longer work as expected: library(infercna) #> #> #> Warning: replacing previous import 'AnnotationDbi::select' by 'dplyr::select' #> when loading 'scalop' #> #> Attaching package: 'infercna' #> The following object is masked from 'package:graphics': #> #> clip set.seed(1014) useGenome('hg19') #> Genome has been set to hg19 retrieveGenome() #> Retrieving: hg19 #> # A tibble: 33,575 Γ— 8 #> symbol start_position end_position chromosome_name arm band strand #> <chr> <dbl> <dbl> <fct> <fct> <chr> <int> #> 1 DDX11L1 11869 14412 1 1p p36.33 1 #> 2 WASH7P 14363 29806 1 1p p36.33 -1 #> 3 MIR1302-11 29554 31109 1 1p p36.33 1 #> 4 FAM138A 34554 36081 1 1p p36.33 -1 #> 5 OR4G4P 52473 54936 1 1p p36.33 1 #> 6 OR4G11P 62948 63887 1 1p p36.33 1 #> 7 OR4F5 69091 70008 1 1p p36.33 1 #> 8 CICP27 131025 134836 1 1p p36.33 1 #> 9 RNU6-1100P 157784 157887 1 1p p36.33 -1 #> 10 CICP7 329431 332236 1 1p p36.33 -1 #> # β„Ή 33,565 more rows #> # β„Ή 1 more variable: ensembl_gene_id <chr> m = useData(mgh125) dim(m) #> [1] 8556 1266 range(m) #> [1] 0.000 15.328 lengths(refCells) #> oligodendrocytes macrophages #> 219 707 cna = infercna(m = m, refCells = refCells, n = 5000, noise = 0.1, isLog = TRUE, verbose = FALSE) cnaM = cna[, !colnames(cna) %in% unlist(refCells)] cnaScatterPlot(cna = cna, signal.threshold = NULL, main = 'Default') obj = cnaPlot(cna = cna, order.cells = TRUE, subtitle = 'Copy-Number Aberrations in a patient with Glioblastoma') #> Error in if (class(x) == "matrix") {: the condition has length > 1 Depending on your use-case, you'll probably need to make further changes to the source code to get your desired output. 
If you have further errors/questions please post them in the comments and I'll take a look at them when I have some time. sessionInfo() #> R version 4.3.0 (2023-04-21) #> Platform: x86_64-apple-darwin20 (64-bit) #> Running under: macOS Ventura 13.3.1 #> #> Matrix products: default #> BLAS: /Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/lib/libRblas.0.dylib #> LAPACK: /Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/lib/libRlapack.dylib; LAPACK version 3.11.0 #> #> locale: #> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8 #> #> time zone: Australia/Melbourne #> tzcode source: internal #> #> attached base packages: #> [1] stats graphics grDevices utils datasets methods base #> #> other attached packages: #> [1] infercna_1.0.0 #> #> loaded via a namespace (and not attached): #> [1] splines_4.3.0 #> [2] BiocIO_1.10.0 #> [3] bitops_1.0-7 #> [4] ggplotify_0.1.0 #> [5] filelock_1.0.2 #> [6] tibble_3.2.1 #> [7] R.oo_1.25.0 #> [8] polyclip_1.10-4 #> [9] graph_1.78.0 #> [10] reprex_2.0.2 #> [11] XML_3.99-0.14 #> [12] lifecycle_1.0.3 #> [13] rstatix_0.7.2 #> [14] edgeR_3.42.4 #> [15] Homo.sapiens_1.3.1 #> [16] lattice_0.21-8 #> [17] MASS_7.3-60 #> [18] OrganismDbi_1.42.0 #> [19] backports_1.4.1 #> [20] magrittr_2.0.3 #> [21] limma_3.56.1 #> [22] plotly_4.10.1 #> [23] rmarkdown_2.22 #> [24] yaml_2.3.7 #> [25] metapod_1.8.0 #> [26] cowplot_1.1.1 #> [27] DBI_1.1.3 #> [28] RColorBrewer_1.1-3 #> [29] abind_1.4-5 #> [30] zlibbioc_1.46.0 #> [31] Rtsne_0.16 #> [32] R.cache_0.16.0 #> [33] GenomicRanges_1.52.0 #> [34] purrr_1.0.1 #> [35] mixtools_2.0.0 #> [36] R.utils_2.12.2 #> [37] msigdbr_7.5.1 #> [38] ggraph_2.1.0 #> [39] BiocGenerics_0.46.0 #> [40] RCurl_1.98-1.12 #> [41] styler_1.10.0 #> [42] yulab.utils_0.0.6 #> [43] tweenr_2.0.2 #> [44] rappdirs_0.3.3 #> [45] GenomeInfoDbData_1.2.10 #> [46] IRanges_2.34.0 #> [47] S4Vectors_0.38.1 #> [48] enrichplot_1.20.0 #> [49] ggrepel_0.9.3 #> [50] irlba_2.3.5.1 #> [51] tidytree_0.4.2 #> [52] dqrng_0.3.0 #> [53] DelayedMatrixStats_1.22.0 #> [54] codetools_0.2-19 #> [55] DelayedArray_0.26.3 #> [56] scuttle_1.10.1 #> [57] DOSE_3.26.1 #> [58] xml2_1.3.4 #> [59] ggforce_0.4.1 #> [60] tidyselect_1.2.0 #> [61] aplot_0.1.10 #> [62] farver_2.1.1 #> [63] ScaledMatrix_1.8.1 #> [64] viridis_0.6.3 #> [65] matrixStats_0.63.0 #> [66] stats4_4.3.0 #> [67] BiocFileCache_2.8.0 #> [68] GenomicAlignments_1.36.0 #> [69] jsonlite_1.8.4 #> [70] BiocNeighbors_1.18.0 #> [71] tidygraph_1.2.3 #> [72] survival_3.5-5 #> [73] segmented_1.6-4 #> [74] tools_4.3.0 #> [75] progress_1.2.2 #> [76] treeio_1.24.1 #> [77] TxDb.Hsapiens.UCSC.hg19.knownGene_3.2.2 #> [78] Rcpp_1.0.10 #> [79] glue_1.6.2 #> [80] gridExtra_2.3 #> [81] xfun_0.39 #> [82] qvalue_2.32.0 #> [83] MatrixGenerics_1.12.0 #> [84] GenomeInfoDb_1.36.0 #> [85] dplyr_1.1.2 #> [86] withr_2.5.0 #> [87] BiocManager_1.30.20 #> [88] fastmap_1.1.1 #> [89] bluster_1.10.0 #> [90] fansi_1.0.4 #> [91] rsvd_1.0.5 #> [92] caTools_1.18.2 #> [93] digest_0.6.31 #> [94] R6_2.5.1 #> [95] gridGraphics_0.5-1 #> [96] colorspace_2.1-0 #> [97] GO.db_3.17.0 #> [98] biomaRt_2.56.0 #> [99] RSQLite_2.3.1 #> [100] R.methodsS3_1.8.2 #> [101] utf8_1.2.3 #> [102] tidyr_1.3.0 #> [103] generics_0.1.3 #> [104] data.table_1.14.8 #> [105] rtracklayer_1.60.0 #> [106] prettyunits_1.1.1 #> [107] graphlayouts_1.0.0 #> [108] httr_1.4.6 #> [109] htmlwidgets_1.6.2 #> [110] S4Arrays_1.0.4 #> [111] scatterpie_0.2.0 #> [112] pkgconfig_2.0.3 #> [113] gtable_0.3.3 #> [114] blob_1.2.4 #> [115] SingleCellExperiment_1.22.0 #> [116] 
XVector_0.40.0 #> [117] shadowtext_0.1.2 #> [118] clusterProfiler_4.8.1 #> [119] htmltools_0.5.5 #> [120] carData_3.0-5 #> [121] fgsea_1.26.0 #> [122] scalop_1.1.0 #> [123] RBGL_1.76.0 #> [124] scales_1.2.1 #> [125] Biobase_2.60.0 #> [126] png_0.1-8 #> [127] scran_1.28.1 #> [128] ggfun_0.0.9 #> [129] knitr_1.43 #> [130] rstudioapi_0.14 #> [131] reshape2_1.4.4 #> [132] rjson_0.2.21 #> [133] nlme_3.1-162 #> [134] curl_5.0.0 #> [135] org.Hs.eg.db_3.17.0 #> [136] cachem_1.0.8 #> [137] stringr_1.5.0 #> [138] parallel_4.3.0 #> [139] HDO.db_0.99.1 #> [140] AnnotationDbi_1.62.1 #> [141] restfulr_0.0.15 #> [142] pillar_1.9.0 #> [143] grid_4.3.0 #> [144] vctrs_0.6.2 #> [145] ggpubr_0.6.0 #> [146] BiocSingular_1.16.0 #> [147] car_3.1-2 #> [148] beachmat_2.16.0 #> [149] dbplyr_2.3.2 #> [150] cluster_2.1.4 #> [151] evaluate_0.21 #> [152] zeallot_0.1.0 #> [153] GenomicFeatures_1.52.0 #> [154] locfit_1.5-9.7 #> [155] cli_3.6.1 #> [156] compiler_4.3.0 #> [157] Rsamtools_2.16.0 #> [158] rlang_1.1.1 #> [159] crayon_1.5.2 #> [160] ggsignif_0.6.4 #> [161] plyr_1.8.8 #> [162] fs_1.6.2 #> [163] stringi_1.7.12 #> [164] viridisLite_0.4.2 #> [165] BiocParallel_1.34.2 #> [166] babelgene_22.9 #> [167] munsell_0.5.0 #> [168] Biostrings_2.68.1 #> [169] lazyeval_0.2.2 #> [170] GOSemSim_2.26.0 #> [171] Matrix_1.5-4.1 #> [172] patchwork_1.1.2 #> [173] hms_1.1.3 #> [174] sparseMatrixStats_1.12.0 #> [175] bit64_4.0.5 #> [176] ggplot2_3.4.2 #> [177] statmod_1.5.0 #> [178] KEGGREST_1.40.0 #> [179] SummarizedExperiment_1.30.1 #> [180] kernlab_0.9-32 #> [181] igraph_1.4.3 #> [182] broom_1.0.4 #> [183] memoise_2.0.1 #> [184] ggtree_3.8.0 #> [185] fastmatch_1.1-3 #> [186] bit_4.0.5 #> [187] downloader_0.4 #> [188] gson_0.1.0 #> [189] ape_5.7-1 Created on 2023-06-02 with reprex v2.0.2
76383296
76385696
I have searched widely but not found an answer to this question: Is it possible to change a variable in a Jetpack Compose user interface from a broadcast receiver?
Broadcast receiver change UI
You can't modify your Compose UI from a BroadcastReceiver directly. Instead, your BroadcastReceiver should change some data in your data layer: DataStore, preferences, a database, or just in memory in some Repository singleton class. Then you should make this data observable and observe it from your Compose UI.
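A minimal sketch of that shape in Kotlin, assuming an in-memory singleton and a StateFlow (all names and the intent extra are illustrative):

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow

object MessageRepository {
    private val _message = MutableStateFlow("")
    val message: StateFlow<String> = _message
    fun update(value: String) { _message.value = value }
}

class MyReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        // write to the data layer, never to the UI
        MessageRepository.update(intent.getStringExtra("payload") ?: "")
    }
}

@Composable
fun MyScreen() {
    // recomposes automatically whenever the receiver updates the flow
    val message by MessageRepository.message.collectAsState()
    Text(message)
}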
76389017
76389067
I have a template function template <typename B, typename A> B func(A x) {/*do something*/} And in my code I am doing something like uint32_t a[6] = {1,2,3,4,5,6}; uint64_t b = func(a); But this fails with test.cxx:10:16: error: no matching function for call to 'func' uint64_t b = func(a); ^~~~~~~~~ test.cxx:5:6: note: candidate template ignored: couldn't infer template argument 'B' So it seems that the type of a is automatically deduced but the type of b is not. I can specify it (uint64_t b = func<uint64_t>(a);), sure, but why is the compiler not able to deduce it? EDIT Following comments, also posting the content of the function (which actually takes 2 other parameters) template <typename B, typename A> B bit_slice(A words[], int start, int end) { int wordSize = sizeof(A)*8; B s = 0; int n = end / wordSize; for(int i= 0; i <= n; ++i){ s = (s << wordSize) + words[i]; } s >>= (n+1) * wordSize - (end+1); B mask = (((B)1) << (end - start + 1))- 1; s &= mask; return s; }; So basically a bit slicing of an array (representing something).
Automatic template argument deduction fails for return type
If the argument cannot be deduced from func(a); then it cannot be deduced. An auto return type can be deduced from the return statement, but not here. What you assign the return value to is not relevant. It simply doesn't work like this. It does work however with a templated conversion: #include <iostream> #include <cstdio> template <typename A,typename B> A the_implementation(B x) { return x*2; } template <typename B> struct proxy { B b; template <typename A> operator A() { return the_implementation<A>(b); } }; template<typename B> proxy<B> func(B x) { return {x}; } int main() { int x = 42; int y = func(x); } func returns a proxy<B>, B is deduced from x. Then proxy<B> can be converted to int via its conversion operator, which calls the actual function (because only now A and B have been deduced). However, I have doubts that the above is a good idea in your code. Consider writing instead: auto b = func<uint64_t>(a);
76381199
76387620
I am on a learning curve with Kubernetes and recently started working with K3D. I am trying to deploy my project on a local K3D cluster. When I created a pod to run an application, I saw it hang in the pending state for some time, and below is the kubectl describe pod output (events). application.yaml file's resource requirements are as below resources: requests: memory: "4Gi" cpu: "2" The output of kubectl describe node is as below: I assumed this was due to the fact that my node has around 3 GB of memory and the app is requesting 4 GB. I am getting the error in Pod. I looked for an answer to increase the memory but had no luck so far. How can I increase the memory to get the application up and running? If I reduce the app.yaml resource to --> memory: 3 Gi or 2 Gi, the app starts running, but the actual functionality of the app is not there. Whenever I try to do something in the app, it then gives me a "Not enough CPU and/or memory is available" error in my application. I am running this on Linux and k3d version k3d version v5.5.1 k3s version v1.26.4-k3s1 (default) Thanks!
How to increase allocated memory in a k3d cluster
Assuming the machine where you are running has more than 3GB of RAM (you can check by running lsmem or free), you can try re-creating the Kubernetes cluster using k3d, by passing an explicit memory limit. E.g. k3d cluster create --agents-memory 8G Or if you are doing a multi-node deployment, by adding a node with sufficient memory, e.g. k3d node create --memory 8G But when running on Linux, you typically would not have a memory limit applied to the Kubernetes cluster, unless that limit was requested explicitly. So I would suggest checking your previous cluster creation commands, or double-check any scripts you may have used. If you are running Linux, another option is to run k3s directly, without k3d. That is unlikely to see limits applied to it as well. Finally, an alternative is to use an ephemeral cloud environment for this type of testing. For example, using https://namespace.so you can create a Kubernetes cluster with 8GB of RAM in a few seconds, and use it to test your application.
76382660
76385756
I'm writing a small interface in PyQt5 that has a graph that I use PyQtGraph to create. The graph starts to be drawn by pressing the "Start" button, and at first everything looks fine: But over time, as the number of points increases, the entire graph shrinks to the width of the screen and becomes uninformative: In this regard, there are two questions: How can I make the window not try to fit the whole graph at once, squeezing it, but rather move along with it, showing the latest data? Now the data comes in once a second and I redraw the graph every time. Is it possible to make it a partial update so that I just pass only new data into it? from pyqtgraph import PlotWidget import pyqtgraph from PyQt5 import QtCore from PyQt5.QtCore import Qt, QThread, QTimer, QObject, pyqtSignal, QTimer from PyQt5.QtWidgets import QHBoxLayout, QMainWindow, QPushButton, QVBoxLayout, QWidget, QApplication import sys import random def get_kl_test(): choices = [50, 50, 50, 51, 51, 51, 52, 52, 52] list = [random.choice(choices) for i in range(11)] return list def get_iopd_test(): choices = [40, 40, 40, 50, 50, 50, 60, 60, 60] return random.choice(choices) class Graph(PlotWidget): def __init__(self): super().__init__() self.setBackground('white') self.addLegend() self.showGrid(x=True, y=True) self.setYRange(0, 255, padding=0) class ReadingWorker(QObject): update_graph = pyqtSignal(list, list, list, list) def __init__(self): super().__init__() self.time_from_start = 0 self.time_values = [] self.green_values = [] self.blue_values = [] self.red_values = [] def run(self): self.read() self.update_time() def read(self): ipd_values = get_kl_test() iopd_value = get_iopd_test() self.green_values.append(ipd_values[0]) self.blue_values.append(ipd_values[1]) self.red_values.append(iopd_value) self.time_values.append(self.time_from_start) self.update_graph.emit( self.green_values, self.blue_values, self.red_values, self.time_values) QTimer.singleShot(1000, self.read) def update_time(self): self.time_from_start += 1 QTimer.singleShot(1000, self.update_time) class MainWindow(QMainWindow): def __init__(self): super().__init__() self.central_widget = QWidget(self) self.setGeometry(50, 50, 1300, 700) self.setCentralWidget(self.central_widget) self.layout_main_window = QVBoxLayout() self.central_widget.setLayout(self.layout_main_window) # toolbar configuration self.layout_toolbar = QHBoxLayout() self.layout_toolbar.addStretch(1) self.btn_start = QPushButton("Start") self.btn_start.clicked.connect(self.start) self.layout_toolbar.addWidget(self.btn_start) self.layout_main_window.addLayout(self.layout_toolbar) # graph configuration self.graph = Graph() self.layout_main_window.addWidget(self.graph) def start(self): self.reading_thread = QThread(parent=self) self.reading_widget = ReadingWorker() self.reading_widget.moveToThread(self.reading_thread) self.reading_widget.update_graph.connect(self.draw_graph) self.reading_thread.started.connect(self.reading_widget.run) self.reading_thread.start() @QtCore.pyqtSlot(list, list, list, list) def draw_graph(self, ipd_1_values, ipd_2_values, iopd_values, time_values): self.graph.plotItem.clearPlots() pen_ipd_1 = pyqtgraph.mkPen(color='green', width=4) pen_ipd_2 = pyqtgraph.mkPen(color='blue', width=4, style=Qt.DashDotLine) pen_iopd = pyqtgraph.mkPen(color='red', width=4, style=Qt.DashLine) line_ipd_1 = self.graph.plotItem.addItem(pyqtgraph.PlotCurveItem( time_values, ipd_1_values, pen=pen_ipd_1, name='1' )) line_ipd_2 = self.graph.plotItem.addItem(pyqtgraph.PlotCurveItem( time_values, ipd_2_values, pen=pen_ipd_2, name='2' )) line_iopd = self.graph.plotItem.addItem(pyqtgraph.PlotCurveItem( time_values, iopd_values, pen=pen_iopd, name='3' )) if __name__ == '__main__': app = QApplication(sys.argv) app.setStyle('Fusion') main_window = MainWindow() main_window.show() sys.exit(app.exec_())
update PyqtGraph plot in PyQt5
step 1: add the PlotCurveItems as members of MainWindow and set them up in the constructor, so you can access them later step 2: in the draw_graph function use the getData() and setData() functions of the PlotCurveItems to update them step 3: if you have enough x-values, set the xRange so not all data is shown; I use a maximal xRange of 20 here (self.window_size) In the code below I only use the last entry in your lists (e.g. ipd_1_values[-1]); you can just pass scalars and remove the [-1]. Also I used import numpy as np for the np.append(). class MainWindow(QMainWindow): def __init__(self): super().__init__() self.central_widget = QWidget(self) self.setGeometry(50, 50, 1300, 700) self.setCentralWidget(self.central_widget) self.layout_main_window = QVBoxLayout() self.central_widget.setLayout(self.layout_main_window) # toolbar configuration self.layout_toolbar = QHBoxLayout() self.layout_toolbar.addStretch(1) self.btn_start = QPushButton("Start") self.btn_start.clicked.connect(self.start) self.layout_toolbar.addWidget(self.btn_start) self.layout_main_window.addLayout(self.layout_toolbar) # graph configuration self.graph = Graph() self.layout_main_window.addWidget(self.graph) self.setup_graphs() # step 1 self.window_size = 20 # step 3 def start(self): self.reading_thread = QThread(parent=self) self.reading_widget = ReadingWorker() self.reading_widget.moveToThread(self.reading_thread) self.reading_widget.update_graph.connect(self.draw_graph) self.reading_thread.started.connect(self.reading_widget.run) self.reading_thread.start() def setup_graphs(self): pen_ipd_1 = pyqtgraph.mkPen(color='green', width=4) pen_ipd_2 = pyqtgraph.mkPen(color='blue', width=4, style=Qt.DashDotLine) pen_iopd = pyqtgraph.mkPen(color='red', width=4, style=Qt.DashLine) self.line_ipd_1 = pyqtgraph.PlotCurveItem([], [], pen=pen_ipd_1, name='1') self.line_ipd_2 = pyqtgraph.PlotCurveItem([], [], pen=pen_ipd_2, name='2') self.line_iopd = pyqtgraph.PlotCurveItem([], [], pen=pen_iopd, name='3') self.graph.plotItem.addItem(self.line_ipd_1) self.graph.plotItem.addItem(self.line_ipd_2) self.graph.plotItem.addItem(self.line_iopd) @QtCore.pyqtSlot(list, list, list, list) def draw_graph(self, ipd_1_values, ipd_2_values, iopd_values, time_values): # step 2 x, y = self.line_ipd_1.getData() x = np.append(x, time_values[-1]) self.line_ipd_1.setData(y=np.append(y, ipd_1_values[-1]), x=x) _, y = self.line_ipd_2.getData() self.line_ipd_2.setData(y=np.append(y, ipd_2_values[-1]), x=x) _, y = self.line_iopd.getData() self.line_iopd.setData(y=np.append(y, iopd_values[-1]), x=x) if (len(x)>0 and x[-1]-x[0]>self.window_size): # step 3 self.graph.plotItem.setXRange(x[-1]-self.window_size, x[-1])
76388928
76389086
The following is a minimum example code with the problem. struct TestView: View { @State var text = "Hello" let useCase = TestUseCase() init() { useCase.output = self } var body: some View { Text(text) .onAppear { // β‘  useCase.output = self useCase.show() } } } extension TestView: TestUseCaseOutput { func showText(text: String) { self.text = text } } class TestUseCase { var output: TestUseCaseOutput? func show() { output?.showText(text: "Changed") } } protocol TestUseCaseOutput { func showText(text: String) } This code changes the text from "Hello" to "Changed" when the view is displayed, but the change is not reflected. The showText method was called, but it had no effect. I also found that if I set the delegate at β‘ , the text was updated correctly. Can anyone tell me the cause of this problem?
Can not update @State variable via delegate set in View.init()
SwiftUI views are structs, and therefore immutable. Whenever a SwiftUI view changes, a new instance of that view struct is created. When you update text, SwiftUI needs to create a new instance of TestView. But the new instance has text set to Hello (and it also has a new instance of TestUseCase), so you don't see any change. The sequence of events is: You create an instance of TestView - this is initialised with text = "Hello" You update text, which triggers SwiftUI to recreate TestView The newly created instance of TestView is initialised with text = "Hello" In SwiftUI no object should ever need to hold a reference to a View. init() { useCase.output = self } The self you store in your TestUseCase instance will be thrown away as soon as the view is updated. It simply isn't useful to try and hold references to SwiftUI views. You should structure your code so that views respond to changes in your model (via @Published, for example). Your model should never try to update a view directly.
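A minimal sketch of that restructuring, keeping the use-case/output split from the question (the exact wiring via ObservableObject and @StateObject is an assumption about how you want to organise it):

import SwiftUI

protocol TestUseCaseOutput {
    func showText(text: String)
}

final class TestUseCase: ObservableObject, TestUseCaseOutput {
    @Published var text = "Hello"        // the view observes this, not vice versa
    func show() { showText(text: "Changed") }
    func showText(text: String) { self.text = text }
}

struct TestView: View {
    @StateObject private var useCase = TestUseCase()  // survives view recreation
    var body: some View {
        Text(useCase.text)
            .onAppear { useCase.show() }
    }
}

Because the model publishes its changes, SwiftUI re-renders the Text automatically and no delegate reference to the view is needed.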
76387595
76387683
I have used this package for the phone input format: import { PatternFormat } from 'react-number-format'; <PatternFormat value={value} className='form-control' format="(###) ###-####" /> Due to this format, whenever I add a single value, the formatted value shows in the input before the typed value reaches that position. I want to show the value in this format, but while I have entered only a single value it should not show the '-' yet. I want something like below. This is the link to the npm package I am using: https://s-yadav.github.io/react-number-format/docs/intro
react-number-format Showing Format On inserting value
You could build the pattern differently: once the value reaches the relevant length, swap the (empty space) for a - (dash). import { PatternFormat } from 'react-number-format'; let pattern; if (value.length >= 9) { pattern = "(###) ###-####"; } else { pattern = "(###) ### ####"; } <PatternFormat value={value} className='form-control' format={pattern} />
76387605
76387686
I installed libssh through vcpkg on my Windows 8.1 machine. vcpkg install libssh Now I am trying to compile my C++ code making use of libssh. #include <libssh/libssh.h> #include <stdlib.h> int main() { ssh_session my_ssh_session = ssh_new(); if (my_ssh_session == NULL) exit(-1); ... ssh_free(my_ssh_session); } But I am receiving the following error. D:\remoteDesktopTimeZone>gcc sample.cpp -o sampl sample.cpp:1:10: fatal error: libssh/libssh.h: No such file or directory 1 | #include <libssh/libssh.h> | ^~~~~~~~~~~~~~~~~ compilation terminated.
Compiling C++ code using libssh library through vcpkg
First, you should ensure that you are installing libraries with correct "triplet" matching your compiler and architecture. I don't know if your gcc is MingW or Cygwin. See instructions here. Second, you should either use CMake as described here, or manually point the compiler where to find the library headers and static libraries using the -I and -L command line flags.
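For a quick manual build without CMake, the compile line looks roughly like this (a sketch: the vcpkg root C:/vcpkg and the x64-mingw-static triplet are assumptions, substitute your own, and note that C++ sources should generally be compiled with g++ rather than gcc):

g++ sample.cpp -IC:/vcpkg/installed/x64-mingw-static/include -LC:/vcpkg/installed/x64-mingw-static/lib -lssh -o sample

The -I flag tells the compiler where libssh/libssh.h lives, and -L plus -lssh tell the linker where to find the library itself.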
76388987
76389120
I have an unknown number of items and item categories in a json array like so: [ { "x_name": "Some Name", "x_desc": "Some Description", "id": 1, "category": "Email" }, { "x_name": "Another name here", "x_desc": "Another description", "id": 2, "category": "Email" }, { "x_name": "Random Name", "x_desc": "Random Description", "id": 3, "category": "Email" }, { "x_name": "Owner Meetings", "x_desc": "Total count", "id": 167, "category": "Owner Specific" }, { "x_name": "Owner Tasks", "x_desc": "Total count of tasks", "id": 168, "category": "Owner Specific" }, { "x_name": "Owner Calls", "x_desc": "Total count of calls", "id": 169, "category": "Owner Specific" }, { "x_name": "Overall Total Views", "x_desc": "The total views", "id": 15, "category": "Totals Report" } ...... ] I need to group these JSONObjects based on the property "category". I've seen similar examples in JS using the reduce function but couldn't get a similar python solution. How can I efficiently do this in Python? The desired outcome would be: { "category": "Email", "points": [ { "x_name": "Some Name", "x_desc": "Some Description", "id": 1, "category": "Email" }, { "x_name": "Another name here", "x_desc": "Another description", "id": 2, "category": "Email" }, { "x_name": "Random Name", "x_desc": "Random Description", "id": 3, "category": "Email" } ] } and then: { "category": "Owner Specific", "points": [ { "x_name": "Owner Meetings", "x_desc": "Total count", "id": 167, "category": "Owner Specific" }, { "x_name": "Owner Tasks", "x_desc": "Total count of tasks", "id": 168, "category": "Owner Specific" }, { "x_name": "Owner Calls", "x_desc": "Total count of calls", "id": 169, "category": "Owner Specific" } ] } and so on. I do not know the value of the key "category" or the number of "categories" in the original JSON array.
How can I sort a JSON array by a key inside of it?
Here is a small script I made for that. Script def sort_by_category(): categories = {} output = [] a = [ { "x_name": "Some Name", "x_desc": "Some Description", "id": 1, "category": "Email", }, ... ] for i in a: if i["category"] in categories: categories[i["category"]].append(i) else: categories[i["category"]] = [i] for c in categories: o = {"category": c, "points": categories[c]} output.append(o) return(output) It browses your a array and creates another array based on categories, then it formats the output as you asked. Output [ { "category":"Email", "points":[ { "x_name":"Some Name", "x_desc":"Some Description", "id":1, "category":"Email" }, { "x_name":"Another name here", "x_desc":"Another description", "id":2, "category":"Email" }, { "x_name":"Random Name", "x_desc":"Random Description", "id":3, "category":"Email" } ] }, { "category":"Owner Specific", "points":[ { "x_name":"Owner Meetings", "x_desc":"Total count", "id":167, "category":"Owner Specific" }, { "x_name":"Owner Tasks", "x_desc":"Total count of tasks", "id":168, "category":"Owner Specific" }, { "x_name":"Owner Calls", "x_desc":"Total count of calls", "id":169, "category":"Owner Specific" } ] }, { "category":"Totals Report", "points":[ { "x_name":"Overall Total Views", "x_desc":"The total views", "id":15, "category":"Totals Report" } ] } ]
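The same grouping can also be written with collections.defaultdict, which drops the membership test; a sketch, with a being the parsed JSON array from the question:

from collections import defaultdict

def group_by_category(a):
    categories = defaultdict(list)  # missing keys start as empty lists
    for item in a:
        categories[item["category"]].append(item)
    return [{"category": c, "points": pts} for c, pts in categories.items()]

The output structure is identical to the script above.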
76387588
76387690
I'm using a library which requires a function with a void* pointer as a parameter. I have a 2D string array and I want to pass that array through that parameter and extract it inside the function. I successfully passed the array as a pointer but I don't know how to convert that pointer back to my array. This is my current code: String str_array[100][10]; int callback(void* data) { String* str_array_ptr[100][10] = (String* [100][10])data; (*str_array_ptr)[0][0] = "text"; return 0; } void test() { callback(&str_array); } However, when compiling, I obtain the following error message: error: ISO C++ forbids casting to an array type 'String* [100][10]' [-fpermissive] PS: I'm trying to use the SQLite library's sqlite3_exec() function and store the result of a "SELECT SQL query" into a 2D string array. SQLite C Interface - One-Step Query Execution Interface
Casting a void pointer to a 2D String array pointer (C/CPP)
You cannot cast a pointer to an array. Instead you access your array through another pointer. That pointer has type String (*)[10]. Like this String str_array[100][10]; int callback(void* data) { String (*str_array_ptr)[10] = (String (*)[10])data; str_array_ptr[0][0] = "text"; // Note no '*' return 0; } void test() { callback(str_array); // Note no '&' } Both the way you create the pointer, you don't need to use &, and the way you access the pointer, you don't need to use *, are also wrong in your code. See the code above for details. The fundamental issue here (and maybe the issue you are misunderstanding) is the difference between String *x[10]; and String (*x)[10];. In the first case x is an array of 10 pointers to String, in the second case x is a pointer to an array of ten String. It's the second option that you want.
76382813
76385918
Currently transitioning from BizTalk 2013r2 to 2020, and implementing Azure Pipelines to automate deployment with BTDF. So far, we're able to deploy our Core applications, but we've just realised there are dependencies with the 'child applications' (applications that take schemas from the Core apps). How should I refer to the Core application within Visual Studio in our dev environment (we used to reference the .dll from the local repos solution) How can we configure BTDF and CI/CD pipelines to point to the correct application when deploying the child? What happens if we need to update the Core application via BTDF/Pipeline - we'll need to undeploy all child applications that reference it before we can deploy surely - can this be done via the BTDF config?
BizTalk 2020 with BTDF & Azure Pipelines - Application dependencies
No, I don't believe that BTDF can take care of that. You should either: Increase the assembly version number of your Core Application and do a side-by-side deployment (i.e. leave the original ones in place). Later on, when you need the newer version in the dependent application, reference the later DLL (and yes, just have the DLL as an external assembly in the solution). Or have your Core Application pipeline undeploy all the dependent applications before undeploying and redeploying the Core Application, and then re-deploy all the dependent applications. My preference would be for the first option, as it is less complicated.
76387658
76387707
I have used a custom DisplayConverter on some columns of my NatTable. This display converter shows long values as hex strings. Whenever I scroll my NatTable horizontally, this converter shifts by one or multiple columns. This results in columns which should show hex values being shown in the default numeral format, while columns which should show numerals show hex values. In the following images, the first image shows how it should be displayed, that is, columns number 2 and 7 should show hex values (these are just long values with my custom converter applied). When I scroll my table to the right, this converter is instead applied to columns number 3 and 8. Default (as it should be) Scrolled right I have applied my CustomDisplayConverter (column override HEX_FORMAT) to certain columns. LinkerMapHexAddressDisplayConverter is the custom display converter which converts long values to hex strings for display. columnLabelAccumulator.registerColumnOverrides( pnames.indexOf(ILinkerMapConstants.PROP_KEY_SECTION_SIZE), NUMBER_FORMAT);//column 3 columnLabelAccumulator.registerColumnOverrides( pnames.indexOf(ILinkerMapConstants.PROP_KEY_OBJECT_SIZE), NUMBER_FORMAT);//column 8 configRegistry.registerConfigAttribute(CellConfigAttributes.DISPLAY_CONVERTER, new DefaultLongDisplayConverter(), DisplayMode.NORMAL, NUMBER_FORMAT); columnLabelAccumulator.registerColumnOverrides( pnames.indexOf(ILinkerMapConstants.PROP_KEY_SECTION_ADDRESS), HEX_FORMAT);//column 2 columnLabelAccumulator.registerColumnOverrides( pnames.indexOf(ILinkerMapConstants.PROP_KEY_OBJECT_MODULE_ADDRESS), HEX_FORMAT);//column 7 configRegistry.registerConfigAttribute(CellConfigAttributes.DISPLAY_CONVERTER, new LinkerMapHexAddressDisplayConverter(), DisplayMode.NORMAL, HEX_FORMAT);
Nattable Display converter shifts columns when the table is scrolled horizontally
This happens if you apply your custom labels (HEX_FORMAT in your case) on the ViewportLayer or above. If the labels have a fixed relation to the data structure, you should apply them on the DataLayer, where no index-to-position conversion takes place.
76383599
76386234
Subject 1: I am using Laravel version 7 in my project, and in order to add a query to all my models in Laravel, I have added a global scope to all my models. In this way, all my models inherit from another model. The mentioned model is provided below. namespace App\Models; use Illuminate\Database\Eloquent\Builder; class Model extends \Illuminate\Database\Eloquent\Model { /** * The "booting" method of the model. * @return void */ protected static function boot() { parent::boot(); if(!empty(Client::$Current_Token)) { static::addGlobalScope('client_token', function (Builder $builder) { $builder->where('client_token', Client::$Current_Token); }); } } } Subject 2: There is a model named "user" and a model named "role", and there is a many-to-many relationship between these 2 tables. Now, imagine that I want to retrieve all the roles associated with a user in the "user" model using the belongsToMany method, based on their relationship defined in the intermediate table. /** * The roles that belong to the user. */ public function roles(): BelongsToMany { return $this->belongsToMany(Role::class, 'role_user', 'user_id', 'role_id'); } Problem: I encounter the following error: SQLSTATE\[23000\]: Integrity constraint violation: 1052 Column 'client_token' in the WHERE clause is ambiguous and I know it is related to the condition I added in the global scope.
Getting SQLSTATE[23000] error with Laravel global scope in many-to-many relationship
I believe you got that error because several of your tables have a client_token column. So when a database query involves multiple tables, the database doesn't know which client_token column you are talking about. Let's create a scope class so we can access the table name of the model: <?php namespace App\Scopes; use App\Models\Client; use Illuminate\Database\Eloquent\Scope; use Illuminate\Database\Eloquent\Model; use Illuminate\Database\Eloquent\Builder; class ClientTokenScope implements Scope { /** * Apply the scope to a given Eloquent query builder. * * @param \Illuminate\Database\Eloquent\Builder $builder * @param \Illuminate\Database\Eloquent\Model $model * @return void */ public function apply(Builder $builder, Model $model) { $builder->where("{$model->getTable()}.client_token", Client::$Current_Token); } } Then applying the scope in the boot method: namespace App\Models; use App\Scopes\ClientTokenScope; use Illuminate\Database\Eloquent\Builder; class Model extends \Illuminate\Database\Eloquent\Model { /** * The "booting" method of the model. * @return void */ protected static function boot() { parent::boot(); if(!empty(Client::$Current_Token)) { static::addGlobalScope(new ClientTokenScope); } } }
76389075
76389126
I'm creating a calendar for event bookings. The calendar is already working at the event registration level. I'm having trouble then showing the events registered in the database on the calendar. To show the data in the database I am trying this way: var datta = [ {PequenoAlm: "Peq_AlmoΓ§o", Valencia: "Teste1", Ano: "2023", Mes: "6", Dia: "27", }, ]; var verif = []; var verif1 = []; var verif2 = []; var verif3 = []; var verif4 = []; for (var i = 0; i < datta.length; i++) { var PequenoAlm = datta[0].PequenoAlm; var Valencia = datta[0].Valencia; var Ano = datta[0].Ano; var Mes = datta[0].Mes; var Dia = datta[0].Mes; verif.push(PequenoAlm); verif1.push(Valencia); verif2.push(Ano); verif3.push(Mes); verif4.push(Dia); } var event_data = { "events": [ { "occasion": verif, "invited_count": verif1, "year": verif2, "month": verif3, "day": verif4, "cancelled": true } ] }; The problem is that it doesn't show any information on the calendar. I leave the complete code of how I am doing it and with the part of the code that I am trying to return the data from the database to the calendar. Snippet below: $(document).ready(function() { var date = new Date(); var today = date.getDate(); $(".right-button").click({ date: date }, next_year); $(".left-button").click({ date: date }, prev_year); $(".month").click({ date: date }, month_click); $(".right-button1").click({ date: date }, next_mes); $(".left-button1").click({ date: date }, prev_mes); $("#add-button").click({ date: date }, new_event); $(".months-row").children().eq(date.getMonth()).addClass("active-month"); init_calendar(date); var events = check_events(today, date.getMonth() + 1, date.getFullYear()); show_events(events, months[date.getMonth()], today); }); function init_calendar(date) { $(".tbody").empty(); $(".events-container").empty(); var calendar_days = $(".tbody"); var month = date.getMonth(); var year = date.getFullYear(); var day_count = days_in_month(month, year); var row = $("<tr class='table-row'></tr>"); var today = date.getDate(); date.setDate(1); var first_day = date.getDay(); for (var i = 0; i < 35 + first_day; i++) { var day = i - first_day + 1; if (i % 7 === 0) { calendar_days.append(row); row = $("<tr class='table-row'></tr>"); } if (i < first_day || day > day_count) { var curr_date = $("<td class='table-date nil'>" + "</td>"); row.append(curr_date); } else { var curr_date = $("<td class='table-date'>" + day + "</td>"); var events = check_events(day, month + 1, year); if (today === day && $(".active-date").length === 0) { curr_date.addClass("active-date"); show_events(events, months[month], day); } if (events.length !== 0) { curr_date.addClass("event-date"); } curr_date.click({ events: events, month: months[month], day: day }, date_click); row.append(curr_date); } } calendar_days.append(row); $(".year").text(year); calendar_days.append(row); if (month == 0) { $(".mes").text("Janeiro"); } if (month == 1) { $(".mes").text("Fevereiro"); } if (month == 2) { $(".mes").text("MarΓ§o"); } if (month == 3) { $(".mes").text("Abril"); } if (month == 4) { $(".mes").text("Maio"); } if (month == 5) { $(".mes").text("Junho"); } if (month == 6) { $(".mes").text("Julho"); } if (month == 7) { $(".mes").text("Agosto"); } if (month == 8) { $(".mes").text("Setembro"); } if (month == 9) { $(".mes").text("Outubro"); } if (month == 10) { $(".mes").text("Novembro"); } if (month == 11) { $(".mes").text("Dezembro"); } } function days_in_month(month, year) { var monthStart = new Date(year, month, 1); var monthEnd = new Date(year, month + 1, 1); return (monthEnd 
- monthStart) / (1000 * 60 * 60 * 24); } function date_click(event) { $(".events-container").show(250); $("#diaalog").hide(250); $(".active-date").removeClass("active-date"); $(this).addClass("active-date"); show_events(event.data.events, event.data.month, event.data.day); }; function month_click(event) { $(".events-container").show(250); $("#diaalog").hide(250); var date = event.data.date; $(".active-month").removeClass("active-month"); $(this).addClass("active-month"); var new_month = $(".month").index(this); date.setMonth(new_month); init_calendar(date); } function next_year(event) { $("#diaalog").hide(250); var date = event.data.date; var new_year = date.getFullYear() + 1; $("year").html(new_year); date.setFullYear(new_year); init_calendar(date); } function prev_year(event) { $("#diaalog").hide(250); var date = event.data.date; var new_year = date.getFullYear() - 1; $("year").html(new_year); date.setFullYear(new_year); init_calendar(date); } function next_mes(event) { $("#diaalog").hide(250); var date = event.data.date; var new_mes = date.getMonth() + 1; $("mes").html(new_mes); date.setMonth(new_mes); init_calendar(date); } function prev_mes(event) { $("#diaalog").hide(250); var date = event.data.date; var new_mes = date.getMonth() - 1; $("mes").html(new_mes); date.setMonth(new_mes); init_calendar(date); } function new_event(event) { if ($(".active-date").length === 0) return; $("inpuut").click(function() { $(this).removeClass("error-inpuut"); }) $("#diaalog input[type=text]").val(''); $("#diaalog input[type=number]").val(''); $(".events-container").hide(250); $("#diaalog").show(250); $("#cancel-button").click(function() { $("#reff").removeClass("error-inpuut"); $("#reff1").removeClass("error-inpuut"); $("#reff2").removeClass("error-inpuut"); $("#almm").removeClass("error-inpuut"); $("#almm1").removeClass("error-inpuut"); $("#almm2").removeClass("error-inpuut"); $("#almm3").removeClass("error-inpuut"); $("#valref").removeClass("error-inpuut"); $("#Dataref").removeClass("error-inpuut"); $("#diaalog").hide(250); $(".events-container").show(250); }); } function show_events(events, month, day) { $(".events-container").empty(); $(".events-container").show(250); console.log(event_data["events"]); if (events.length === 0) { var event_card = $("<div class='event-card'></div>"); var event_name = $("<div class='event-name'>NΓ£o hΓ‘ refeiçáes marcadas para " + day + " " + month + ".</div>"); $(event_card).css({ "border-left": "10px solid #FF1744" }); $(event_card).append(event_name); $(".events-container").append(event_card); } else { for (var i = 0; i < events.length; i++) { var event_card = $("<div class='event-card'></div>"); var event_name = $("<div class='event-name'>" + events[i]["occasion"] + ":</div>"); var event_count = $("<div class='event-count'>" + events[i]["invited_count"] + " Invited</div>"); if (events[i]["cancelled"] === true) { $(event_card).css({ "border-left": "10px solid #FF1744" }); event_count = $("<div class='event-cancelled'>Cancelled</div>"); } $(event_card).append(event_name).append(event_count); $(".events-container").append(event_card); } } } function check_events(day, month, year) { var events = []; for (var i = 0; i < event_data["events"].length; i++) { var event = event_data["events"][i]; if (event["day"] === day && event["month"] === month && event["year"] === year) { events.push(event); } } return events; } var datta = [{ PequenoAlm: "Peq_AlmoΓ§o", Valencia: "Teste1", Ano: "2023", Mes: "6", Dia: "27", }, ]; var verif = []; var verif1 = []; var verif2 = []; var 
verif3 = []; var verif4 = []; for (var i = 0; i < datta.length; i++) { var PequenoAlm = datta[0].PequenoAlm; var Valencia = datta[0].Valencia; var Ano = datta[0].Ano; var Mes = datta[0].Mes; var Dia = datta[0].Mes; verif.push(PequenoAlm); verif1.push(Valencia); verif2.push(Ano); verif3.push(Mes); verif4.push(Dia); } var event_data = { "events": [{ "occasion": verif, "invited_count": verif1, "year": verif2, "month": verif3, "day": verif4, "cancelled": true }] }; const months = [ "Janeiro", "Fevereiro", "MarΓ§o", "Abril", "maio", "Junho", "Julho", "Agosto", "Setembro", "Outubro", "Novembro", "Dezembro" ]; .conteent { overflow: none; max-width: 790px; padding: 0px 0; height: 500px; position: relative; margin: 20px auto; background: #52A0FD; background: -moz-linear-gradient(right, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); background: -webkit-linear-gradient(right, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); background: linear-gradient(to left, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); border-radius: 3px; box-shadow: 3px 8px 16px rgba(0, 0, 0, 0.19), 0 6px 6px rgba(0, 0, 0, 0.23); -moz-box-shadow: 3px 8px 16px rgba(0, 0, 0, 0.19), 0 6px 6px rgba(0, 0, 0, 0.23); -webkit-box-shadow: 3px 8px 16px rgba(0, 0, 0, 0.19), 0 6px 6px rgba(0, 0, 0, 0.23); } /* Events display */ .events-container { overflow-y: scroll; height: 100%; margin: 0px auto; font: 13px Helvetica, Arial, sans-serif; display: inline-block; padding: 0 10px; border-bottom-right-radius: 3px; border-top-right-radius: 3px; } .events-container:after { clear: both; } .event-card { padding: 20px 0; width: 350px; margin: 20px auto; display: block; background: #fff; border-left: 10px solid #52A0FD; border-radius: 3px; box-shadow: 3px 8px 16px rgba(0, 0, 0, 0.19), 0 6px 6px rgba(0, 0, 0, 0.23); -moz-box-shadow: 3px 8px 16px rgba(0, 0, 0, 0.19), 0 6px 6px rgba(0, 0, 0, 0.23); -webkit-box-shadow: 3px 8px 16px rgba(0, 0, 0, 0.19), 0 6px 6px rgba(0, 0, 0, 0.23); } .event-count, .event-name, .event-cancelled { display: inline; padding: 0 10px; font-size: 1rem; } .event-count { color: #52A0FD; text-align: right; } .event-name { padding-right: 0; text-align: left; } .event-cancelled { color: #FF1744; text-align: right; } /* Calendar wrapper */ .calendar-container { position: relative; margin: 0px auto; height: 100%; background: #fff; font: 13px Helvetica, Arial, san-serif; display: inline-block; border-bottom-left-radius: 3px; border-top-left-radius: 3px; } .calendar-container:after { clear: both; } .calendar { display: table; } /* Calendar Header */ .year-header { background: #52A0FD; background: -moz-linear-gradient(left, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); background: -webkit-linear-gradient(left, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); background: linear-gradient(to right, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); font-family: Helvetica; box-shadow: 0 3px 6px rgba(0, 0, 0, 0.16), 0 3px 6px rgba(0, 0, 0, 0.23); -moz-box-shadow: 0 3px 6px rgba(0, 0, 0, 0.16), 0 3px 6px rgba(0, 0, 0, 0.23); -webkit-box-shadow: 0 3px 6px rgba(0, 0, 0, 0.16), 0 3px 6px rgba(0, 0, 0, 0.23); height: 40px; text-align: center; position: relative; color: #fff; border-top-left-radius: 3px; } .year-header span { display: inline-block; font-size: 20px; line-height: 40px; } .left-button, .right-button { cursor: pointer; width: 28px; text-align: center; position: absolute; } .left-button1, .right-button1 { cursor: pointer; width: 28px; text-align: center; position: absolute; } .left-button { left: 0; -webkit-border-top-left-radius: 5px; -moz-border-radius-topleft: 5px; 
border-top-left-radius: 5px; } .left-button1 { left: 0; -webkit-border-top-left-radius: 5px; -moz-border-radius-topleft: 5px; border-top-left-radius: 5px; } .right-button { right: 0; top: 0; -webkit-border-top-right-radius: 5px; -moz-border-radius-topright: 5px; border-top-right-radius: 5px; } .right-button1 { right: 0; top: 0; -webkit-border-top-right-radius: 5px; -moz-border-radius-topright: 5px; border-top-right-radius: 5px; } .left-button:hover { background: #3FADFF; } .left-button1:hover { background: #3FADFF; } .right-button:hover { background: #00C1FF; } .right-button1:hover { background: #00C1FF; } .ajustebot { margin-top: -5%; } /* Buttons */ .bbuutton { cursor: pointer; -webkit-appearance: none; -moz-appearance: none; appearance: none; outline: none; font-size: 1rem; border-radius: 25px; padding: 0.65rem 1.9rem; transition: .2s ease all; color: white; border: none; box-shadow: -1px 10px 20px #9BC6FD; background: #52A0FD; background: -moz-linear-gradient(left, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); background: -webkit-linear-gradient(left, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); background: linear-gradient(to right, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); } #cancel-button { box-shadow: -1px 10px 20px #FF7DAE; background: #FF1744; background: -moz-linear-gradient(left, #FF1744 0%, #FF5D95 80%, #FF5D95 100%); background: -webkit-linear-gradient(left, #FF1744 0%, #FF5D95 80%, #FF5D95 100%); background: linear-gradient(to right, #FF1744 0%, #FF5D95 80%, #FF5D95 100%); } #add-button { display: block; position: absolute; right: 20px; bottom: 20px; } #add-button:hover, #ok-button:hover, #cancel-button:hover { transform: scale(1.03); } #add-button:active, #ok-button:active, #cancel-button:active { transform: translateY(3px) scale(.97); } /* Days/months tables */ .days-table, .dates-table { border-collapse: separate; text-align: center; } .day { height: 26px; width: 26px; padding: 0 10px; line-height: 26px; border: 2px solid transparent; text-transform: uppercase; font-size: 90%; color: #9e9e9e; } .active-month { font-weight: bold; font-size: 14px; color: #FF1744; text-shadow: 0 1px 4px RGBA(255, 50, 120, .8); } /* Dates table */ .table-date { cursor: default; color: #2b2b2b; height: 26px; width: 26px; font-size: 15px; padding: 10px; line-height: 26px; text-align: center; border-radius: 50%; border: 2px solid transparent; transition: all 250ms; } .table-date:not(.nil):hover { border-color: #FF1744; box-shadow: 0 2px 6px RGBA(255, 50, 120, .9); } .event-date { border-color: #52A0FD; box-shadow: 0 2px 8px RGBA(130, 180, 255, .9); } .active-date { background: #FF1744; box-shadow: 0 2px 8px RGBA(255, 50, 120, .9); color: #fff; } .event-date.active-date { background: #52A0FD; box-shadow: 0 2px 8px RGBA(130, 180, 255, .9); } /* input dialog */ .diaalog { z-index: 5; background: #fff; position: absolute; width: 438px; height: 500px; left: 352px; border-top-right-radius: 3px; border-bottom-right-radius: 3px; display: none; border-left: 1px #aaa solid; top: 0%; } .diaalog-header { margin: 20px; color: #333; text-align: center; } .form-ccontainer { margin-top: 5%; } .form-labeel { color: #333; } .inpuut { border: none; background: none; border-bottom: 1px #aaa solid; display: block; margin-bottom: 50px; width: 200px; height: 40px; text-align: center; transition: border-color 250ms; } .inpuut:focus { outline: none; border-color: #00C9FB; } .error-inpuut { border-color: #FF1744; } /* Tablets and smaller */ @media only screen and (max-width: 780px) { .conteent { overflow: visible; position: relative; 
max-width: 100%; width: 370px; height: 100%; background: #52A0FD; background: -moz-linear-gradient(left, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); background: -webkit-linear-gradient(left, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); background: linear-gradient(to right, #52A0FD 0%, #00C9FB 80%, #00C9FB 100%); } .diaalog { width: 370px; height: 450px; border-radius: 3px; top: 0; left: 0; } .events-container { float: none; overflow: visible; margin: 0 auto; padding: 0; display: block; left: 0; border-radius: 3px; } .calendar-container { float: none; padding: 0; margin: 0 auto; margin-right: 0; display: block; left: 0; border-radius: 3px; box-shadow: 3px 8px 16px rgba(0, 0, 0, 0.19), 0 6px 6px rgba(0, 0, 0, 0.23); -moz-box-shadow: 3px 8px 16px rgba(0, 0, 0, 0.19), 0 6px 6px rgba(0, 0, 0, 0.23); -webkit-box-shadow: 3px 8px 16px rgba(0, 0, 0, 0.19), 0 6px 6px rgba(0, 0, 0, 0.23); } } /* Small phone screens */ @media only screen and (max-width: 400px) { .conteent, .events-container, .year-header, .calendar-container { width: 320px; } .diaalog { width: 320px; } .days-table { width: 320px; padding: 7px 7px; } .event-card { width: 300px; } .day { padding: 7px 7px; } .table-date { width: 320px; height: 20px; line-height: 20px; } .event-name, .event-count, .event-cancelled { font-size: .8rem; } #add-button { bottom: 2px; right: 5px; padding: 0.5rem 1.5rem; } .ajustebot { margin-top: -12%; } } <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class="conteent"> <div class="calendar-container"> <div class="calendar"> <div class="year-header"> <span class="left-button" id="prev"> &lang; </span> <span class="year" id="label"></span> <span class="right-button" id="next"> &rang; </span> </div> <div class="year-header"> <span class="left-button1" id="prev1"> &lang; </span> <span class="mes" id="label"></span> <span class="right-button1" id="next1"> &rang; </span> </div> <table class="days-table"> <td class="day">Dom</td> <td class="day">Seg</td> <td class="day">Ter</td> <td class="day">Qua</td> <td class="day">Qui</td> <td class="day">Sex</td> <td class="day">Sab</td> </table> <div class="frame"> <table class="dates-table"> <tbody class="tbody"> </tbody> </table> </div> <button class="bbuutton" id="add-button">Marcação</button> </div> </div> <div class="events-container"></div> <div class="diaalog" id="diaalog"> <h2 class="diaalog-header"> Adicionar Nova Refeição </h2> <form class="fform" id="fform"> <div class="form-ccontainer" align="center"> <p class="form-labeel">Pequenas Refeiçáes <span style="color: red;">*</span></p> <div class="radio_containner"> <input type="checkbox" class="inradio" name="reff" id="reff" value="Peq_AlmoΓ§o"> <label for="reff" class="labradio">Pequeno-AlmoΓ§o</label> <input type="checkbox" class="inradio" name="reff1" id="reff1" value="Lanche"> <label for="reff1" class="labradio" style="margin-left: 3%;">Lanche</label> <input type="checkbox" class="inradio" name="reff2" id="reff2" value="Ceia"> <label for="reff2" class="labradio" style="margin-left: 3%;">Ceia</label> </div> <p class="form-labeel">Refeição <span style="color: red;">*</span></p> <div class="radio_containner"> <input type="checkbox" class="inradio" name="almm" id="almm" value="AlmoΓ§o"> <label for="almm" class="labradio">AlmoΓ§o</label> <input type="checkbox" class="inradio" name="almm1" id="almm1" value="AlmoΓ§o_(Dieta)"> <label for="almm1" class="labradio" style="margin-left: 3%;">AlmoΓ§o Dieta</label> <input type="checkbox" class="inradio" name="almm2" id="almm2" 
value="Jantar"> <label for="almm2" class="labradio" style="margin-left: 3%;">Jantar</label> <input type="checkbox" class="inradio" name="almm3" id="almm3" value="Jantar_(Dieta)"> <label for="almm3" class="labradio" style="margin-left: 3%;">Jantar Dieta</label> </div> <div class="form-group"> <p class="form-labeel"> ValΓͺncia <span style="color: red;">*</span></p> <select class="js-states form-control ajuste sssinglett" name="valref" id="valref"> <option></option> <option value="3" selected> ERPI</option> </select> </div> <label for="Dataref" class="form-labeel">PerΓ­odo de Marcação </label> <input type="date" class="inpuut" name="Dataref" id="Dataref"> <div class="ajustebot"> <input type="button" value="Cancel" class="bbuutton" id="cancel-button"> <input type="button" value="OK" class="bbuutton" id="ok-button"> </div> </div> </form> </div> </div> codepen Can you help overcome this difficulty? I'm trying like this: var event_data = { "events": [] }; $(document).ready(function () { $.getJSON('consrefeicoes.php', function (datta) { for (var i = 0; i < datta.length; i++) { PequenoAlm = datta[i].PequenoAlm; Valencia = datta[i].Valencia; Ano = datta[i].Ano; mes = datta[i].mes; dia = datta[i].dia; event_data.events.push({ "occasion": PequenoAlm, "invited_count": Valencia, "year": Number(Ano), "month": Number(mes), "day": Number(dia), "cancelled": true }) }; }); }); php: $Colaborador = $_SESSION['usuarioId']; $query = $conn->prepare("SELECT PequenoAlm, Alm, Lan, jant, Ceia, Valencia, YEAR(Data) AS Ano, Colaborador, MONTH(Data) AS mes, DAY(Data) AS dia FROM raddb.MarcErpi WHERE raddb.MarcErpi.Colaborador = ?"); $query->execute([$Colaborador]); $json = []; while($row=$query->fetch(PDO::FETCH_ASSOC)){ extract($row); $json[]= ['PequenoAlm' =>(string)$PequenoAlm, 'Alm' =>(string)$Alm, 'Lan' =>(string)$Lan, 'jant' =>(string)$jant, 'Ceia' =>(string)$Ceia, 'Valencia' =>(string)$Valencia, 'Ano' =>(string)$Ano, 'Colaborador' =>(string)$Colaborador, 'mes' =>(string)$mes, 'dia' =>(string)$dia]; } echo json_encode($json); (3) [{…}, {…}, {…}] 0 : {PequenoAlm: 'Peq_AlmoΓ§o', Alm: 'AlmoΓ§o', Lan: 'Lanche', jant: '', Ceia: '', …} 1 : {PequenoAlm: 'Peq_AlmoΓ§o', Alm: 'AlmoΓ§o', Lan: '', jant: '', Ceia: '', …} 2 : {PequenoAlm: '', Alm: '', Lan: 'Lanche', jant: 'Jantar', Ceia: '', …} length : 3 [[Prototype]] : Array(0)
Return data from database and show in calendar
You are configuring your event incorrectly - when defining event_data, you define a single event that has arrays of all of the events. For example, the first event has the date of all of them. Since there is only one, you end up with day = ["23"] instead of 23, which is breaking the calendar code. You also need to convert each element to a number, as they are currently strings. Here is the correction: var event_data = { "events": [] }; for (var i = 0; i < verif.length; i++) { event_data.events.push({ "occasion": verif[i], "invited_count": Number(verif1[i]), "year": Number(verif2[i]), "month": Number(verif3[i]), "day": Number(verif4[i]), "cancelled": true }); } This code adds an event to event data for each item in verif. In addition, earlier in your code, you define Dia incorrectly, with the month rather than the day. Correction: var Dia = datta[0].Dia; rather than var Dia = datta[0].Mes; Updated pen: https://codepen.io/CoderMuffin/pen/RweXKJM
76381797
76387767
I want to get the Calendar Events of all of the members in the organization/team via Power Automate. As I see now the "Get Calendars (V2)" and the "Get Calendar View of Events (V3)" only return the corresponding values of the user itself (the one who owns the flow). I was wondering if there's a way to get the calendar events of the group members given the fact that they've given the corresponding permission to share all the calendar data with the organization members. Any help on this will be much appreciated. Thanks!
Get Calendar Events of all group members via Power Automate
The built-in connector will use the current user. I don't think you can impersonate a user this way. The best approach or alternative is to use Graph API with a service account (Azure AD app registration). You can get events from anyone, including each user that is part of a group. GET /users/{id | userPrincipalName}/calendar/events GET /groups/{id}/members To call Graph API with Power Automate you will need the HTTP connector (it's a premium connector and the connector must be enabled for your environment) You have to declare a service account in Azure AD (or Microsoft Entra): Create a new app registration (Applications > App registrations) Copy the client id Copy the tenant id Add the following permissions (Applications > App registrations > API permissions) Graph API > Application > Calendars.Read.All Graph API > Application > Directory.Read.All Graph API > Application > GroupMember.Read.All Grant admin consent for the permissions Generate a client secret (Applications > App registrations > Certificates & secrets > Client secrets) In Power Automate, create a new cloud flow, for example a button with the group id as input. Declare variables: Client ID of the app registration Client secret for the app registration Tenant ID Array to store all the events Next, call Graph API to get group members with the generic HTTP connector Method: GET URI: https://graph.microsoft.com/v1.0/groups/<your group id input>/members Headers: Content-Type application/json Headers: Accept application/json Authentication: Active Directory OAuth Tenant: your tenant id variable Audience: https://graph.microsoft.com Client ID: your client id variable Credential Type: Secret Secret: your client secret variable Transform the result to JSON using the schema: { "type": "object", "properties": { "@@odata.context": { "type": "string" }, "value": { "type": "array", "items": { "type": "object", "properties": { "id": { "type": "string" }, "mail": { "type": "string" } }, "required": [ "id", "mail" ] } } } } For each member, call Graph API to get the user's events Method: GET URI: https://graph.microsoft.com/v1.0/users/<user id from json foreach>/events Headers: Content-Type application/json Headers: Accept application/json Authentication: Active Directory OAuth Tenant: your tenant id variable Audience: https://graph.microsoft.com Client ID: your client id variable Credential Type: Secret Secret: your client secret variable Transform the result to JSON using the schema: { "type": "object", "properties": { "@@odata.context": { "type": "string" }, "value": { "type": "array", "items": { "type": "object", "properties": { "@@odata.etag": { "type": "string" }, "id": { "type": "string" }, "createdDateTime": { "type": "string" }, "lastModifiedDateTime": { "type": "string" }, "changeKey": { "type": "string" }, "categories": { "type": "array" }, "transactionId": { "type": "string" }, "originalStartTimeZone": { "type": "string" }, "originalEndTimeZone": { "type": "string" }, "iCalUId": { "type": "string" }, "reminderMinutesBeforeStart": { "type": "integer" }, "isReminderOn": { "type": "boolean" }, "hasAttachments": { "type": "boolean" }, "subject": { "type": "string" }, "bodyPreview": { "type": "string" }, "importance": { "type": "string" }, "sensitivity": { "type": "string" }, "isAllDay": { "type": "boolean" }, "isCancelled": { "type": "boolean" }, "isOrganizer": { "type": "boolean" }, "responseRequested": { "type": "boolean" }, "seriesMasterId": {}, "showAs": { "type": "string" }, "type": { "type": "string" }, "webLink": { "type": "string" },
"onlineMeetingUrl": {}, "isOnlineMeeting": { "type": "boolean" }, "onlineMeetingProvider": { "type": "string" }, "allowNewTimeProposals": { "type": "boolean" }, "occurrenceId": {}, "isDraft": { "type": "boolean" }, "hideAttendees": { "type": "boolean" }, "responseStatus": { "type": "object", "properties": { "response": { "type": "string" }, "time": { "type": "string" } } }, "body": { "type": "object", "properties": { "contentType": { "type": "string" }, "content": { "type": "string" } } }, "start": { "type": "object", "properties": { "dateTime": { "type": "string" }, "timeZone": { "type": "string" } } }, "end": { "type": "object", "properties": { "dateTime": { "type": "string" }, "timeZone": { "type": "string" } } }, "location": { "type": "object", "properties": { "displayName": { "type": "string" }, "locationType": { "type": "string" }, "uniqueIdType": { "type": "string" }, "address": { "type": "object", "properties": {} }, "coordinates": { "type": "object", "properties": {} } } }, "locations": { "type": "array" }, "recurrence": {}, "attendees": { "type": "array" }, "organizer": { "type": "object", "properties": { "emailAddress": { "type": "object", "properties": { "name": { "type": "string" }, "address": { "type": "string" } } } } }, "onlineMeeting": {}, "[email protected]": { "type": "string" }, "[email protected]": { "type": "string" } }, "required": [ "@@odata.etag", "id", "createdDateTime", "lastModifiedDateTime", "changeKey", "categories", "transactionId", "originalStartTimeZone", "originalEndTimeZone", "iCalUId", "reminderMinutesBeforeStart", "isReminderOn", "hasAttachments", "subject", "bodyPreview", "importance", "sensitivity", "isAllDay", "isCancelled", "isOrganizer", "responseRequested", "seriesMasterId", "showAs", "type", "webLink", "onlineMeetingUrl", "isOnlineMeeting", "onlineMeetingProvider", "allowNewTimeProposals", "occurrenceId", "isDraft", "hideAttendees", "responseStatus", "body", "start", "end", "location", "locations", "recurrence", "attendees", "organizer", "onlineMeeting", "[email protected]", "[email protected]" ] } } } } Activate and start the flow with a group id as input:
76387472
76387866
My goal is to change value 99999 with the value adjacent to it unless it's 99999 again. I took the advice from here before, now I am having a new problem. MRE: 'as' is a dataframe with 9 different cohort datasets; 10030 obs of 7060 variables. I am mainly (as of now) dealing with as$AS1_WEIGHT ... as$AS9_WEIGHT > as %>% + select(starts_with("AS") & ends_with("_WEIGHT")) %>% head() %>% dput() structure(list(AS1_WEIGHT = c(72, 59, 50, 55.2, 82.1, 50.4), AS2_WEIGHT = c(74.8, NA, NA, 54.8, 84.5, 52.5), AS3_WEIGHT = c(75.2, NA, NA, 55.9, 81.7, 54.6), AS4_WEIGHT = c(75, NA, NA, 55.1, 80.6, NA), AS5_WEIGHT = c(75.4, NA, NA, 58.8, 89.5, NA), AS6_WEIGHT = c(77.3, NA, NA, NA, NA, NA), AS7_WEIGHT = c(70.7, NA, NA, 56, NA, NA), AS8_WEIGHT = c(73.8, NA, NA, 55.5, NA, NA), AS9_WEIGHT = c(74.5, NA, NA, 54.8, NA, 52)), row.names = c(NA, -6L), class = c("tbl_df", "tbl", "data.frame")) as %<>% mutate(row = row_number()) %>% tidyr::pivot_longer(starts_with("AS") & ends_with("_WEIGHT")) %>% mutate(value = if_else(value == '99999', lead(value), value), .by = row) %>% pivot_wider(names_from = name, values_from = value) returns: Error in tidyr::pivot_longer(): ! Names must be unique. βœ– These names are duplicated: "name" at locations 7049 and 7053. "value" at locations 7050 and 7054. β„Ή Use argument names_repair to specify repair strategy. Run rlang::last_trace() to see where the error occurred. So I ran this code to see which columns are duplicated: > dup_col <- duplicated(base::as.list(as)) colnames(as[dup_col]) character(0) I ran another code to see if I am referring to the right columns > as %>% select(starts_with("AS") & ends_with("_WEIGHT")) %>% colnames() [1] "AS1_WEIGHT" "AS2_WEIGHT" "AS3_WEIGHT" "AS4_WEIGHT" "AS5_WEIGHT" "AS6_WEIGHT" "AS7_WEIGHT" "AS8_WEIGHT" [9] "AS9_WEIGHT" Thank you in advance!
tidyr::pivot_longer() with duplicate problems with no apparent duplicate column names or dataset in R
I suspect you already have a column named name or value before you run pivot_longer, which by default tries to create columns with those names. As noted here, the error message isn't necessarily clear that's the problem. Try grep("name", colnames(as)) and grep("value", colnames(as)) to find those columns. Either rename in your data frame or use pivot_longer( ... names_to = "a_new_name_col", values_to = "a_new_value_col") data.frame(a = 1:2, name = 3:4, value = 7:8) %>% tidyr::pivot_longer(a) #Error in `vec_cbind()`: #! Names must be unique. #βœ– These names are duplicated: # * "name" at locations 1 and 3. # * "value" at locations 2 and 4. #β„Ή Use argument `names_repair` to specify repair strategy. #Run `rlang::last_trace()` to see where the error occurred. data.frame(a = 1:2, name2 = 3:4, value2 = 7:8) %>% tidyr::pivot_longer(a) ## A tibble: 2 Γ— 4 # name2 value2 name value # <int> <int> <chr> <int> #1 3 7 a 1 #2 4 8 a 2
76383552
76386291
I have a has_many_through association where Users have many Projects through ProjectUsers. Rails has some magic that allows updating this relationship with: u = User.first u.update(project_ids: [...]) Is there a clean way to do the same thing with create? Running User.create(name: ..., project_ids: [...]) fails with Validation failed: Project users is invalid. I suspect this is because Rails tries to create the ProjectUser record before creating the User record and has some built-in validation on join tables to validate that both sides of the join already exist. The ProjectUser model has no custom validation. class ProjectUser < ApplicationRecord belongs_to :project belongs_to :user end Is there a simple way to get around this?
Create Record With "has_many_through" Association – Ruby on Rails
Active Record supports automatic identification for most associations with standard names. However, Active Record will not automatically identify bi-directional associations that contain the :through or :foreign_key options. (You can check here) So you have to define inverse_of explicitly. class Project < ApplicationRecord has_many :project_users, foreign_key: :project_id, inverse_of: :project has_many :users, through: :project_users end class User < ApplicationRecord has_many :project_users, foreign_key: :user_id, inverse_of: :user has_many :projects, through: :project_users end class ProjectUser < ApplicationRecord belongs_to :project belongs_to :user end
76387583
76387869
Can anyone explain why the SQL function (using SQL Server 2019) returns results that to me appear to be counter intuitive? Here are the queries and the scores: SELECT DIFFERENCE('Good', 'Good Samaritans'); --Result is 4 (High score match) SELECT DIFFERENCE('Samaritans', 'Good Samaritans'); --Result is 1 (Low score match) SELECT DIFFERENCE('Sam', 'Good Samaritans'); --Result is 2 (A higher score than above!) I understand DIFFERENCE uses SOUNDEX to match consonants phonetically, but the results above seem very odd particularly with the second query. Is it something to do with the space and the proceeding string?
SQL Function DIFFERENCE returns interesting scores
If you change the order of words you will see there is a bias on the first word. Then if you consider SOUNDEX you will begin to understand why. Also read the reference below. SOUNDEX converts an alphanumeric string to a four-character code that is based on how the string sounds when spoken in English. The first character of the code is the first character of character_expression, converted to upper case. The second through fourth characters of the code are numbers that represent the letters in the expression. The letters A, E, I, O, U, H, W, and Y are ignored unless they are the first letter of the string. Zeroes are added at the end if necessary to produce a four-character code. For more information about the SOUNDEX code, see The Soundex Indexing System -- Bias based on left to right order of words SELECT 10 id, DIFFERENCE('Good', 'Good Samaritans') --Result is 4 (High score match) union all SELECT 11 id, DIFFERENCE('Good', 'Samaritans Good') --Result is 1 (Low score match) union all SELECT 20, DIFFERENCE('Samaritans', 'Good Samaritans') --Result is 1 (Low score match) union all SELECT 21, DIFFERENCE('Samaritans', 'Samaritans Good') --Result is 4 (High score match) union all SELECT 30, DIFFERENCE('Sam', 'Good Samaritans') --Result is 2 (On the upper low side) id (No column name) 10 4 11 1 20 1 21 4 30 2 fiddle Your expectations from difference may be too high.
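To make the description above concrete, here is a minimal Python sketch of the classic Soundex algorithm (an approximation for illustration; SQL Server's exact implementation may differ in edge cases). It shows why the first word dominates: the code is full after at most four characters, so later words rarely contribute.

# digit codes for the classic Soundex algorithm described above
CODES = {c: str(d) for d, letters in enumerate(
    ["bfpv", "cgjkqsxz", "dt", "l", "mn", "r"], start=1) for c in letters}

def soundex(expr: str) -> str:
    """Approximate 4-character Soundex code (assumes expr starts with a letter)."""
    expr = expr.lower()
    code = expr[0].upper()        # first character is kept as-is
    prev = CODES.get(expr[0])     # code of the first letter
    for c in expr[1:]:
        d = CODES.get(c)
        if d and d != prev:       # adjacent letters with the same code count once
            code += d
        if c not in "hw":         # h and w do not separate duplicate codes
            prev = d
    return (code + "000")[:4]     # zero-pad to four characters

print(soundex("Good"))             # G300
print(soundex("Good Samaritans"))  # G325: dominated by the first word
print(soundex("Samaritans"))       # S563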
76389016
76389173
I have the following dataframe: import pandas as pd pd.DataFrame({'index': {0: 'x0', 1: 'x1', 2: 'x2', 3: 'x3', 4: 'x4', 5: 'x5', 6: 'x6', 7: 'x7', 8: 'x8', 9: 'x9', 10: 'x10'}, 'distances_0': {0: 0.42394711275317537, 1: 0.40400179114038315, 2: 0.4077213959237454, 3: 0.3921048592156785, 4: 0.25293154279281627, 5: 0.2985576890173001, 6: 0.0, 7: 0.32563550923886675, 8: 0.33341592647322754, 9: 0.30653189426783256, 10: 0.31749957588191197}, 'distances_1': {0: 0.06684300576184829, 1: 0.04524728117549289, 2: 0.04896118088709522, 3: 0.03557204741075342, 4: 0.10588973399963886, 5: 0.06178330590643222, 6: 0.0001, 7: 0.6821440376099591, 8: 0.027074111335967314, 9: 0.6638424898747833, 10: 0.674718181953208}, 'distances_2': {0: 0.7373816871931514, 1: 0.7184619375104593, 2: 0.7225072199147892, 3: 0.7075191710741303, 4: 0.5679436864793461, 5: 0.6142446533143044, 6: 0.31652743219529056, 7: 0.010859948083988706, 8: 0.6475070638933254, 9: 0.010567926115431175, 10: 0.0027932480510772413}} ) index distances_0 distances_1 distances_2 0 x0 0.423947 0.066843 0.737382 1 x1 0.404002 0.045247 0.718462 2 x2 0.407721 0.048961 0.722507 3 x3 0.392105 0.035572 0.707519 4 x4 0.252932 0.105890 0.567944 5 x5 0.298558 0.061783 0.614245 6 x6 0.000000 0.000100 0.316527 7 x7 0.325636 0.682144 0.010860 8 x8 0.333416 0.027074 0.647507 9 x9 0.306532 0.663842 0.010568 10 x10 0.317500 0.674718 0.002793 I would like to get, for every distances_ column, the index with the minimum value. The requirement is that each distances_ column, should have a different index: For instance index=="x6" has the minimum value for both distances_0 and distances_1, columns, but it should be chosen only for one (and in this case it should be chosen for distances_0, since 0.000000 < 0.000100). How could I do that ?
How to get the index with the minimum value in a column avoiding duplicate selection
Use Series.idxmin, filtering out values already present in the output list: df1 = df.set_index('index') out = [] for c in df1.columns: out.append(df1.loc[~df1.index.isin(out), c].idxmin()) print(out) ['x6', 'x8', 'x10']
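Note that this is a greedy pass: each column takes the best remaining index in column order. If you instead want the selection that minimizes the total distance across all columns, that is the classic assignment problem; a sketch with SciPy's linear_sum_assignment, assuming SciPy is available (it happens to return the same result on the sample data):

import numpy as np
from scipy.optimize import linear_sum_assignment

# one cost-matrix row per distances_ column, one column per candidate index
cost = df.filter(like='distances_').to_numpy().T
row_ind, col_ind = linear_sum_assignment(cost)
print(df['index'].to_numpy()[col_ind].tolist())  # ['x6', 'x8', 'x10']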
76381633
76387897
I have a pandas dataframe which is sorted by a date column. However I wish to ensure a minimum time interval between observations. Say for simplicity this window is 10 minutes, what this means is that if my first observation occurred at 8:05 AM then the second observation must occur at at least 8:15 AM. Any observations occurring between 8:05-8:15 AM must be dropped from the dataframe. Say without loss of generality that after dropping observations the second observation occurs at 8:17 AM. Then all observations between 8:17-8:27 AM are dropped to find the third data point and this process continues. I have a script which works but iterates over the rows one at a time and is excruciatingly slow as the dataframe has hundreds of thousands of rows. My current script (window is the minimum threshold in minutes): cur_time = df.iloc[0].Date for idx, row in df[1:].iterrows(): time_diff = (row.Date - cur_time).total_seconds() if time_diff > window*60: cur_time = row.Date else: trades_df.drop(idx, inplace=True) Can anyone think of a more speed optimized way of doing this operation? If I switch to the Date column as the index are there functions readily available for performing this function? Edit: After doing further research the function that I'm looking for is similar to df.resample(window + 'M').first(). However the issue with using this is that my data set is sparsely spaced. I.e. I don't have data for every minute, the gap between data points could be 1 second or it could be 1 month.
Ensuring a minimum time interval between successive observations in a Pandas dataframe
Given the condition you mentioned in the comments, I don't think you can vectorize the whole code. However, you can iterate through the dataset faster: window = 10 # convert dates to a numpy array (in seconds) arr = df['Date'].values.astype(float) / 1e9 # compute dense matrix using numpy broadcasting m = arr - arr[:, None] > window * 60 locs = [] # list of valid observations idx = 0 # first date is always valid while True: # append the current observation locs.append(idx) if m[idx].sum() == 0: # no more observations to check break # next valid observation idx = np.argmax(m[idx]) out = df.iloc[locs] Output: >>> out Date 0 2023-06-01 00:02:10 3 2023-06-01 00:14:20 8 2023-06-01 00:24:42 11 2023-06-01 00:35:35 13 2023-06-01 00:48:39 >>> locs [0, 3, 8, 11, 13] Minimal Reproducible Example: import numpy as np import pandas as pd np.random.seed(42) offsets = pd.to_timedelta(np.random.randint(0, 60*60, 20), unit='S') df = (pd.DataFrame({'Date': pd.Timestamp('2023-06-01') + offsets}) .sort_values('Date', ignore_index=True)) print(df) # Output Date 0 2023-06-01 00:02:10 # OK, first value is always valid 1 2023-06-01 00:05:30 2 2023-06-01 00:07:46 3 2023-06-01 00:14:20 # OK, 00:02:10 + 10min < 00:14:20 4 2023-06-01 00:18:15 5 2023-06-01 00:18:50 6 2023-06-01 00:20:38 7 2023-06-01 00:21:34 8 2023-06-01 00:24:42 # OK, 00:14:20 + 10min < 00:24:42 9 2023-06-01 00:27:18 10 2023-06-01 00:28:05 11 2023-06-01 00:35:35 # OK, 00:24:42 + 10min < 00:35:35 12 2023-06-01 00:36:09 13 2023-06-01 00:48:39 # OK, 00:35:35 + 10min < 00:48:39 14 2023-06-01 00:51:32 15 2023-06-01 00:52:51 16 2023-06-01 00:52:54 17 2023-06-01 00:56:20 18 2023-06-01 00:57:24 19 2023-06-01 00:58:27
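If the n-by-n boolean matrix above is too large to fit in memory (plausible with hundreds of thousands of rows), the same greedy rule can be applied with a plain linear scan over the timestamp array; a sketch under the same assumptions:

window = 10
arr = df['Date'].values.astype(float) / 1e9  # timestamps in seconds

locs = [0]        # the first observation is always valid
cur = arr[0]
for i in range(1, len(arr)):
    if arr[i] - cur > window * 60:   # far enough from the last kept row
        locs.append(i)
        cur = arr[i]

out = df.iloc[locs]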
76382810
76386312
I am reading some examples of SFINAE-based traits, but unable to make sense out of the one related to generic lambdas in C++17 (isvalid.hpp). I can understand that it roughly contains some major parts in order to implement a type trait such as isDefaultConstructible or hasFirst trait (isvalid1.cpp): 1. Helper functions using SFINAE technique: #include <type_traits> // helper: checking validity of f(args...) for F f and Args... args: template<typename F, typename... Args, typename = decltype(std::declval<F>()(std::declval<Args&&>()...))> std::true_type isValidImpl(void*); // fallback if helper SFINAE'd out: template<typename F, typename... Args> std::false_type isValidImpl(...); 2. Generic lambda to determine the validity: // define a lambda that takes a lambda f and returns whether calling f with args is valid inline constexpr auto isValid = [](auto f) { return [](auto&&... args) { return decltype(isValidImpl<decltype(f), decltype(args)&&... >(nullptr)){}; }; }; 3. Type helper template: // helper template to represent a type as a value template<typename T> struct TypeT { using Type = T; }; // helper to wrap a type as a value template<typename T> constexpr auto type = TypeT<T>{}; // helper to unwrap a wrapped type in unevaluated contexts template<typename T> T valueT(TypeT<T>); // no definition needed 4. Finally, compose them into isDefaultConstructible trait to check whether a type is default constructible: constexpr auto isDefaultConstructible = isValid([](auto x) -> decltype((void)decltype(valueT(x))()) { }); It is used like this (Live Demo): struct S { S() = delete; }; int main() { std::cout << std::boolalpha; std::cout << "int: " << isDefaultConstructible(type<int>) << std::endl; // true std::cout << "int&: " << isDefaultConstructible(type<int&>) << std::endl; // false std::cout << "S: " << isDefaultConstructible(type<S>) << std::endl; // false return 0; } However, some of the syntax are so complicated and I cannot figure out. My questions are: With respect to 1, as for std::declval<F>()(std::declval<Args&&>()...), does it mean that it is an F type functor taking Args type constructor? And why it uses forwarding reference Args&& instead of simply Args? With respect to 2, as for decltype(isValidImpl<decltype(f), decltype(args)&&...>(nullptr)){} , I also cannot understand why it passes forwarding reference decltype(args)&& instead of simply decltype(args)? With respect to 4, as for decltype((void)decltype(valueT(x))()), what is the purpose of (void) casting here? ((void) casting can also be found in isvalid1.cpp for hasFirst trait) All I can find about void casting is Casting to void to avoid use of overloaded user-defined Comma operator, but it seems it is not the case here. Thanks for any insights. P.S. For one who wants more detail could check C++ Templates: The Complete Guide, 2nd - 19.4.3 Using Generic Lambdas for SFINAE. The author also mentioned that some of the techniques are used widely in Boost.Hana, so I also listen to Louis Dionne's talk about it. Yet, it only helps me a little to understand the code snippet above. (It is still a great talk about the evolution of C++ metaprogramming)
Have difficulty understanding the syntax of generic lambdas for SFINAE-based traits
F is a function object callable with Args... For the sake of a mental model, picture std::declval<F>() as a "fully constructed object of type F". std::declval is there just in case F is not default-constructible and still needs to be used in unevaluated contexts. For a default-constructible type this would be equivalent: F()(std::declval<Args&&>()...); In essence it's a call to F's constructor and then a call to its operator() with forwarded Args. But imagine one type is constructible with int, another one is default-constructible, yet another one requires a string. Without some unevaluated constructor-like metafunction it would be impossible to cover all those cases. You can read more on that in Alexandrescu's Modern C++ Design: Generic Programming and Design Patterns Applied. Adding && to the argument type is effectively perfect-forwarding it. It may look obscure, but it's just a shorthand for decltype(std::forward<decltype(args)>(args)). See the implementation of std::forward and reference collapsing rules for more details. Keep in mind, though, that this snippet adds an rvalue reference that collapses to the correct one when combined with the original type; it is not a forwarding reference. As stated in the comments: the type is not really needed (it possibly cannot even be returned); its presence is just to check the expression's correctness, after which it can be discarded.
76383773
76386482
Hello, I am just trying to send an element to the bottom of its parent: <form class="flex flex-col w-full" (submit)="updatePhoto(title, description)"> <div class="w-full block"> <input type="text" class="shadow appearance-none border rounded w-full py-2 px-3 text-gray-700 leading-tight focus:outline-none focus:shadow-outline" placeholder="Photo's Title" [value]="photo.title" #title> </div> <div class="my-4 w-full"> <textarea rows="2" class="shadow appearance-none border rounded w-full py-2 px-3 text-gray-700 leading-tight focus:outline-none focus:shadow-outline resize-none" placeholder="Photo's Description" [value]="photo.description" #description></textarea> </div> <div class="grid justify-items-end mt-auto border "> <div> <button class="text-white bg-gradient-to-r from-red-400 via-red-500 to-red-600 hover:bg-gradient-to-br focus:ring-4 focus:outline-none focus:ring-red-300 dark:focus:ring-red-800 shadow-lg shadow-red-500/50 dark:shadow-lg dark:shadow-red-800/80 font-medium rounded-lg text-sm px-5 py-2.5 text-center mr-2 mb-2" (click)="deletePhoto(photo._id)"> Delete </button> <button class="text-white bg-gradient-to-r from-blue-500 via-blue-600 to-blue-700 hover:bg-gradient-to-br focus:ring-4 focus:outline-none focus:ring-blue-300 dark:focus:ring-blue-800 shadow-lg shadow-blue-500/50 dark:shadow-lg dark:shadow-blue-800/80 font-medium rounded-lg text-sm px-5 py-2.5 text-center mr-2 mb-2 "> Update </button> </div> </div> </form> So I want to align the buttons to the bottom of their parent. I'm using flex flex-col on the parent, and on the child I'm using grid justify-items-end mt-auto, but it doesn't work, so I just added a border to see the position. I'm getting this: you can see the buttons are at the top. What's wrong?
Tailwind CSS align bottom
It works fine just use grid on the outer container. There's also a lot of opportunity to streamline both the markup and the classes. <script src="https://cdn.tailwindcss.com"></script> <div class="m-4 grid max-w-4xl grid-cols-2 gap-3 rounded border p-4 shadow"> <img class="w-full" src="https://picsum.photos/id/237/900" /> <form class="flex flex-col" (submit)="updatePhoto(title, description)"> <input type="text" class="focus:shadow-outline mb-4 w-full appearance-none rounded border px-3 py-2 leading-tight text-gray-700 shadow focus:outline-none" placeholder="Photo's Title" [value]="photo.title" #title /> <textarea rows="2" class="focus:shadow-outline w-full resize-none appearance-none rounded border px-3 py-2 leading-tight text-gray-700 shadow focus:outline-none" placeholder="Photo's Description" [value]="photo.description" #description></textarea> <div class="ml-auto mt-auto"> <button class="mb-2 mr-2 rounded-lg bg-gradient-to-r from-red-400 via-red-500 to-red-600 px-5 py-2.5 text-center text-sm font-medium text-white shadow-lg shadow-red-500/50 hover:bg-gradient-to-br focus:outline-none focus:ring-4 focus:ring-red-300 dark:shadow-lg dark:shadow-red-800/80 dark:focus:ring-red-800" (click)="deletePhoto(photo._id)">Delete</button> <button class="mb-2 mr-2 rounded-lg bg-gradient-to-r from-blue-500 via-blue-600 to-blue-700 px-5 py-2.5 text-center text-sm font-medium text-white shadow-lg shadow-blue-500/50 hover:bg-gradient-to-br focus:outline-none focus:ring-4 focus:ring-blue-300 dark:shadow-lg dark:shadow-blue-800/80 dark:focus:ring-blue-800">Update</button> </div> </form> </div>
76389093
76389203
According to the Chrome console there are no problems; however, there are, as my variable is "undefined" despite being set to "0". I have used the following resources: Resource 1 Resource 2 I am trying to make it so that when the user clicks my image it adds +1 to the points variable (Game.points). Here is my code (for context purposes): <!DOCTYPE html> <script> function Game() { var points = "0" document.getElementById("myText").innerHTML = Game.points; } </script> <center> <img onclick="Game.points = Game.points + 1" src="https://static.wikia.nocookie.net/villains/images/5/5d/Frieza.png/revision/latest?cb=20200625063534" width="350px"> </center> <body onload="Game()"> <h1>"The value for number is: " <span id="myText"></span></h1>
How do I display a JavaScript variable using a span
The points variable is not a member of the Game object. To reference it, just use the points variable name. There are also a couple of changes you can make to improve the code quality: Remove the inline onclick and onload attributes. They are outdated and not good practice. Use addEventListener() to bind events within JS, instead of HTML. Set points to be a numeric value, not a string. This way there's no type coercion needed when you increment its value. Put any CSS styling in a separate stylesheet, not in the HTML const myText = document.querySelector('#myText'); const img = document.querySelector('img'); let points = 0; document.addEventListener('DOMContentLoaded', () => { updateText(); }); img.addEventListener('click', () => { points++; updateText(); }); const updateText = () => { myText.textContent = points; }; img { width: 350px; } <center> <img src="https://static.wikia.nocookie.net/villains/images/5/5d/Frieza.png/revision/latest?cb=20200625063534" /> </center> <h1> The value for number is: <span id="myText"></span> </h1>
76382922
76386636
I have very CPU heavy process and would like to use as many workers are possible in Dask. When I read the csv file using the read_csv from dask and then process the dataframe using map_partitions only one worker is used. If I use read_csv from pandas and then convert the file to a Dask dataframe, all my workers are used. See code below. Could someone explain the difference in behavior? Ideally, I would like to use read_csv from Dask so that I dont have to have a conversion step. Could anyone help me with that? import dask as d import pandas as pd def fWrapper(x): p = doSomething(x.ADDRESS, param) return(pd.DataFrame(p, columns=["ADDRESS", "DATA","TOKEN", "CLASS"])) # only use 1 worker instead of the available 8 dask_df = d.dataframe('path\to\file') dask_df.set_index(UID, npartitions = 8, drop = False) ddf2 = dask_df.map_partitions(fWrapper, meta={"ADDRESS" : object, "DATA" : object, "TOKEN" : object, "CLASS" : object}).compute() #uses all 8 workers df = pd.read_csv('path\to\file') df.set_index('UID', drop=False) dask_df2 =d.dataframe.from_pandas(df, npartitions=dask_params['df_npartitions'], sort=True) ddf3 = dask_df2.map_partitions(fWrapper, meta={"ADDRESS" : object, "DATA" : object, "TOKEN" : object, "CLASS" : object}).compute()
Dask map_partitions does not use all workers on client
The DataFrame.set_index method in both dask.dataframe and pandas returns the updated dataframe, so it must be assigned to a label. pandas does have a convenience kwarg inplace, but that's not available in dask. This means that in your snippet, the first approach should look like this: dask_df = dask_df.set_index(UID, npartitions = 8, drop = False) This will make sure that the new indexed dask dataframe has 8 partitions, so downstream work should be allocated across multiple workers.
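A minimal sketch of the corrected first approach from the question (assuming dd.read_csv is the intended reader and fWrapper is defined as in the question; note that d.dataframe('path') is not a valid constructor, since dask.dataframe is a module):

import dask.dataframe as dd

dask_df = dd.read_csv('path/to/file')
# reassign: set_index returns a new frame, it does not modify in place
dask_df = dask_df.set_index('UID', npartitions=8, drop=False)
ddf2 = dask_df.map_partitions(
    fWrapper,
    meta={"ADDRESS": object, "DATA": object, "TOKEN": object, "CLASS": object},
).compute()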
76378627
76387024
I made a program where users can keep track of their expenses by entering them into the system after creating their own account. The system requires users to fill in four fields: customer, category, price, and month. I would like the first field (customer) to automatically populate with the username of the logged-in user, so that users don't have to choose from all available customers. However, I encountered an issue where users are unable to create objects in the system. I can only create objects through the admin dashboard. When I try to create an object on the page, it throws an error message saying, RelatedObjectDoesNotExist at / User has no customer. I suspect this problem is related to the fact that users are added in the authentication and authorization section under Users in the admin page, instead of being created under Customers by my app, alongside the Finance section where the objects are stored. To summarize my two main issues: I want the first field to automatically populate with the username of the logged-in user. I want users to be able to create objects directly from the page, as they could in the past when this issue didn't occur. Thank you so much for your help. models.py: class Customer(models.Model): user = models.OneToOneField(User, null=True, on_delete=models.CASCADE) name = models.CharField(max_length=200, null=True) email = models.CharField(max_length=200, null=True, blank=True) date_created = models.DateTimeField(auto_now_add=True, null=True) # def __str__(self): # return self.name #I don't now if it's correct class Finance(models.Model): expenses_category = [ ("Saving", "Saving"), ("Food", "Food"), ("Bills", "Bills"), ("Rent", "Rent"), ("Extra", "Extra"), ] expenses_month = [ ("January", "January"), ("February", "February"), ("March", "March"), ("April", "April"), ("May", "May"), ("June", "June"), ("July", "July"), ("August", "August"), ("September", "September"), ("October", "October"), ("November", "November"), ("December", "December"), ] customer = models.ForeignKey(User, on_delete=models.CASCADE, null=True, blank=True) category = models.CharField(choices=expenses_category, max_length=200) price = models.IntegerField() month = models.CharField(choices=expenses_month, max_length=200) views.py: @csrf_exempt def registerPage(request): if request.user.is_authenticated: return redirect('home') else: form = CreateUserForm() if request.method == 'POST': form = CreateUserForm(request.POST) if form.is_valid(): # form.instance.user = request.user user = form.save() username = form.cleaned_data.get('username') group = Group.objects.get(name='customer') user.groups.add(group) messages.success(request, 'Account was created for ' + username) return redirect('login') context = {'form': form} return render(request, 'app_finance/register.html', context) def loginPage(request): username = None if request.user.is_authenticated: username = request.user.customer return redirect('home') else: if request.method == 'POST': username = request.POST.get('username') password = request.POST.get('password') user = authenticate(request, username=username, password=password) if user is not None: login(request, user) return redirect('home') else: messages.info(request, 'Username or password incorrect.') context = {} return render(request, 'app_finance/login.html', context) def logoutUser(request): logout(request) return redirect('login') def userPage(request): return render(request, 'app_finance/user.html') @login_required(login_url='login') def homeView(request): # customer = 
Customer.objects.get(id=pk) (not sure#) username = None items = Finance.objects.filter(customer_id=request.user.id) form = FinanceForm(initial={'customer': User}) if request.method == 'POST': username = request.user.customer form = FinanceForm(request.POST) # initial={'customer': user} if form.is_valid(): form.save() return HttpResponseRedirect('/') else: form = FinanceForm() return render(request, 'app_finance/home.html', {'form': form, 'items': items}) forms.py: class CustomerForm(ModelForm): class Meta: model = Customer fields = '__all__' exclude = ['user'] class CreateUserForm(UserCreationForm): class Meta: model = User fields = ['username', 'email', 'password1', 'password2'] class FinanceForm(ModelForm): class Meta: model = Finance fields = '__all__' templates/home.html: <div> <span>Hello, {{ request.user }}</span> <br> <span><a class="hello-msg" href="{% url 'logout' %}">Logout</a></span> </div> <form action="" method="post"> {% csrf_token %} {{ form }} <!-- {{ form }} --> <input type="submit" value="Submit"> </form> <br> <div class="row"> <div class="col-md"> <div class="card card-body"> <h1>Expenses</h1> </div> <div class="card card-body"> <table class="table"> <tr> <th>User</th> <th>Category</th> <th>Price</th> <th>Month</th> </tr> {% for i in items %} <tr> <td>{{ i.customer }}</td> <td>{{ i.category }}</td> <td>{{ i.price }}</td> <td>{{ i.month }}</td> <td>&nbsp;&nbsp;</td> <td><a class="btn btn-sm btn-info" href="">Update</a></td> <td><a class="btn btn-sm btn-danger" href="">Delete</a></td> </tr> {% endfor %} </table> </div> </div> </div> Thanks again for helping.
User can't create objects on the page
This is the solution I found, and it works perfectly. I made two changes (note that Cast and IntegerField used below need to be imported: from django.db.models import IntegerField and from django.db.models.functions import Cast): views.py: @login_required(login_url='login') def homeView(request): items = Finance.objects.filter(customer=request.user).order_by(Cast('month', IntegerField())).reverse() if request.method == 'POST': form = FinanceForm(request.POST, request=request) # Pass the request object to the form if form.is_valid(): finance_obj = form.save(commit=False) finance_obj.customer = request.user finance_obj.save() return redirect('home') else: form = FinanceForm(request=request) # Pass the request object to the form context = {'form': form, 'items': items} return render(request, 'app_finance/home.html', context) forms.py: class FinanceForm(ModelForm): def __init__(self, *args, **kwargs): self.request = kwargs.pop('request') # Retrieve the request object super().__init__(*args, **kwargs) self.fields['customer'].initial = self.request.user
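A related design note: since the view now assigns finance_obj.customer = request.user itself, a simpler variant (a minimal sketch of my own, not part of the original answer; it assumes the Finance model is importable from the app's models module) is to exclude the customer field from the form entirely, so no request plumbing is needed:

```python
from django.forms import ModelForm
from .models import Finance  # assumed import path

class FinanceForm(ModelForm):
    class Meta:
        model = Finance
        exclude = ['customer']  # the view sets customer from request.user
```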
76387813
76387951
I have this working code to process a single file: import pandas as pd import pygmt #import table df = pd.read_table("file1.txt", sep=" ", names=['X', 'Y', 'Z'] ) #min/max Xmin = df['X'].min() Xmax = df['X'].max() Ymin = df['Y'].min() Ymax = df['Y'].max() #print(Xmin, Xmax) #print(Ymin, Ymax) #gridding with pyGMT grid = pygmt.surface(data=df, spacing=1, region=[Xmin, Xmax, Ymin, Ymax]) #print(grid) #export grid.to_netcdf('file1.nc') Now I want to repeat this code for all *.txt files in a directory. How can I do that? I tried writing a loop like: for file in glob.glob("*.txt"): But how can I make the respective input (.txt) and output (.nc) have the same name?
Find Min/max for X and Y, then interpolate for 5000 txt files with python
As mentioned in the comments, you can also do this by iterating over all the .txt filenames and swapping their .txt extension for .nc while keeping the base names. import glob import pandas as pd import pygmt filenames_in = glob.glob("*.txt") for filename_in in filenames_in: filename_out = filename_in.replace('.txt', '.nc') # YOUR CODE # import table df = pd.read_table(filename_in, sep=" ", names=['X', 'Y', 'Z']) # min/max Xmin = df['X'].min() Xmax = df['X'].max() Ymin = df['Y'].min() Ymax = df['Y'].max() # print(Xmin, Xmax) # print(Ymin, Ymax) # gridding with pyGMT grid = pygmt.surface(data=df, spacing=1, region=[Xmin, Xmax, Ymin, Ymax]) # print(grid) # export grid.to_netcdf(filename_out)
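If a filename can contain ".txt" elsewhere in its name, str.replace can misfire; a hedged alternative (just a sketch, with the per-file pygmt processing elided) is pathlib.Path.with_suffix, which only touches the final extension:

```python
from pathlib import Path

# Iterate over *.txt in the current directory and derive the .nc output path.
for path_in in Path(".").glob("*.txt"):
    path_out = path_in.with_suffix(".nc")  # replaces only the trailing extension
    # ... read path_in with pandas, grid with pygmt.surface, ...
    # ... then grid.to_netcdf(path_out) as in the code above ...
    print(path_in, "->", path_out)
```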
76383379
76387122
I am learning x86-64 assembly with an online course and the course is horribly unclear in detail. I've searched online and read several SO questions but couldn't get an answer. I tried to figure out how to calculate binary multiplication by hand, but I am stuck with imul . Given this example of binary multiplication, 11111111 * 00000011, it can be viewed as 255 * 3 unsigned or -1 * 3 signed. 255*3 mov al, 255 mov bl, 3 mul bl this is easy and here's how I calculate by hand, just like decimal multiplication: 11111111 x 00000011 -------------- 11111111 11111111 -------------- 1011111101 The result overflows, the upper half is 10 in ah and the lower half is 11111101 in al. My manual calculation matches the program result. -1*3 when it comes to signed, mov al, -1 mov bl, 3 imul bl the program result is 11111111 in ah and 11111101 in al. How can I calculate this result by hand? I was told that sign extension is involved in imul, but I really don't know how it works here. I am using SASM IDE and NASM Assembler.
How do you manually calculate imul -1 * 3?
Honestly, I can't fully understand the other two answers. They are overly complicated for me. I just need a dumb, simple and universal rule, so I'd like to just pick up what works for me. The result is equivalent to sign-extending (imul) or zero-extending (mul) both inputs to the destination width and then doing a non-widening multiply @Peter Cordes To manually compute there are several approaches: you can sign extend both inputs to 16-bits and do 16 × 16 keeping only the low 16-bits @Erik Eidt I tried this out and verified it, and the rule works for me. -1*3, sign extended: 11111111 11111111 x00000000 00000011 ------------------- 11111111 11111111 111111111 1111111 ------------------- 1011111111 11111101 Keeping the low 16 bits, I get the correct result 11111111(ah) 11111101(al). Try a 4-bit example: -2 * 3, 1110 imul 0011, sign extended: 1111 1110 x0000 0011 ----------- 1111 1110 11111 110 ----------- 101111 1010 Keeping the lower 8 bits, the result is 1111 1010, i.e. -6. Before, I wasn't sure how sign extension works in imul; now I get it, and it's easy to verify. Btw, if you find manual calculation tedious for some examples (e.g. the 4-bit -7 * -1, 1001 x 1111, sign extended to 1111 1001 x 1111 1111, with many lines to add), you can use the Windows Calculator (programmer mode) and verify the result very quickly.
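To double-check the rule without adding binary columns by hand, here is a small Python sketch (my own verification aid, not part of the original calculation) that mimics extend-then-multiply:

```python
def widening_mul(a, b, bits, signed):
    """Extend both inputs to 2*bits (sign- or zero-extension),
    multiply, and keep only the low 2*bits of the product."""
    mask = (1 << bits) - 1
    def extend(x):
        x &= mask
        if signed and (x >> (bits - 1)):  # top bit set -> negative value
            x -= 1 << bits
        return x
    return (extend(a) * extend(b)) & ((1 << 2 * bits) - 1)

print(f"{widening_mul(255, 3, 8, signed=False):016b}")  # 0000001011111101: mul -> ah=00000010, al=11111101
print(f"{widening_mul(-1, 3, 8, signed=True):016b}")    # 1111111111111101: imul -> ah=11111111, al=11111101
print(f"{widening_mul(-2, 3, 4, signed=True):08b}")     # 11111010, i.e. -6, the 4-bit example
```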
76388975
76389211
The circular dependency between self.buttons and self.radiobuttons has been impossible for me to solve class Program(tk.Tk): def __init__(self, title, size): super().__init__() self.title(title) self.geometry(f"{size[0]}x{size[1]}") self.frame = tk.Frame(self, background = "#D3D3D3") self.frame.pack(expand = 1, fill = tk.BOTH) self.entries = Entries(self) self.output = Output(self) self.buttons = Buttons(self, self.entries, self.output, self.radiobuttons) # here self.radiobuttons = (self, self.buttons) # here self.filemenu = FileMenu(self) self.mainloop() I am trying to use functools among other things to fix my problem but I can't seem to fix it. I could always just move the entirety of one class into the other but it would make using classes redundant
How do I eliminate this circular dependency
You can add another method to Buttons and pass self.radiobuttons to it after self.radiobuttons has been created. Below is an example: class Buttons(ttk.Frame): def __init__(self, master, entries, output): super().__init__(master) self.entries = entries self.output = output # setter method to inject radiobuttons once it exists def set_radiobuttons(self, radiobuttons): self.radiobuttons = radiobuttons # do whatever you want with radiobuttons class Radiobuttons(ttk.Frame): def __init__(self, master, buttons): super().__init__(master) self.buttons = buttons class Program(tk.Tk): def __init__(self, title, size): super().__init__() self.title(title) self.geometry(f"{size[0]}x{size[1]}") self.frame = tk.Frame(self, background = "#D3D3D3") self.frame.pack(expand = 1, fill = tk.BOTH) self.entries = Entries(self) self.output = Output(self) self.buttons = Buttons(self, self.entries, self.output) self.radiobuttons = Radiobuttons(self, self.buttons) self.buttons.set_radiobuttons(self.radiobuttons) # pass self.radiobuttons to class Buttons self.filemenu = FileMenu(self) if __name__ == "__main__": app = Program("Hello World", (600,400)) app.mainloop() # call mainloop() here instead of inside __init__()
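Another option, sketched below under the assumption that both widgets share the same parent: skip the injection entirely and resolve the sibling lazily through self.master when an event fires, since all of Program's attributes exist by then (the on_click handler here is hypothetical):

```python
import tkinter as tk
from tkinter import ttk

class Buttons(ttk.Frame):
    def __init__(self, master):
        super().__init__(master)
        ttk.Button(self, text="Go", command=self.on_click).pack()
        self.pack()

    def on_click(self):
        # Program.__init__ finished long before any click arrives,
        # so the sibling attribute is guaranteed to exist here.
        print(self.master.radiobuttons)

class Radiobuttons(ttk.Frame):
    def __init__(self, master):
        super().__init__(master)
        self.pack()

class Program(tk.Tk):
    def __init__(self):
        super().__init__()
        self.buttons = Buttons(self)
        self.radiobuttons = Radiobuttons(self)

if __name__ == "__main__":
    Program().mainloop()
```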
76387859
76387955
I faced an issue when I had to terraform import some role_assignment resources, especially regarding the APP CONFIG DATA READER role assignment, in Terraform. The problem I had to solve was due to an evolution of our IaC to make the terraform plan more readable and explicit. Here is the code for the role assignment that changed: module "my_role_assignment_slot_to_app_config" { for_each = local.map_slot_app_config source = "../../resourceModules/role_assignment" scope = each.value.config_id role_definition_names = ["Reader","App Configuration Data Reader"] principal_id = each.value.slot_guid } with the following module for role assignments: resource "azurerm_role_assignment" "my_role_assignment" { for_each = toset(var.role_definition_names) scope = var.scope role_definition_name = each.value principal_id = var.principal_id } This code would produce the following, more readable plan: But as you can see, the index for the azurerm_role_assignment.my_role_assignment contains spaces. This was preventing us from running terraform import for the role assignment (as it had been created manually before the IaC was written) using a PowerShell script in an AzureCli task on Azure DevOps: - task: AzureCli@2 displayName: Runs tfImport.ps1 condition: and(succeeded(), eq(variables['toUpdate.scriptExists'], 'true')) # test script presence name: tfImport.ps1 inputs: azureSubscription: ${{ parameters.serviceConnection }} scriptType: ps scriptLocation: InlineScript inlineScript: | $Env:ARM_CLIENT_ID = "$Env:servicePrincipalId" $Env:ARM_CLIENT_SECRET = "$Env:servicePrincipalKey" $Env:ARM_TENANT_ID = "$Env:tenantId" $Env:ARM_SUBSCRIPTION_ID = az account list --query "[?isDefault].id" -o tsv $Env:TF_LOG = "${{ parameters.TF_LOG }}" terraform init ` -migrate-state ` -backend-config="resource_group_name=${{ parameters.storageAccountResourceGroup }}"` -backend-config="storage_account_name=${{ parameters.storageAccount}}" ` -backend-config="key=${{ parameters.storageContainer }}" ` -backend-config="container_name=${{ parameters.stateBlobContainer }}" # runs my tfImport.ps1 script here ./tfImport.ps1 workingDirectory: $(pipeline_artefact_folder_extract)/ addSpnToEnvironment: true failOnStderr: false continueOnError: true The script I used had the following terraform import line: terraform import 'module.my_role_assignment_slot_to_app_config[\"sit03_z-adf-ftnd-shrd-npd-ew1-cfg01\"].azurerm_role_assignment.my_role_assignment[\"App Configuration Data Reader\"]' /subscriptions/**********/resourceGroups/*********/providers/Microsoft.AppConfiguration/configurationStores/z-adf-ftnd-shrd-npd-ew1-cfg01/providers/Microsoft.Authorization/roleAssignments/<role_assignment_id> and so I got the following error (ID data removed): After some research, I found the following link, which explains why and gave me the start of a solution: https://github.com/hashicorp/terraform/issues/25116 But I had to go further and find a way to use my PowerShell script without the startProcess method. 
And as I also had to get my role_assignment resourceId from its PrincipalId (we can get the PrincipalId of resources that have the 'App Configuration Data Reader' role on the app_config using the following): # role assignment over a specific scope (such as app_config) $rsRolesCfg = az role assignment list --scope /subscriptions/******/resourceGroups/*******/providers/Microsoft.AppConfiguration/configurationStores/******-cfg01 | ConvertFrom-Json $myRole = $rsRolesCfg | Where-Object roleDefinitionName -eq 'App Configuration Data Reader' | Where-Object id -like "*<app_config_resourceId>" ## principalId is the id of the object that gets the role over the scope! # Gets resourceId from PrincipalId / ObjectId (without the '<>' in the body, of course :) ): $resourceId = (az rest --method POST --url 'https://graph.microsoft.com/v1.0/directoryObjects/getByIds' --headers 'Content-Type=application/json' --body '{\"ids\":[\"<PrincipalId>\"]}' | ConvertFrom-Json | Select-Object value).value.alternativeNames[1] Solution from How to Get Azure AD Object by Object ID Using Azure CLI (thanks a lot!). I had to test it locally in a PowerShell terminal... and it did not work as expected. So I changed the script a little and got the solution (see the next post for the solution to my problem).
Maps containing keys with spaces are not properly validated when those keys are used as resource names
So I had to test my get_ResourceId method locally in a PowerShell VS Code terminal, which did not accept the above code (as the spaces were badly interpreted by PowerShell). After a quick search, which explained that the backtick "`" is the escape character in PowerShell, I tested the following, which works and gives me the expected resourceId for the role_assignment: $rsRolesCfg = az role assignment list --scope /subscriptions/************/resourceGroups/******/providers/Microsoft.AppConfiguration/configurationStores/<app_configuration_name> | ConvertFrom-Json ($rsRolesCfg | Where-Object roleDefinitionName -eq 'App Configuration Data Reader') | ForEach-Object {$local=$_.principalId; (az rest --method POST --url 'https://graph.microsoft.com/v1.0/directoryObjects/getByIds' --headers 'Content-Type=application/json' --body "{\`"ids\`":[\`"$local\`"]}" | ConvertFrom-Json | Select-Object value).value.alternativeNames[1] } So the use of "`" was the solution to my problem, and I tried to use it in my terraform import script (in the first post), where it also works fine: terraform import "module.my_role_assignment_slot_to_app_config[\`"sit03_z-adf-ftnd-shrd-npd-ew1-cfg01\`"].azurerm_role_assignment.my_role_assignment[\`"App Configuration Data Reader\`"]" /subscriptions/*****/resourceGroups/******/providers/Microsoft.AppConfiguration/configurationStores/z-adf-ftnd-shrd-npd-ew1-cfg01/providers/Microsoft.Authorization/roleAssignments/<role_assignment_id> But I also had to change the YAML task to use pscore rather than ps, like the following: - task: AzureCli@2 displayName: Runs tfImport.ps1 condition: and(succeeded(), eq(variables['toUpdate.scriptExists'], 'true')) # test script presence name: tfImport.ps1 inputs: azureSubscription: ${{ parameters.serviceConnection }} scriptType: **pscore** After that, the terraform import script ran successfully! I've used the same "title" for this Stack Overflow question/solution as the one on GitHub, so people who are looking for this solution can easily find it together with the question... At least, I hope so :-P Thanks for reading!
76378706
76387141
In C++, I can't find a way to appropriately terminate a WMI session with a remote server. Any attempt to release the IWbemServices pointer throws an exception; the TCP connection to the server remains established until the process exits (it's open after the last CoUninitialize call). This problem (thrown exception) does not occur when connecting to the local machine. I've looked at a similar question asked here, but the solution from Microsoft (retrieving the IUnknown pointer and releasing it first) didn't solve the issue. Here's the code (error checking has been omitted for readability): HRESULT hRes = S_OK; IWbemLocator* pWbemLocator = NULL; IWbemServices* pWbemServices = NULL; // these three pointers are already initialized... PWCHAR wcUser; // L"theuser" PWCHAR wcPass; // L"thepassword" PWCHAR wcAuth; // L"ntlmdomain:THEDOMAIN" std::wstring wstrNsPath = L"\\\\remoteserver\\ROOT\\CIMV2"; hRes = CoInitializeEx(NULL, COINIT_MULTITHREADED); hRes = CoCreateInstance(CLSID_WbemLocator, NULL, CLSCTX_INPROC_SERVER, IID_IWbemLocator, (LPVOID*)&pWbemLocator); // this returns S_OK :) hRes = pWbemLocator->ConnectServer((BSTR)wstrNsPath.c_str(), (BSTR)wcUser, (BSTR)wcPass, NULL, WBEM_FLAG_CONNECT_USE_MAX_WAIT, (BSTR)wcAuth, NULL, &pWbemServices); hRes = CoSetProxyBlanket(pWbemServices, RPC_C_AUTHN_DEFAULT, RPC_C_AUTHZ_DEFAULT, COLE_DEFAULT_PRINCIPAL, RPC_C_IMP_LEVEL_IMPERSONATE, RPC_C_AUTHN_LEVEL_DEFAULT, NULL, // domain info already specified in 'wcAuth' EOAC_NONE); // do some queries.. // cleanup pWbemServices->Release(); // on remote sessions, this throws an exception pWbemLocator->Release(); CoUninitialize(); The exception is displayed in the debug output (Visual Studio): onecore\com\combase\dcomrem\call.cxx(1234)\combase.dll!0000ABCDEFABCDEF: (caller: 0000FEDCBAFEDCBA) ReturnHr(1) tid(4321) 80070005 Access is denied. Is this expected behavior? Should the connection between the client and server not be terminated once the session is released? I attempted to follow MSDN's advice and added the following after the CoSetProxyBlanket() call in the code above. It didn't change anything. IUnknown* pUnknown = NULL; pWbemServices->QueryInterface(IID_IUnknown, (LPVOID*)&pUnknown); if (pUnknown) { hRes = CoSetProxyBlanket(pUnknown, RPC_C_AUTHN_DEFAULT, RPC_C_AUTHZ_DEFAULT, COLE_DEFAULT_PRINCIPAL, RPC_C_IMP_LEVEL_IMPERSONATE, RPC_C_AUTHN_LEVEL_DEFAULT, NULL, EOAC_NONE); pUnknown->Release(); } Any advice is greatly appreciated! EDIT So after capturing session packets, it would appear that setting the proxy security with pAuthInfo == NULL causes the request to be made by the current logged on user of the client machine. It ignores the credentials that I provided when calling ConnectServer. I'm aware that the COAUTHIDENTITY structure allows you to pass the correct credentials to CoSetProxyBlanket, but I'd like to avoid having to input the domain as a separate variable. In other words, is there a way that this information can be extracted using the wcAuth when the request is made to a remote server? If so, how could I distinguish local vs. remote requests? Here is the output from Wireshark that led me to believe this is the problem (see packet 1617): No. 
Time Source Destination Protocol Length Info 1615 162.221354 [CLIENT_IP] [SERVER_IP] DCERPC 174 Alter_context: call_id: 8, Fragment: Single, 1 context items: IRemUnknown2 V0.0 (32bit NDR), NTLMSSP_NEGOTIATE 1616 162.228517 [SERVER_IP] [CLIENT_IP] DCERPC 366 Alter_context_resp: call_id: 8, Fragment: Single, max_xmit: 5840 max_recv: 5840, 1 results: Acceptance, NTLMSSP_CHALLENGE 1617 162.229396 [CLIENT_IP] [SERVER_IP] DCERPC 612 AUTH3: call_id: 8, Fragment: Single, NTLMSSP_AUTH, User: .\[client_user] 1618 162.229495 [CLIENT_IP] [SERVER_IP] IRemUnknown2 182 RemRelease request Cnt=1 Refs=5-0 1619 162.235567 [SERVER_IP] [CLIENT_IP] TCP 60 49669 β†’ 59905 [ACK] Seq=1606 Ack=4339 Win=64768 Len=0 1620 162.235567 [SERVER_IP] [CLIENT_IP] DCERPC 86 Fault: call_id: 8, Fragment: Single, Ctx: 0, status: nca_s_fault_access_denied
(WMI) IWbemServices::Release() throws "Access Denied" exception when connected to remote machine
I was able to resolve the issue. If you are not using the current logged-on user's token, you must set the pAuthInfo parameter of CoSetProxyBlanket to a valid COAUTHIDENTITY structure pointer. The docs for IWbemLocator::ConnectServer actually state that it's best practice to include the domain in the strUser parameter... and that if you do so, you must pass the authority string as NULL. One thing to note is that if ConnectServer succeeds, you don't have to go crazy with sanitizing the username string for correctness; the login either worked or it didn't (and breaks/throws an exception, depending on how you handle errors). In other words, just search the string for the domain delimiters ('\\' or '@') and split it appropriately into domain name and username.
76383760
76387258
I've been going through react-router's tutorial, and I've been following it to the letter as far as I'm aware. I'm having some issues with the url params in loaders segment. The static contact code looks like this export default function Contact() { const contact = { first: "Your", last: "Name", avatar: "https://placekitten.com/g/200/200", twitter: "your_handle", notes: "Some notes", favorite: true, } And when it loads, it looks like this. That works just fine, however, the tutorial then tells me to change that code so that I use data that's loaded in instead. The code now looks like this import { Form, useLoaderData } from "react-router-dom"; import { getContact } from "../contacts" export async function loader({ params }) { const contact = await getContact(params.contactid); return {contact} } export default function Contact() { const { contact } = useLoaderData(); According to the tutorial, it should just load in an empty contact that looks like this but instead, every time I try to open one of the new contacts, it kicks up an error saying React Router caught the following error during render TypeError: contact is null The actual line of code this error points to is in the return segment of the contact component, which looks like this return ( <div id="contact"> <div> <img key={contact.avatar} src={contact.avatar || null} /> </div> <div> <h1> {contact.first || contact.last ? ( <> {contact.first} {contact.last} </> ) : ( <i>No Name</i> )}{" "} <Favorite contact={contact} /> </h1> {contact.twitter && ( <p> <a target="_blank" href={`https://twitter.com/${contact.twitter}`} > {contact.twitter} </a> </p> )} {contact.notes && <p>{contact.notes}</p>} <div> <Form action="edit"> <button type="submit">Edit</button> </Form> <Form method="post" action="destroy" onSubmit={(event) => { if ( !confirm( "Please confirm you want to delete this record." ) ) { event.preventDefault(); } }} > <button type="submit">Delete</button> </Form> </div> </div> </div> ); } Pretty much anywhere contacts is called gets an error. So, anyone have any idea what I'm doing wrong here? To my knowledge, I've been following their guide to the letter and it seems like it should be able to handle contacts not having any data, but it's not. 
These are the pieces of my code that are supposed to be working together to render a contact, or at least the pertinent parts The router, this is the main file, the only part missing is the part where it's rendered import * as React from "react"; import * as ReactDOM from "react-dom/client"; import { createBrowserRouter, RouterProvider, } from "react-router-dom"; import "./index.css"; import Root, { loader as rootLoader, action as rootAction } from "./routes/root"; import ErrorPage from "./error-page"; import Contact, { loader as contactLoader } from "./routes/contact" const router = createBrowserRouter([ { path: "/", element: <Root />, errorElement: <ErrorPage />, loader: rootLoader, action: rootAction, children: [ { path: "contacts/:contactID", element: <Contact />, loader: contactLoader } ] } These are the functions in the root file that are called when a new contact is made and when it needs to be displayed import { Outlet, Link, useLoaderData, Form } from "react-router-dom" import { getContacts, createContact } from "../contacts" export async function action() { const contact = await createContact(); console.log("Contact made") return {contact} } export async function loader(){ const contacts = await getContacts(); return {contacts}; } This is the createContacts function that gets called when a contact is created, and this is the getContacts function export async function createContact() { await fakeNetwork(); let id = Math.random().toString(36).substring(2, 9); let contact = { id, createdAt: Date.now() }; let contacts = await getContacts(); contacts.unshift(contact); await set(contacts); return contact; } export async function getContact(id) { await fakeNetwork(`contact:${id}`); let contacts = await localforage.getItem("contacts"); let contact = contacts.find(contact => contact.id === id); return contact ?? null; } This is the contacts.jsx file where things are currently going wrong. When a new contact is made, it's going to be empty, which I imagine is the source of the problem, but there are checks here to deal with that, or at least there are supposed to be. import { Form, useLoaderData } from "react-router-dom"; import { getContact } from "../contacts" export async function loader({ params }) { const contact = await getContact(params.contactid); return { contact } } export default function Contact() { const { contact } = useLoaderData(); return ( <div id="contact"> <div> <img // these next two lines are where the errors typically start, // although it seems to extend down to any instance where contact // gets called. key={contact.avatar} src={contact.avatar || null} /> </div> <div> <h1> {contact.first || contact.last ? ( <> {contact.first} {contact.last} </> ) : ( <i>No Name</i> )}{" "} <Favorite contact={contact} /> </h1> {contact.twitter && ( <p> <a target="_blank" href={`https://twitter.com/${contact.twitter}`} > {contact.twitter} </a> </p> )} {contact.notes && <p>{contact.notes}</p>} <div> <Form action="edit"> <button type="submit">Edit</button> </Form> <Form method="post" action="destroy" onSubmit={(event) => { if ( !confirm( "Please confirm you want to delete this record." ) ) { event.preventDefault(); } }} > <button type="submit">Delete</button> </Form> </div> </div> </div> ); }
Going through the react-router tutorial and rather than using default or null values when loading an empty object, it kicks up an error
There are some subtle, but detrimental, casing issues in the route path params. The Contacts component's route path param is declared as contactID. const router = createBrowserRouter([ { path: "/", element: <Root />, errorElement: <ErrorPage />, loader: rootLoader, action: rootAction, children: [ { path: "contacts/:contactID", // <-- "contactID" element: <Contact />, loader: contactLoader, }, ], }, ]); The contact loader is referencing a contactid path parameter. export async function loader({ params }) { const contact = await getContact(params.contactid); // <-- "contactid" return { contact }; } As such, the loader function is unable to find a match and returns null to the Contact component. An error is thrown in the UI when attempting to access properties of the null reference. Any valid Javascript identifier will work as the name of the route path parameter, but they should all be in agreement. Casing matters in variable names in Javascript. The common convention in variable names is to use camelCasing, e.g. contactId. const router = createBrowserRouter([ { path: "/", element: <Root />, errorElement: <ErrorPage />, loader: rootLoader, action: rootAction, children: [ { path: "contacts/:contactId", element: <Contact />, loader: contactLoader, }, ], }, ]); export async function loader({ params }) { const contact = await getContact(params.contactId); return { contact }; }
76387927
76387999
I would like to display either one of two images in a modal based on an API response...unfortunately, the API returns a long string array. I need to be able to determine if the word "Congratulations" is in this array. So far, I've tried a couple of basic things: Here is an example API response: { id: 12, message: ["1 bonus point!", "Congratulations", "You have leveled up!"] } console.log(response) // the full response prints to the console, no problem console.log(response?.message) //undefined console.log(reponse.message) //undefined console.log(response["message"]) //undefined I want to be able to do something like this: setSuccess(response["message"].contains("Congratulations")) I'm sure it will be some small syntax thing, but I've been banging my head against the wall. Any help is appreciated, let me know what I should try!
Checking the values of JSON response in React Native
I would join the array and then call includes. Note that data here is already a plain object, so there is no .json() call involved: const data = { id: 12, message: ["1 bonus point!", "Congratulations", "You have leveled up!"] } const containTerm = data.message.join().includes('Congratulations') console.log('Is term in array', containTerm)
76383873
76389257
I have a DataFrame consisting of the following columns: VP-ID, MotivA_MotivatorA_InnerDriverA_PR, MotivA_MotivatorA_InnerDriverB_PR, MotivA_MotivatorB_InnerDriverA_PR, MotivA_MotivatorB_InnerDriverB_PR, MotivA_MotivatorC_InnerDriverA_PR, MotivA_MotivatorC_InnerDriverB_PR, MotivA_MotivatorD_InnerDriverA_PR, MotivA_MotivatorD_InnerDriverB_PR, ... MotivC_MotivatorA_InnerDriverA_PR, MotivC_MotivatorA_InnerDriverB_PR, MotivC_MotivatorB_InnerDriverA_PR, MotivC_MotivatorB_InnerDriverB_PR, MotivC_MotivatorC_InnerDriverA_PR, MotivC_MotivatorC_InnerDriverB_PR, MotivC_MotivatorD_InnerDriverA_PR, MotivC_MotivatorD_InnerDriverB_PR. Behind the designations MotivatorA etc. are of course correct terms (column names). Here, "PR" stands for Percentile Rank (0-100). A graphic represents a motive, which consists of four motivators with two variations, which then have the values from InnerDriverA_PR and InnerDriverB_PR. The final result should look like this: Is this a "Football Field Chart"? How can I implement this graph with Matplotlib? Minimal reproducible example: import pandas as pd import random import matplotlib.pyplot as plt import seaborn as sns # Create example data columns = [ 'MotivA_MotivatorA_InnerDriverA_PR', 'MotivA_MotivatorA_InnerDriverB_PR', 'MotivA_MotivatorB_InnerDriverA_PR', 'MotivA_MotivatorB_InnerDriverB_PR', 'MotivA_MotivatorC_InnerDriverA_PR', 'MotivA_MotivatorC_InnerDriverB_PR', 'MotivA_MotivatorD_InnerDriverA_PR', 'MotivA_MotivatorD_InnerDriverB_PR', 'MotivB_MotivatorA_InnerDriverA_PR', 'MotivB_MotivatorA_InnerDriverB_PR', 'MotivB_MotivatorB_InnerDriverA_PR', 'MotivB_MotivatorB_InnerDriverB_PR', 'MotivB_MotivatorC_InnerDriverA_PR', 'MotivB_MotivatorC_InnerDriverB_PR', 'MotivB_MotivatorD_InnerDriverA_PR', 'MotivB_MotivatorD_InnerDriverB_PR' ] df = pd.DataFrame(columns=columns) for i in range(1, 6): df.loc[f'Subject_{i}'] = [random.randint(0, 100) for _ in range(len(columns))] #──────────────────────────────────────────────── def create_horizontal_bar_chart(df, proband): motives = sorted(set(col.split('_')[0] for col in df.columns)) for motive in motives: columns = [col for col in df.columns if col.startswith(motive)] data = df.loc[proband, columns].reset_index() data['Motivator'] = data['index'].apply(lambda x: x.split('_')[1]) data['InnerDriver'] = data['index'].apply(lambda x: x.split('_')[2]) data['Value'] = data[proband] data = data.drop(['index', proband], axis=1) plt.figure(figsize=(10, 6)) sns.barplot(x='Value', y='Motivator', hue='InnerDriver', data=data) plt.title(f'{proband} - {motive}') plt.show() create_horizontal_bar_chart(df, 'Subject_1') However, this creates the motivators as extra bars and is still far from how I would want it, as in the example above.
Creating a "FootBall Field Chart"
It's a spine chart. The issue with that is it'll not line up your bars. So to do that, you need to get creative: import pandas as pd import random import matplotlib.pyplot as plt import seaborn as sns import numpy as np columns = [ 'MotivA_MotivatorA_InnerDriverA_PR', 'MotivA_MotivatorA_InnerDriverB_PR', 'MotivA_MotivatorB_InnerDriverA_PR', 'MotivA_MotivatorB_InnerDriverB_PR', 'MotivA_MotivatorC_InnerDriverA_PR', 'MotivA_MotivatorC_InnerDriverB_PR', 'MotivA_MotivatorD_InnerDriverA_PR', 'MotivA_MotivatorD_InnerDriverB_PR', 'MotivB_MotivatorA_InnerDriverA_PR', 'MotivB_MotivatorA_InnerDriverB_PR', 'MotivB_MotivatorB_InnerDriverA_PR', 'MotivB_MotivatorB_InnerDriverB_PR', 'MotivB_MotivatorC_InnerDriverA_PR', 'MotivB_MotivatorC_InnerDriverB_PR', 'MotivB_MotivatorD_InnerDriverA_PR', 'MotivB_MotivatorD_InnerDriverB_PR' ] df = pd.DataFrame(columns=columns) for i in range(1, 6): # 5 Probanden df.loc[f'Subject_{i}'] = [random.randint(0, 100) for _ in range(len(columns))] def create_horizontal_bar_chart(df, proband): motives = sorted(set(col.split('_')[0] for col in df.columns)) for motive in motives: columns = [col for col in df.columns if col.startswith(motive)] data = df[df.index == proband].reset_index() # Rename the new column to "Subject" data = data.rename(columns = {"index": "Subject"}) # Melt the dataframe data_melted = data.melt(id_vars=["Subject"], var_name="Motiv_Motivator_InnerDriver_PR", value_name="PR") # Create new columns from the "Motiv_Motivator_InnerDriver_PR" column data_melted[['Motiv', 'Motivator', 'InnerDriver', '_']] = data_melted['Motiv_Motivator_InnerDriver_PR'].str.split("_",expand=True) data_melted = data_melted[data_melted['Motiv'] == motive] # Drop unnecessary columns data_melted = data_melted.drop(columns=['Motiv_Motivator_InnerDriver_PR', '_']) # Reorder the columns data_melted = data_melted[['Subject', 'Motiv', 'Motivator', 'InnerDriver', 'PR']] # Pivot the table data_pivot = pd.pivot_table(data_melted, values='PR', index=['Subject', 'Motiv', 'Motivator'], columns='InnerDriver', aggfunc='first').reset_index() data_pivot['InnerDriverA'] = -data_pivot['InnerDriverA'] data_pivot = data_pivot.sort_values('Motivator', ascending=False).reset_index(drop=True) fig, ax = plt.subplots(figsize=(10, 8)) # Stacked bar chart data_pivot.plot(kind='barh', x='Motivator', y=['InnerDriverA', 'InnerDriverB'], ax=ax, stacked=True, color='#5fba7d', alpha=0.5, legend=False) ax.set_xlabel('PR') ax.axvline(0, color='grey', linewidth=4) # Add a vertical line at x=0 ax.set_xlim(-100, 100) # set x limit as -100 to 100 # Add horizontal grid lines every 25 units ax.set_xticks(range(-100, 101, 25)) ax.grid(True, axis='x', linestyle='dotted') # Adjust the x-axis tick labels to display all values as positive ax.set_xticklabels([abs(x) for x in ax.get_xticks()], fontsize=16, color='white') # Add y-axis labels yticks = np.arange(len(data_pivot)) yticklabels_left = [f'{motive} InnerDriverA' for motive in data_pivot['Motivator']] yticklabels_right = ['InnerDriverB'] * len(data_pivot) ax.set_yticks(yticks) ax.set_yticklabels(yticklabels_left, va='center', ha='right', fontsize=14, color='black') # Calculate y-tick positions for right-side labels split = len(data_pivot) intervals = np.linspace(0, 1, split + 1) # Split the number line into specified number of intervals yticks_right = (intervals[:-1] + intervals[1:]) / 2 # Compute the midpoints # Add right-side y-axis labels ax2 = ax.twinx() ax2.set_yticks(yticks_right) ax2.set_yticklabels(yticklabels_right, va='center', ha='left', fontsize=14, color='black') 
# Remove x and y tick marks ax.tick_params(axis='x', which='both', bottom=False, top=False) ax.tick_params(axis='y', which='both', left=False, right=False) ax2.tick_params(axis='y', which='both', left=False, right=False) # Remove border around the axes ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.spines['bottom'].set_visible(False) ax.spines['left'].set_visible(False) # Remove border around the axes ax2.spines['top'].set_visible(False) ax2.spines['right'].set_visible(False) ax2.spines['bottom'].set_visible(False) ax2.spines['left'].set_visible(False) # Add values inside the bars for i, row in data_pivot.iterrows(): value_a = row['InnerDriverA'] value_b = row['InnerDriverB'] ax.text(value_a + 2, i, str(-value_a), va='center', ha='left', color='white', fontsize=18, fontweight='bold') ax.text(value_b - 2, i, str(value_b),va='center', ha='right', color='white', fontsize=18, fontweight='bold') # Create a rectangle to set the background for bottom x-axis tick labels rect = plt.Rectangle((-.05, -0.08), 1.10, 0.08, transform=ax.transAxes, color='grey', clip_on=False) ax.add_patch(rect) plt.title(f'{proband} - {motive}') plt.show() create_horizontal_bar_chart(df, 'Subject_1') Output: and...
76383427
76387760
I am trying to build the AOSP but I always get this error. In every thread the answer is that it's a RAM problem. I have tried the build with the following commands. Nothing works: m -j32 m -j16 m -j12 m -j8 m -j2 m -j1 I am using a 32 GB, 14-core native Linux workstation.
AOSP build not working - failed to build some targets
After adding 20 GB of swap space (e.g., a swap file created with fallocate, mkswap, and swapon), the build now works.
76389244
76389294
I want to use the Python schedule module for an API call. I have two main tasks: 1 - Generate a token at an interval of 10 minutes. 2 - Call the API using the token generated in the previous step. The expected behavior is that when a user calls the API, it should use the same token for 10 minutes, and after 10 minutes the token will be updated in the background. Here is the sample piece of code: token=""(This is a global variable) def generate_token(): token=//Logic to generate token// def apicall(): postreq=//api call using the token// def schedule_run(): schedule.every(10).minutes.do(get_token_api) while 1: schedule.run_pending() time.sleep(1) if __name__ == "__main__": schedule_run() apicall() When I run the above code, it gets stuck in the while loop of schedule_run() and never calls apicall(). Is there any efficient way to handle this?
Use python schedule module in an efficient manner
You are getting stuck in the (infinite) loop inside schedule_run. Instead, define the scheduled job first, then let each API call run any pending schedule work before using the token: token=""(This is a global variable) def generate_token(): token=//Logic to generate token// def apicall(): schedule.run_pending() #will update the token if it has not been done in the last 10 minutes postreq=//api call using the token// if __name__ == "__main__": schedule.every(10).minutes.do(generate_token) apicall()
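For reference, here is a minimal runnable sketch of that pattern (the token value and the printed "API call" are hypothetical stand-ins for the real logic):

```python
import time
import schedule

token = ""

def refresh_token():
    # Placeholder for the real token-generation logic.
    global token
    token = f"token-{int(time.time())}"

def api_call():
    schedule.run_pending()              # refreshes the token if 10 minutes have passed
    print(f"calling API with {token}")  # placeholder for the real request

if __name__ == "__main__":
    schedule.every(10).minutes.do(refresh_token)
    refresh_token()                     # fetch an initial token right away
    api_call()
```

Note that schedule.run_pending() only fires jobs whose interval has elapsed, so calls within the 10-minute window reuse the same token.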
76388900
76389306
Get the average traffic from last week's data per WEEK number, and get the traffic data from last week, Traffic(D-7). For example, if date = 5/13/2023, I need to output the traffic data (Traffic(D-7)) for date = 5/6/2023. I managed to get the average, but I have no idea how to retrieve the date-7 data and output it all together. create table a ( date varchar(50), Tname varchar(50), Week varchar(5), Traffic float ) insert into a values ('5/1/2023', 'ID1', '18', 7.98) insert into a values ('5/2/2023', 'ID1', '18', 4.44) insert into a values ('5/3/2023', 'ID1', '18', 5.66) insert into a values ('5/4/2023', 'ID1', '18', 10.01) insert into a values ('5/5/2023', 'ID1', '18', 9.41) insert into a values ('5/6/2023', 'ID1', '18', 6.71) insert into a values ('5/7/2023', 'ID1', '18', 8.24) insert into a values ('5/8/2023', 'ID1', '19', 8.97) insert into a values ('5/9/2023', 'ID1', '19', 6.74) insert into a values ('5/10/2023', 'ID1', '19', 6.45) insert into a values ('5/11/2023', 'ID1', '19', 9.33) insert into a values ('5/12/2023', 'ID1', '19', 8.08) insert into a values ('5/13/2023', 'ID1', '19', 8.36) SELECT date, Tname, Week, AVG(Traffic) OVER(PARTITION BY Week) AS AVTraffic FROM a ORDER BY week http://sqlfiddle.com/#!18/538b7/3
PostgreSQL - Get value from the same table
First of all, you need to fix the flaws in your table schema design, and declare: dates with the "DATE" type (instead of VARCHAR(50)) week values with the INT type (instead of VARCHAR(5)) traffic values with the DECIMAL type (instead of FLOAT) CREATE TABLE tab( DATE DATE, Tname VARCHAR(50), Week INT, Traffic DECIMAL(4,2) ); Once you've done that, you can solve this problem by: creating a ranking value for each day of the week in your weeks, using EXTRACT on your date extracting your traffic value from the previous week with LAG, by partitioning on the ranking created in the previous step, and ordering on the week number. WITH cte AS ( SELECT date, Tname, Week, Traffic, ROUND(AVG(Traffic) OVER(PARTITION BY Week), 2) AS AVGTraffic, EXTRACT(ISODOW FROM date) - 1 AS week_day FROM tab ) SELECT date, Tname, Week, LAG(Traffic) OVER(PARTITION BY week_day ORDER BY Week) AS prevweek_traffic, AVGTraffic FROM cte ORDER BY Week, week_day And if you realize that you may have holes among your weeks (..., week 17, week 18, week 20, week 21, ...) and specifically want values from the exact previous week (which may be missing), you can add a filter on the LAG function that checks whether the week and the previous week are consecutive: ... CASE WHEN LAG(Week) OVER(PARTITION BY week_day ORDER BY Week) = Week-1 THEN LAG(Traffic) OVER(PARTITION BY week_day ORDER BY Week) END ... (in place of LAG(Traffic) OVER(...) only) Output: date tname week prevweek_traffic avgtraffic 2023-05-01T00:00:00.000Z ID1 18 null 2023-05-02T00:00:00.000Z ID1 18 null 2023-05-03T00:00:00.000Z ID1 18 null 2023-05-04T00:00:00.000Z ID1 18 null 2023-05-05T00:00:00.000Z ID1 18 null 2023-05-06T00:00:00.000Z ID1 18 null 2023-05-07T00:00:00.000Z ID1 18 null 2023-05-08T00:00:00.000Z ID1 19 7.98 2023-05-09T00:00:00.000Z ID1 19 4.44 2023-05-10T00:00:00.000Z ID1 19 5.66 2023-05-11T00:00:00.000Z ID1 19 10.01 2023-05-12T00:00:00.000Z ID1 19 9.41 2023-05-13T00:00:00.000Z ID1 19 6.71 Check the demo here. This query allows any kind of holes in your data, if that's a needed requirement. Note: The last ORDER BY clause is not needed. It's there just for visualization purposes.
76382863
76388174
I have a Firebase Realtime Database with users and the email is stored in a child field, for example: /users/00Vho6lTQke46IqpRv0D5dw6DXs2/email -> [email protected] I want to query once and get a user based on the MD5 hash of the email address. For example, MD5('[email protected]') -> 'b58c6f14d292556214bd64909bcdb118'. I don't have the email address of the user, I only have the MD5 hash. Is it possible to query using an MD5 function to transform the email field to MD5 and query on that?
Firebase RT Database Query on field value hash equality
Is it possible to query using an MD5 function to transform the email field to MD5 and query on that? No, unless you save the MD5 as a field inside the database. If you do that, you'll be able to query using the MD5. Your schema should look like this: db | --- users | --- 00Vho6lTQke46IqpRv0D5dw6DXs2 | --- email: "[email protected]" | --- md5: "b58c6f14d292556214bd64909bcdb118" And the query should look like this: val query = db.child("users").orderByChild("md5").equalTo("b58c6f14d292556214bd64909bcdb118");
76387305
76388059
Help me create an sql query. There are two databases DB1 and DB2. In each of them there are tables "users" and "cities" with the names of cities. Users and city names are the same in both databases. But the city ID values are different in the two databases. In the first DB1 database, the "users" table contains the city IDs from the "cities" table. I need to update the city data in the second DB2 database. For each "users" value from DB1, find the name "cities" and update the values in DB2.users. The problem is that the city ID values are different in the two databases. Only the names are the same.
Update + join from two databases
Use an update with join: update DB2.users set city = DB2.cities.ID from DB2.users join DB1.users on DB1.users.ID = DB2.users.ID join DB1.cities on DB1.cities.ID = DB1.users.city join DB2.cities on DB2.cities.name = DB1.cities.name The join from DB2.users to DB2.cities goes via DB1's tables, going over using user ID and coming back using city name.
76389247
76389335
I have a nested list of strings that I would like to write to a CSV file using CSV helper. So, it is a list called Entries and each element has a list of strings called Entry. My current code is exporting a blank CSV file. Entry example: {"Date", "param1", "param2", "param3", "param4"} and the Entries list will have a list of those. I would like to write to a CSV file with each entry being a new line. Intended outcome: "Date", "param1", "param2", "param3", "param4" "Date", "param1", "param2", "param3", "param4" "Date", "param1", "param2", "param3", "param4" "Date", "param1", "param2", "param3", "param4" My current code: var rawDataList = await _siteRepository.GetRawSiteData(siteGuid); var Entries = _siteRepository.GetSiteData(rawDataList); var writer = new StreamWriter("YodelTesting.csv"); var csv = new CsvWriter(writer, CultureInfo.InvariantCulture); foreach(var record in Entries) { csv.WriteRecord(record); } Help would be appreciated!
Writing a Nested List of Strings to CSV using CSV helper
I used the following code to get it working (output is the nested list of entries): using (var stream = new MemoryStream()) using (var writer = new StreamWriter("yodelTesting.csv")) using (var reader = new StreamReader(stream)) using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture)) { foreach (var record in output) { foreach (var field in record) { csv.WriteField(field); } csv.NextRecord(); } // flush buffered output to the file and rewind the memory stream writer.Flush(); stream.Position = 0; }
76382701
76388249
I've been using the Bitmap-class and it's png-ByteStream to deal with images in my WPF-application, mostly with reading/writing files, getting images from native cam libs and showing them in the UI. Lately I've got some new requirements and tried to upgrade some of that to deal with 64 bit deep images (or 48 bits, I don't need an alpha channel). In pretty much every operation bitmap is converting my image back to 32bppArgb. I've figured out deepcopying and am currently looking into Image.FromStream(), but needing to deal with this got me wondering: Is there a way to properly deal with non 32bppArgb-images within a C#-application (including true greyscale)? Or does microsoft just neglect the need for it? I've found some related questions, but mostly with 10 year old hacks, so I'd expect there to be a proper way by now... Edit: As requested, I'm gonna show some code. It's running on .netStandard2.0 to be used mostly inside .net6.0 apps; with System.Drawing.Common on Version 5.0.2 (I don't think it matters but would be willing to upgrade if that's the case). Imagine nativeImage to be a struct from some cameraLib, so we want a deepcopy to get away from the shared memory: using Bitmap bitmap = new(nativeImage.Width, nativeImage.Height, nativeImage.stride, PixelFormat.Format64bppArgb, nativeImage.DataPtr); using Bitmap copy = bitmap.DeepCopy(); //custom function using MemoryStream stream = new(); copy.Save(stream, ImageFormat.Png); byte[] byteStream = stream.ToArray(); //Saving this to file shows it's really 64 bit //These bits are stitched together for testing, normally this byte-Array might get passed around quite a bit using MemoryStream ms = new(byteStream); using Image image = Image.FromStream(ms, true); //this suddenly is 32bppArgb with: public static Bitmap DeepCopy(this Image original) { // this just copies the code of the Bitmap(Image)-constructor but uses the original pixelFromat instead of 32bppArgb var result = new Bitmap(original.Width, original.Height, original.PixelFormat); using (Graphics g = Graphics.FromImage(result)) { g.Clear(Color.Transparent); g.DrawImage(original, 0, 0, original.Width, original.Height); } return result; }
How to deal with 48 or 64bit images in C#
I do not think you will have much success with System.Drawing.Bitmap. In my experience these just have poor support for high bit depths. I would instead take a look at the corresponding System.Windows.Media classes. I know PngBitmapEncoder/decoder at least support 16 bit grayscale, but I suspect 48/64 bit works fine as well. For display I would expect that you would want to do your own tone mapping, since I have never seen any builtin method do a particularly good job. If you want a format for interchange I would suggest creating your own. Any image is essentially represented using just a few properties: Width Height Stride - The number of bytes on a row. This must be at least large enough to fit a row of pixels, but may be larger for alignment reasons. PixelFormat Pixel Data - This can be essentially anything representing binary data: byte[], a pointer, a stream, Memory<T>, etc. The total size of the data should be Height * Stride. Most image processing libraries should have methods accepting raw image data. You might need to jump through some hoops to get it to work, like using unsafe code or doing a block copy. You also likely need some mapping code to convert between all the various PixelFormat enums. Libraries typically provide Bitmap access for convenience, but convert this data to some internal format as soon as possible. You should be able to do the same.
76387881
76388086
The Ansible documentation has a list of Variable precedence. Some of the entries are clear to me, but I wonder whether anybody could kindly shed some light on these two: 20. role (and include_role) params 21. include params, i.e. their usage, location, and syntax. I am trying to bring variable declaration a little further to the surface inside a somewhat more complex playbook utilizing two roles I am currently working on. Concretely: some values inside a role's task files should rather be declared as variables in a single location.
Ansible Variable precedence - what are 'role params' & 'include params'
In a nutshell, examples: role params (full playbook) - name: I'm a dummy play hosts: localhost roles: - role: somerole vars: param1: "I'm a role param" Include role params (task only) - name: Including role somerole ansible.builtin.include_role: name: somerole vars: param1: "I'm an include role param" Include params (task only) - name: Including a task file ansible.builtin.include_tasks: sometasks.yaml vars: param1: "I'm an include param" As a contrived (and most probably bad-practice) example: if you include a role passing a parameter and later include a task file in that role passing that same parameter with a different value, the include param will take precedence over the role param.
76389245
76389353
I came across this code somewhere when practicing basic C questions: int func(num) { if (num > 0) { return (num + func(num - 2)); } } int main() { printf("%d\n", func(5)); return 0; } The code when executed returns 8 as the answer. But I think that the answer should be "cannot be determined". The recursive calls in my head looks like this: 5 + func(3) => 5 + (3 + func(1)) 3 + func(1) => 3 + (1 + func(-1)) 1 + func(-1) => 1 + (?) For the ? symbol, I think reading the return value for func(-1) is undefined behavior. I'm basing this on the assumption that for func(-1) no explicit int value is being returned. So reading from func(-1) in the expression (1 + func(-1)) should produce some garbage result + 1 in my opinion. Why is that the code is returning 8 as the answer and not something garbage? When I explicitly pass a negative number to func and read the result I do get the garbage value, such as in this code; int main() { printf("%d\n", (1 + func(-1))); // returns garbage result and not 0 return 0; } Why is that in the recursive call (1 + func(-1)) is being evaluated to 0 value? I compiled the code on a 64-bit machine with gcc as gcc myfile.c. Compiling the code as gcc -W -Wall -ansi -pedantic myfile.c gives warning about the func function, but that's not the point. I am unable to figure out how 8 is the answer.
Why garbage value is not being returned in a recursive call to a function with undefined behavior?
Why garbage value is not being returned in a recursive call to a function with undefined behavior? Because "garbage" does not mean what you think it means in this context. In particular, "garbage" does not mean "random". (Or if it does, it's more in the sense of xkcd 221.) Computers are usually deterministic. You have to work pretty hard to get truly random behavior. Even a program that contains the worst kind of undefined behavior will quite often return exactly the same strange and indeterminate number every time you run it. I have used this analogy: Suppose you go to the store and buy a brand-new garbage can. But it's completely clean! There's no garbage in it at all! It's so clean you could eat out of it! Was this false advertising? Did the store fraudulently sell you a non-garbage can? See more discussion at these previous questions: 1 2 3 4 5. (Most of those are talking about the values of uninitialized local variables, not the values of functions that fail to execute a proper return statement, but the arguments are the same.)
76387483
76388088
I need to fetch the count of PRODUCT_ID between 00:15:00 and 01:15:00, and subsequently for every such interval across any date range. Example scripts: my DB structure and data are as follows. CREATE TABLE time1 (cr_date date , product_id number ); insert into time1 values (to_date ('01-JAN-2022 01:00:00', 'DD_MON-YYYY HH:MI:SS') , 12345); insert into time1 values (to_date ('01-JAN-2022 01:00:00', 'DD_MON-YYYY HH:MI:SS') , 12346); insert into time1 values (to_date ('01-JAN-2022 01:00:00', 'DD_MON-YYYY HH:MI:SS') , 12347); insert into time1 values (to_date ('01-JAN-2022 03:30:00', 'DD_MON-YYYY HH:MI:SS') , 42345); insert into time1 values (to_date ('01-JAN-2022 03:30:00', 'DD_MON-YYYY HH:MI:SS') , 42346); insert into time1 values (to_date ('01-JAN-2022 03:35:00', 'DD_MON-YYYY HH:MI:SS') , 42347); insert into time1 values (to_date ('01-JAN-2022 03:40:00', 'DD_MON-YYYY HH:MI:SS') , 42348); insert into time1 values (to_date ('01-JAN-2022 10:40:00', 'DD_MON-YYYY HH:MI:SS') , 10348); insert into time1 values (to_date ('01-JAN-2022 10:42:00', 'DD_MON-YYYY HH:MI:SS') , 10349); insert into time1 values (to_date ('01-JAN-2022 10:43:00', 'DD_MON-YYYY HH:MI:SS') , 11348); COMMIT; The output is required to be as below: | hours | count | |:------ |:------| |00:15:00 |3| |01:15:00 |0| |02:15:00 |0| |03:15:00 |0| |04:15:00 |4| |05:15:00 |0| |06:15:00 |0| |07:15:00 |0| |08:15:00 |0| |09:15:00 |0| |10:15:00 |0| |11:15:00 |3| |.. |... |23:15:00 |0|
SQL to fetch the count between two date ranges
Sample data you (initially) posted is pretty much useless, there's no time component involved. In one of my tables, there's a datum column and values look like this: SQL> SELECT id, TO_CHAR (datum, 'hh24:mi') hrs FROM obr WHERE rownum <= 10; ID HRS ---------- ----- 21547 08:41 21541 08:17 21563 09:03 21614 10:46 21618 11:01 21620 11:04 21622 11:05 21626 11:10 21629 11:14 21642 13:35 10 rows selected. SQL> This is query which in fmin CTE creates 24 rows (00:15, 01:15, ... 23:15) join is done on datum "rounded" to previous 15-minute value (whether in the same hour, or in previous hour - depends on minutes) So: SQL> WITH 2 fmin 3 AS 4 ( SELECT TRUNC (SYSDATE) + (LEVEL - 1) / 24 + INTERVAL '15' MINUTE c_time 5 FROM DUAL 6 CONNECT BY LEVEL <= 24) 7 SELECT TO_CHAR (f.c_time, 'hh24:mi') c_time, COUNT (z.id) cnt 8 FROM fmin f 9 LEFT JOIN obr z 10 ON TO_CHAR (f.c_time, 'hh24:mi') = 11 TO_CHAR ( 12 TRUNC (datum, 'hh24') 13 + CASE 14 WHEN TO_NUMBER (TO_CHAR (datum, 'mi')) >= 15 15 THEN 16 INTERVAL '15' MINUTE 17 WHEN TO_NUMBER (TO_CHAR (datum, 'mi')) < 15 18 THEN 19 INTERVAL '-45' MINUTE 20 END, 21 'hh24:mi') 22 GROUP BY TO_CHAR (f.c_time, 'hh24:mi') 23 ORDER BY 1; Result: C_TIM CNT ----- ---------- 00:15 0 01:15 0 02:15 0 03:15 0 04:15 0 05:15 0 06:15 0 07:15 2 08:15 10 09:15 1 10:15 14 11:15 6 12:15 10 13:15 38 14:15 5 15:15 0 16:15 0 17:15 0 18:15 0 19:15 0 20:15 0 21:15 0 22:15 0 23:15 0 24 rows selected. SQL>
76383461
76388255
ID_1 ID_2 ID_3 LANG 1 11 111 F_lang 1 11 111 null 2 22 222 null Is it possible to build a query which returns only the two lines below? It should be grouped by the first three columns and should prefer lines whose LANG value is not null; if no non-null LANG value exists for a group, it should take the line with the null language. ID_1 ID_2 ID_3 LANG 1 11 111 F_lang 2 22 222 null
DB2 Group by specific columns
Does this answer the question? with t1(id_1, id_2, id_3, lang) as ( VALUES ('1', '11', '111', 'F_lang'), ('1', '11', '111', NULL), ('2', '22', '222', NULL), ('3', '33', '333', 'F_lang_3_1'), ('3', '33', '333', 'F_lang_3_2'), ('3', '33', '333', NULL) ) select id_1, id_2, id_3, lang from t1 where lang is not null union all select id_1, id_2, id_3, null as lang from t1 group by id_1, id_2, id_3 having max(lang) is null order by id_1, id_2, id_3 ID_1 ID_2 ID_3 LANG 1 11 111 F_lang 2 22 222 null 3 33 333 F_lang_3_1 3 33 333 F_lang_3_2 fiddle
76382933
76388472
I am on thin ice both syntax-wise and English-wise here; apologies if my way of expressing maths here is the rambling of a madman. I have two sequences/"functions" f(n) and g(n). They are technically not functions; I have them just defined as sequences of repeating residues modulo 9 and 10. n = 0,1,2,3,... f(n): nMOD9={0,3,4,6} g(n): nMOD10={0,3,4,5,8,9} This means that f(n) will go 0,3,4,6,9,12,13,15,18,etc., just repeating the four numbers in the brackets of the modulus of 9. g(n) will be 0,3,4,5,8,9,10,13,14,15,18,19,20,etc., repeating the six numbers in the brackets of the modulus of 10. Now I wonder, can I express h(n), which is the list of numbers present in both f(n) and g(n)? This would be 0,3,4,9,13,15,18,etc. Either as a function or as some nMODx={a,b,c}? Or some other genius way I have not thought about. Currently I do a manual check of both lists, and I wonder if it can be done more elegantly.
How to express h(n) which is the "union" of the two sequences f(n) & g(n)
One possible solution would be to find all values of h(n) up to 90 (since lcm(9, 10) = 90, the combined pattern repeats every 90 numbers); then h(n) will be those values nMOD90, so h(n): nMOD90={0,3,4,9,13,15,18,24,30,33,39,40,45,48,49,54,58,60,63,69,75,78,84,85}
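A quick way to check (or generate) that residue set is to intersect the two patterns over one full period; a small Python sketch:

```python
F = {0, 3, 4, 6}        # f(n): n mod 9 is in F
G = {0, 3, 4, 5, 8, 9}  # g(n): n mod 10 is in G

# lcm(9, 10) = 90, so the combined pattern repeats every 90 numbers.
h_residues = [n for n in range(90) if n % 9 in F and n % 10 in G]
print(h_residues)
# [0, 3, 4, 9, 13, 15, 18, 24, 30, 33, 39, 40, 45, 48, 49, 54, 58, 60, 63, 69, 75, 78, 84, 85]
```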
76388049
76388104
I recently changed the dependency <dependency> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> <version>1.4</version> </dependency> to <dependency> <groupId>javax.mail</groupId> <artifactId>javax.mail-api</artifactId> <version>1.6.0</version> </dependency> because Java 8 doesn't support version 1.4, as it uses TLS 1.0. After changing the dependency, this code started giving an error. Error code: if (p.getContentType().contains("image/")) { File f = new File("image" + new Date().getTime() + ".jpg"); DataOutputStream output = new DataOutputStream( new BufferedOutputStream(new FileOutputStream(f))); com.sun.mail.util.BASE64DecoderStream test = (com.sun.mail.util.BASE64DecoderStream) p .getContent(); byte[] buffer = new byte[1024]; int bytesRead; while ((bytesRead = test.read(buffer)) != -1) { output.write(buffer, 0, bytesRead); } Eclipse suggestion error: Multiple markers at this line - com.sun.mail.util.BASE64DecoderStream cannot be resolved to a type - com.sun.mail.util.BASE64DecoderStream cannot be resolved to a type
"BASE64DecoderStream" gives error for javax-mail dependency
You're using the wrong dependency. You have only added the JakartaMail API. You should use the JakartaMail 1.6.x implementation: <dependency> <groupId>com.sun.mail</groupId> <artifactId>jakarta.mail</artifactId> <version>1.6.7</version> </dependency> This dependency does include com.sun.mail.util.BASE64DecoderStream. As an aside, instead of what you're doing now (using getContent() and casting to an implementation-specific class), you could also use Part.getInputStream(): try (var in = p.getInputStream()) { in.transferTo(output); } Also, your use of DataOutputStream is suspect, as you don't seem to be using any DataOutputStream specific methods, so using the BufferedOutputStream directly should be sufficient.
76389296
76389365
According to this statement, pygame.time.Clock().tick() should be called at the end of the main loop. However, I couldn't see any difference on my screen display regardless of where I called that method within the loop. Could someone please give some clarification on this? Thanks
Where should pygame.time.Clock().tick() be called in the script
The documentation says: "A call to the tick() method of a Clock object in the game loop can make sure the game runs at the same speed no matter how fast of a computer it runs on." So it is better to call it at the end of the loop, because if you do it in the middle of your display function, part of the elements will be refreshed before the wait and part after. You should call pygame.display.update() before it; otherwise you refresh the screen after the "frame wait time".
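For illustration, a minimal sketch of a loop with tick() in the conventional spot at the end (the 60 FPS cap is just an example value):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))
    # ... draw the frame here ...
    pygame.display.update()  # present the finished frame first
    clock.tick(60)           # then wait, capping the loop at 60 FPS

pygame.quit()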
76383813
76388631
I am trying to run scripts on a remote server with DataSpell, so I am trying to configure a remote interpreter. I follow these instructions and I get the error Cannot Save Settings: SSH Python x.x.x user@IP : Python x.x.x (path/to/interpreter) can't be used as a workspace interpreter As a path to the interpreter I have used the .../bin/python file that is created in a virtual environment. I have tried virtual environments created both with conda and venv. I have also found this, but I cannot understand the solution clearly. Any ideas?
Remote Interpreter DataSpell
It appears that DataSpell can have different interpreters for its workspace and for the projects or notebooks that are opened in the workspace. I right-clicked the project folder in the Project window and selected a remote interpreter just for the project. This is what the second source that I posted in my OP says. This works fine.
76382654
76388806
I'm trying to implement on-behalf-of user flow for Microsoft Graph API. I'm using Microsoft Graph .NET SDK and I'm encountering Access Denied (status code 403) for my requests. I'm not sure how to correctly set up permissions for users of my organization in Active Azure Directory. I'm trying to implement SharePoint / OneDrive file operations like list files/download/upload etc. with on-behalf-of flow. I want to be able to access any drive/site files and be able to upload/download/list. The way my application is designed is like this: But instead of Desktop WPF app, I have next.js website using next-auth. This is how I define my Azure Active Directory provider to sign-in in my next.js application using next-auth: providers: [ AzureADProvider({ clientId: process.env.AZURE_AD_CLIENT_ID, // NextAuth client id in AAD clientSecret: process.env.AZURE_AD_CLIENT_SECRET, // NextAuth client secret value in AAD tenantId: process.env.AZURE_AD_TENANT_ID, authorization: { params: { scope: "openid email profile offline_access api://NextAuthAPI/.default", prompt: "consent", } }, }), ] In my AAD I registered my frontend app and web api as NextAuth and NextAuthAPI, generated client secret for both of them. For the NextAuthAPI (webAPI) I created a scope: and added these permissions: For the NextAuth (next.js) app I added this permission: After signing in my next.js frontend application, I'm able to receive an access token with: "aud": "api://NextAuthAPI", "scp": "access_as_user" .... Then I provide this access_token to the webAPI and create client like this: var scopes = new[] { "https://graph.microsoft.com/.default" }; var tenantId = "c0dc-------------------5ce7"; var clientId = "087e66-----------------------b5"; var clientSecret = "p5Q8Q-----------------------JddpK"; // using Azure.Identity; var options = new OnBehalfOfCredentialOptions { AuthorityHost = AzureAuthorityHosts.AzurePublicCloud }; var oboToken = /* HERE I PASTE ACCESS_TOKEN FROM NEXTJS APP */; var onBehalfOfCredential = new OnBehalfOfCredential(tenantId, clientId, clientSecret, oboToken, options); m_GraphClient = new GraphServiceClient(onBehalfOfCredential, scopes); Now, when I have the m_GraphClient initialized, I'm trying to make some requests. This request works for me: var response = await m_GraphClient.Me.GetAsync(); But something like this: var response2 = await m_GraphClient.Sites.GetAllSites.GetAsync(); throws an ODataError exception with status code 403 and message "Access denied". Packages I use: Microsoft.Graph 5.11.0 (backend) Azure.Identity 1.9.0 (backend) Next.JS 13.4.5-canary.2 (frontend) next-auth 4.22.1 (frontend)
Microsoft Graph .NET SDK on-behalf-of flow returns Access Denied 403 for Sites/Files
You want to be able to access any drive/site files and be able to upload/download/list. In that case, the on-behalf-of flow isn't suitable for you. The OBO flow generates an access token with Delegated API permissions, which can't be used to access other users' drive/site files. We need to use an access token with Application API permissions. In this scenario, we can only use the client credentials flow, so that we can generate an access token with Application API permissions. First, grant the Application API permission in the Azure portal. For example, we are trying to use this Graph API. Then in your API application, your code should look like this, which uses ClientSecretCredential instead of your OnBehalfOfCredential: var scopes = new[] { "https://graph.microsoft.com/.default" }; var tenantId = "tenant_id"; var clientId = "client_id"; var clientSecret = "client_secret"; var clientSecretCredential = new ClientSecretCredential( tenantId, clientId, clientSecret); var graphClient = new GraphServiceClient(clientSecretCredential, scopes);
76387995
76388110
I am trying to copy some files from Firebase into a GitLab repo. Using my personal SSH credentials, I am able to do this. I'd like to use deploy tokens instead. I generated a fresh token and gave it all the permissions I could. However, when I run the CI pipeline I get a "fatal: Authentication failed" error.
Gitlab CI Deploy Token can't push to repo
A deploy token is simply not made for that. As per the documentation, a deploy token is not capable of writing to a GitLab repository. Here is what you can do with a deploy token: Clone Git repositories. Pull from and push to a GitLab container registry. Pull from and push to a GitLab package registry. What may be a better fit for your use case is a so-called Deploy Key. If the deploy key has read-write permissions you should be able to solve your issue.
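As a rough, untested sketch of the deploy-key route (assuming the private half of a read-write deploy key is stored in a CI/CD variable named SSH_PRIVATE_KEY, and with placeholder host, project and branch names), a push job could look like:

push_job:
  image: alpine:latest
  before_script:
    - apk add --no-cache git openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan gitlab.com >> ~/.ssh/known_hosts  # placeholder host
  script:
    - git remote set-url origin git@gitlab.com:group/project.git  # placeholder project
    - git push origin HEAD:main  # placeholder branch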
76389214
76389384
I have a simple countdown timer which is launched on the click of the .btn-submit-a button. I tried to make it so that after the timer ends, button A is replaced by button B (.btn-submit-b), but, unfortunately, nothing comes out. How can I achieve this? I will be glad for any help. jQuery(function($){ $('.btn-submit-a').on('click', doCount); }); function doCount() { var timeleft = 5; var downloadTimer = setInterval(function(){ if(timeleft <= 0){ clearInterval(downloadTimer); document.getElementById("countdown").innerHTML = "Time is Up"; } else { document.getElementById("countdown").innerHTML = timeleft + "<span class='remain'>seconds remain</span>"; } timeleft -= 1; }, 1000); }; .btn-submit-b { display: none; } <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <button type="submit" class="btn-submit-a">Button A</button> <button type="submit" class="btn-submit-b">Button B</button> <div id="countdown"></div>
Replace one button with another when countdown timer ends
You had a weird combination of jQuery and native code. You can do that, of course, but I would recommend sticking to either jQuery or native where possible. Therefore I changed some code to jQuery functions. That said, you can hide and show elements with the hide() and show() jQuery functions, as you can see in the example below. If you want to toggle them instead, you can use toggle() jQuery(function($) { $('.btn-submit-a').on('click', doCount); }); function doCount() { var timeleft = 5; var downloadTimer = setInterval(function() { if (timeleft <= 0) { clearInterval(downloadTimer); $("#countdown").html("Time is Up"); $('.btn-submit-a').hide(); $('.btn-submit-b').show(); } else { $("#countdown").html(timeleft + "<span class='remain'>seconds remain</span>"); } timeleft -= 1; }, 1000); }; .btn-submit-b { display: none; } <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <button type="submit" class="btn-submit-a">Button A</button> <button type="submit" class="btn-submit-b">Button B</button> <div id="countdown"></div>
76389116
76389387
I have a table (description of table): Name Type KEYVALUE VARCHAR2(100) TEXT CLOB Example Keyvalue Text 101 Customer Input 05/15/2023 07:20:20 My name is ABX +++ Private Notes What is you name+++Customer Input 04/30/2023 19:40:58 I have issue related to water purifier purchased on Jan 23 +++ Public Notes 04/30/2023 18:19:18 +++Customer Input 04/30/2023 Requesting to send a technicial, we could not bring them up due to the same issue that was looked into in ticket 20092. We dont know if this is the same issue as the previous ticket, but need to know the reason Language Preference: English 102 Customer Input 05/15/2023 07:20:20 20424596 Reference to the above ticket+++Customer Input 04/30/2023 19:40:58 Plesae replace the item as this is a faulty one +++ Public Notes 04/30/2023 18:19:18 +++Customer Input 04/30/2023 17:54:54 Shared the faulty machine pics for quick action Problem Context: When was the issue first observed? - 4/30, 1AM or so Were there any recent changes or maintenance performed? - Language Preference: English I am basically splitting the entire phase to multiple rows by the word "customer Input". Something like below: SELECT distinct keyvalue, level pos, trim(regexp_substr(text, 'Customer Input[^+++]*', 1, level)) x FROM ( SELECT 101 as keyvalue,'Customer Input 05/15/2023 07:20:20 My name is ABX +++ Private Notes What is you name+++Customer Input 04/30/2023 19:40:58 I have issue related to water purifier purchased on Jan 23 +++ Public Notes 04/30/2023 18:19:18 +++Customer Input 04/30/2023 Requesting to send a technicial, we could not bring them up due to the same issue that was looked into in ticket 20092. We dont know if this is the same issue as the previous ticket, but need to know the reason Language Preference: English| ' as text from dual union all SELECT 102 as keyvalue,' Customer Input 05/15/2023 07:20:20 20424596 Reference to the above ticket+++Customer Input 04/30/2023 19:40:58 Plesae replace the item as this is a faulty one +++ Public Notes 04/30/2023 18:19:18 +++Customer Input 04/30/2023 17:54:54 Shared the faulty machine pics for quick action Problem Context: When was the issue first observed? - 4/30, 1AM or so Were there any recent changes or maintenance performed? - Language Preference: English| ' as text from dual ) t CONNECT BY instr(text, 'Customer Input', 1, level - 1) > 0 order by keyvalue; keyvalue pos text 101 1 Customer Input 05/15/2023 07:20:20 My name is ABX 101 2 Customer Input 04/30/2023 19:40:58 I have issue related to water purifier purchased on Jan 23 101 3 Customer Input 04/30/2023 Requesting to send a technicial, we could not bring them up due to the same issue that was looked into in ticket 20092. We dont know if this is the same issue as the previous ticket, but need to know the reason Language Preference: English 101 4 102 1 Customer Input 05/15/2023 07:20:20 20424596 Reference to the above ticket 102 2 Customer Input 04/30/2023 19:40:58 Please replace the item as this is a faulty one 102 3 Customer Input 04/30/2023 17:54:54 Shared the faulty machine pics for quick action Problem Context: When was the issue first observed? - 4/30, 1AM or so Were there any recent changes or maintenance performed? - Language Preference: English 102 4 This is working fine since the text column is of character datatype. 
But when I run the query below (on the actual column, which is of CLOB datatype) SELECT distinct keyvalue, level pos, trim(regexp_substr(customer_input_info, 'Customer Input[^+++]*', 1, level)) str FROM (select 101 as keyvalue,to_clob('Customer Input 05/15/2023 07:20:20 20424596 Reference to the above ticket+++Customer Input 04/30/2023 19:40:58 Plesae replace the item as this is a faulty one +++ Public Notes 04/30/2023 18:19:18 +++Customer Input 04/30/2023 17:54:54 Shared the faulty machine pics for quick action Problem Context: When was the issue first observed? - 4/30, 1AM or so Were there any recent changes or maintenance performed? - Language Preference: English| ') as customer_input_info from dual) t CONNECT BY instr(customer_input_info, 'Customer Input', 1, level - 1) > 0 order by 1 I am getting the error below: ORA-00932: inconsistent datatypes: expected - got CLOB 00932. 00000 - "inconsistent datatypes: expected %s got %s" *Cause: *Action: Error at Line: 44 Column: 38. I can't make changes to the inner SQL query, as the source table is of CLOB data type. What changes should I make to the outer query?
Clob Issues- Splitting a clob column to multiple rows
The code you posted will get that error if the source text is a CLOB, whatever the length. The problem isn't the length itself; it's that each split row value is also a CLOB, and you can't use distinct with CLOBs. The use of distinct is often a sign that there's a deeper problem that it's just covering up. Without it you do get duplicates, but that's a well-known issue with connect-by queries against multiple source rows, and it will get progressively worse with more rows. You need to limit the connect-by to the same source row, which is simple assuming keyvalue is unique; but you also need to introduce a non-deterministic function call to prevent it ballooning the results, for example: CONNECT BY instr(text, 'Customer Input', 1, level - 1) > 0 AND keyvalue = PRIOR keyvalue AND PRIOR dbms_random.value IS NOT NULL fiddle You might find it easier to understand and maintain if you switch to using recursive subquery factoring instead of a hierarchical query: WITH r (keyvalue, text, pos, x) as ( SELECT keyvalue, text, 1, trim(regexp_substr(text, 'Customer Input[^+++]*', 1, 1)) FROM t UNION ALL SELECT keyvalue, text, pos + 1, trim(regexp_substr(text, 'Customer Input[^+++]*', 1, pos + 1)) FROM r WHERE instr(text, 'Customer Input', 1, pos) > 0 ) SELECT keyvalue, pos, x FROM r order by keyvalue, pos; KEYVALUE POS X 101 1 Customer Input 05/15/2023 07:20:20 My name is ABX 101 2 Customer Input 04/30/2023 19:40:58 I have issue related to water purifier purchased on Jan 23 101 3 Customer Input 04/30/2023 Requesting to send a technicial, we could not bring them up due to the same issue that was looked into in ticket 20092. We dont know if this is the same issue as the previous ticket, but need to know the reason Language Preference: English| 101 4 102 1 Customer Input 05/15/2023 07:20:20 20424596 Reference to the above ticket 102 2 Customer Input 04/30/2023 19:40:58 Plesae replace the item as this is a faulty one 102 3 Customer Input 04/30/2023 17:54:54 Shared the faulty machine pics for quick action Problem Context: When was the issue first observed? - 4/30, 1AM or so Were there any recent changes or maintenance performed? - Language Preference: English| 102 4 fiddle I've left it with the same stop condition, which generates a final null entry. You may want to revisit that, for either approach.
76383547
76389112
On an isolated network (without internet access to do public IP address lookups), I want to run a playbook from a controller against a number of target hosts where one of the tasks is to download a file via HTTP/HTTPS from the controller without hard-coding the controller IP as part of the task. E.g. Controller: 192.168.0.5 Target 1: 192.168.0.10 Target 2: 192.168.0.11 Target 3: 192.168.0.12 The controller can have different IPs configured via DHCP, and there could be multiple network interfaces listed in ansible_all_ipv4_addresses (some of which may not be available to the target hosts) so it may not be straight forward to determine which network interface the target hosts should use from ansible_facts on localhost without exploring the idea of looping through them with a timeout until the file has been downloaded. It seems as though the most robust way to determine the public IP of the controller (assuming the web server is listening on 0.0.0.0) would be to determine the originating IP of the established connection (192.168.0.5) from the target host - is there a way to do this? The motivation for downloading the file from the controller rather than sending it to remote hosts is that some of the target hosts are running Windows and the win_copy module is incredibly slow via WinRM so the Ansible documentation includes the following note: Because win_copy runs over WinRM, it is not a very efficient transfer mechanism. If sending large files consider hosting them on a web service and using ansible.windows.win_get_url instead.
How can I get Ansible client IP from target host?
Limited test on my machine, which has a single IP, and with a single target, but I don't see why it would not work in your scenario. Given the following inventories/default/hosts.yml all: hosts: target1: ansible_host: 192.168.0.10 target2: ansible_host: 192.168.0.11 target3: ansible_host: 192.168.0.12 The following test playbook should do what you expect. Replace the dummy debug task with get_url/uri to initiate the download. Notes: this playbook assumes you have access to the ip command line tool on the controller. I took for granted that the controller IP used to connect to the target is the one that the target has access to in the other direction. If this isn't the case, then the below will not work in your situation. --- - hosts: all gather_facts: false tasks: - name: Check route on controller for each target destination ansible.builtin.command: ip route get {{ ansible_host }} register: route_cmd delegate_to: localhost - name: Register the controller outgoing ip for each target ansible.builtin.set_fact: controller_ip: "{{ route_cmd.stdout_lines[0] | regex_replace('^.* src (\\d*(\\.\\d*){3}).*$', '\\1') }}" - name: Show result ansible.builtin.debug: msg: "I would connect from target {{ inventory_hostname }} ({{ ansible_host }}) to controller using ip {{ controller_ip }}"
76387851
76388128
My array of objects looks like this: [{"data": [5, 2, 7, 2, 4, 2], "date": {"begin": "03.07.", "beginYear": "2023", "end": "10.07.", "endYear": "2023", "timestamp": 1688335200000}}, {"data": [5, 2, 7, 2, 4, 2], "date": {"begin": "26.06.", "beginYear": "2023", "end": "03.07.", "endYear": "2023", "timestamp": 1687730400000}}, {"data": [5, 2, 7, 2, 4, 2], "date": {"begin": "19.06.", "beginYear": "2023", "end": "26.06.", "endYear": "2023", "timestamp": 1687125600000}}, {"data": [5, 2, 7, 2, 4, 2], "date": {"begin": "12.06.", "beginYear": "2023", "end": "19.06.", "endYear": "2023", "timestamp": 1686520800000}}, {"data": [5, 2, 7, 2, 4, 2], "date": {"begin": "05.06.", "beginYear": "2023", "end": "12.06.", "endYear": "2023", "timestamp": 1685916000000}}, {"data": [5, 2, 7, 2, 4, 2], "date": {"begin": "29.05.", "beginYear": "2023", "end": "05.06.", "endYear": "2023", "timestamp": 1685311200000}}] I want to map through the array, checking if the data key contains more than 4 numbers and if so, I want to extract the first 4 numbers (each data key is the same in all objects) and put it into a new object key "data" and the date key should look like the first item of the original array. The rest of the data key (in this example [4,2] should go into a new array that will be filled up with 0s until the data's length is 4. The date key of this should contain the date in 4 weeks starting from the first date. I want to modify it so the result will be like this: [{ data: [ 5, 2, 7, 2 ], date: { begin: '29.05.', beginYear: '2023', end: '05.06.', endYear: '2023', timestamp: 1685311200000 } }, { data: [4, 2, 0, 0 ], date: { begin: '26.06.', beginYear: '2023', end: '03.07.', endYear: '2023', timestamp: 1687730400000 } } ] The logic should also applicable if the data key contains more numbers I tried it several times but somehow it doesn't as planned. const fourStack = [] const firstFourItems = result.slice(0, 4) const restItems = result.slice(4) const firstItem = firstFourItems[0] const newData = firstFourItems.map((item) => item.data.slice(0, 4)) const newObj = { data: newData, date: { begin: firstItem.date.begin, beginYear: firstItem.date.beginYear, end: firstItem.date.end, endYear: firstItem.date.endYear, timestamp: firstItem.date.timestamp } } fourStack.push(newObj) restItems.forEach((item) => { fourStack.push({ data: item.data.slice(0, 4), date: { begin: newObj.date.begin, beginYear: newObj.date.beginYear, end: newObj.date.end, endYear: newObj.date.endYear, timestamp: newObj.date.timestamp } }) }) It gives me, not the result I want. Instead, it gives me this: [{"data": [[Array], [Array], [Array], [Array]], "date": {"begin": "03.07.", "beginYear": "2023", "end": "10.07.", "endYear": "2023", "timestamp": 1688335200000}}, {"data": [5, 2, 7, 2], "date": {"begin": "03.07.", "beginYear": "2023", "end": "10.07.", "endYear": "2023", "timestamp": 1688335200000}}, {"data": [5, 2, 7, 2], "date": {"begin": "03.07.", "beginYear": "2023", "end": "10.07.", "endYear": "2023", "timestamp": 1688335200000}}]
How can I split an array of objects and extract specific keys while adding 0s to fill up to a specific length?
let arr = [{ "data": [5, 2, 7, 2, 4, 2], "date": { "begin": "03.07.", "beginYear": "2023", "end": "10.07.", "endYear": "2023", "timestamp": 1688335200000 } }, { "data": [5, 2, 7, 2, 4, 2], "date": { "begin": "26.06.", "beginYear": "2023", "end": "03.07.", "endYear": "2023", "timestamp": 1687730400000 } }, { "data": [5, 2], "date": { "begin": "19.06.", "beginYear": "2023", "end": "26.06.", "endYear": "2023", "timestamp": 1687125600000 } }, { "data": [5, 2, 7, 2, 4, 2], "date": { "begin": "12.06.", "beginYear": "2023", "end": "19.06.", "endYear": "2023", "timestamp": 1686520800000 } }, { "data": [5, 2, 7, 2, 4, 2], "date": { "begin": "05.06.", "beginYear": "2023", "end": "12.06.", "endYear": "2023", "timestamp": 1685916000000 } }, { "data": [5], "date": { "begin": "29.05.", "beginYear": "2023", "end": "05.06.", "endYear": "2023", "timestamp": 1685311200000 } }]; // Walk over each entry, mutating its data array in place arr.forEach((e) => { if (e.data.length > 4) { // more than four numbers: keep only the first four e.data = e.data.slice(0, 4); } else { // fewer than four numbers: pad with zeros up to a length of four let length = 4 - e.data.length; for (let i = 0; i < length; i++) { e.data.push(0); } } }) console.log(arr);
76382277
76389186
I am using PHP 8.2 to read our JFrog Artifactory API and this works fine. Now I am in the need to either create or update some of the properties on an artifact - so either create it if it does not exists, and update it if it does exists. For example I can read all properties for a specific artifactory with this code: <?PHP // The full URL for the specific artifact and its "properties" $api = "https://mysrv/api/storage/TestProduct/TestComponent/1.0.0/?properties"; $ch = curl_init($api); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); curl_setopt($ch, CURLOPT_HTTPHEADER, array( "X-JFrog-Art-Api: ". $artifactoryKey, )); // Execute the request to retrieve existing properties $response = curl_exec($ch); echo "<pre>"; var_dump($response); ?> This will dump something like this: string(123) "{ "properties" : { "ComponentName" : [ "TestComponent" ], "ContactEmail" : [ "[email protected]" ], "ContactName" : [ "John Doe" ], "VersionNumber" : [ "1.0.0" ] }, "uri" : "https://mysrv/api/storage/TestProduct/TestComponent/1.0.0" }" Now I want to create a new key/value pair inside the properties. For example if I want to create a new key named MyKey with a value of False then I would like to see this result: string(123) "{ "properties" : { "ComponentName" : [ "TestComponent" ], "ContactEmail" : [ "[email protected]" ], "ContactName" : [ "John Doe" ], "VersionNumber" : [ "1.0.0" ] "MyKey" : [ "False" ] }, "uri" : "https://mysrv/api/storage/TestProduct/TestComponent/1.0.0" }" I have of course tried various solutions like: // Define the new key/value pair $newProperty = array( 'key' => "MyKey", 'value' => "False" ); // Prepare the JSON payload $payload = json_encode(array("properties" => array($newProperty))); // Initialize cURL to update the properties $ch = curl_init($api); curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT'); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); curl_setopt($ch, CURLOPT_HTTPHEADER, array( 'Content-Type: application/json', 'X-JFrog-Art-Api: ' . $artifactoryKey )); curl_setopt($ch, CURLOPT_POSTFIELDS, $payload); // Execute the request to update the properties $response = curl_exec($ch); echo "<pre>"; var_dump($response); ... and various tweaks to this but I cannot get it to work. Doing the above solution will give me this error: string(98) "{ "errors" : [ { "status" : 400, "message" : "Properties value cannot be empty." } ] }" My account should have anough access to write/modify (I am not the admin but have been told so) but the error message does not look like it is related to permissions either, so any help on this would be appreciated - I expect this may be a simple fix as I have not much experience with handling Artifactory stuff :-)
How to create/update key/value pair in JFrog Artifactory with PHP and cURL?
Actually I did figure out this myself and it was a stupid/simple mistake. I easily solved this after finding the right help page - I should probably have invested more time in finding this in the first place :-) Let me show the solution first: <?PHP // Full URL for specific artifact and the property to create/update $api = "https://mysrv/api/storage/TestProduct/TestComponent/1.0.0/?properties=MyKey=False"; // Initialize cURL to create/update the properties $ch = curl_init($api); curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PUT"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); curl_setopt($ch, CURLOPT_HTTPHEADER, array( "Content-Type: application/json", "X-JFrog-Art-Api: ". $artifactoryKey )); // Execute the cURL request $response = curl_exec($ch); // Dump the result echo "<pre>"; var_dump($response); // Check if the request was successful $httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE); var_dump($httpCode); // Close the cURL session curl_close($ch); ?> This code will result in a HTTP Status 204 (No Content) which equals a success and it will create the new propery named MyKey with the value of False. The cURL PUT command will work for both creating it or updating the value. Found the help for it here, https://jfrog.com/help/r/jfrog-rest-apis/set-item-properties
76388109
76388143
I have been training my custom Image classification model on the PyTorch transformers library to deploy to hugging face however, I cannot figure out how to export the model in the correct format for HuggingFace with its respective config.json file. I'm new to PyTorch and AI so any help would be greatly appreciated train.py from tqdm import tqdm best_accuracy = 0 # Train the model for a number of epochs for epoch in range(20): # Create a progress bar for this epoch pbar = tqdm(train_loader, desc=f'Epoch {epoch+1}/{20}') # Loop over each batch of data for X_batch, y_batch in pbar: # Move the batch of data to the device X_batch = X_batch.to(device) y_batch = y_batch.to(device) # Zero the gradients... # Define an optimizer... # Update the progress bar pbar.set_postfix({'Loss': loss.item()}) # Evaluate the model on the validation set model.eval() correct = 0 total = 0 val_loss = 0 with torch.no_grad(): for X_batch, y_batch in test_loader: # Move the batch of data to the device X_batch = X_batch.to(device) y_batch = y_batch.to(device) # Compute the model's predictions for this batch of data y_pred = model(X_batch) # Compute the loss loss = criterion(y_pred, y_batch) val_loss += loss.item() # Compute the number of correct predictions _, predicted = torch.max(y_pred.data, 1) total += y_batch.size(0) correct += (predicted == y_batch).sum().item() val_loss /= len(test_loader) accuracy = correct / total print(f'Validation Loss: {val_loss:.4f}, Accuracy: {accuracy:.4f}') if accuracy > best_accuracy: best_accuracy = accuracy torch.save(model.state_dict(), 'best_model.pth') model.train()
How to export a PyTorch model for HuggingFace?
Since you are using HuggingFace Transformers, you can use: model.save_pretrained("FOLDER_NAME_HERE") After you save the model, the folder will contain pytorch_model.bin along with the config JSONs.
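As a hedged follow-up, loading it back can then be done with from_pretrained (assuming the model is one of the transformers image-classification classes; the folder name is the same placeholder as above):

from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("FOLDER_NAME_HERE")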
76389395
76389396
I am trying to find 9 raised to the power of 19 using numpy. I am using numpy 1.24.3 This is the code I am trying: import numpy as np np.long(9**19) This is the error I am getting: AttributeError: module 'numpy' has no attribute 'long'
AttributeError: module 'numpy' has no attribute 'long'
Sadly, numpy.long was deprecated in numpy 1.20 and removed in numpy 1.24. If you want the result you have to try numpy.longlong import numpy as np np.longlong(9**19) #output 1350851717672992089 https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
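Worth noting: 9**19 is evaluated as a plain Python int (which has arbitrary precision) before numpy ever sees it, and it happens to fit in a signed 64-bit integer, which is why numpy.longlong works here. A quick check:

import numpy as np

val = 9**19               # plain Python int, arbitrary precision
print(val.bit_length())   # 61, so it fits in a signed 64-bit integer
print(np.longlong(val))   # 1350851717672992089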
76390641
76390728
I have a static web site in an S3 bucket behind a CloudFront distribution. The bucket serves the static site, and the origin is bound to the website endpoint. I can see a couple of pages that were added before the distribution was created. However, when I upload some new HTML files, I receive 403 for them. How should I fix this issue? Bucket policy: { "Version": "2008-10-17", "Id": "PolicyForCloudFrontPrivateContent", "Statement": [ { "Sid": "AllowCloudFrontServicePrincipal", "Effect": "Allow", "Principal": { "Service": "cloudfront.amazonaws.com" }, "Action": [ "s3:GetObject", "s3:PutObject" ], "Resource": "arn:aws:s3:::a-test-upload/*", "Condition": { "StringEquals": { "AWS:SourceArn": "arn:aws:cloudfront::xxx:distribution/yyy" } } } ] }
403 for the new pages in the cloudfront distribution
Try invalidating the CloudFront cache. Go to the CloudFront distribution, click on Invalidations, enter "/*", and click Create invalidation. If you are trying to access the objects publicly, then provide public access to the S3 bucket objects.
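If you prefer to script the invalidation, a minimal boto3 sketch (the distribution ID is a placeholder, and CallerReference just has to be unique per request):

import time
import boto3

client = boto3.client("cloudfront")
client.create_invalidation(
    DistributionId="YYY",  # placeholder: your distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # any unique string
    },
)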
76387987
76388168
Imagine I have the following SQL table: | id | price | start_time | --------------------------- | 1 | 0.1 | 2023-01-01 | | 2 | 0.3 | 2023-03-01 | | 3 | 0.2 | 2023-02-01 | But then I want to query the prices in that table in a way that I can also get the end time as the start time of the next row in time order. So, as an example, if I want to query all the entries in the table I would get something like this: | id | price | start_time | end_time | ---------------------------------------- | 1 | 0.1 | 2023-01-01 | 2023-02-01 | // end_time = start_time of the next entry | 3 | 0.2 | 2023-02-01 | 2023-03-01 | | 2 | 0.3 | 2023-03-01 | | But I would also like to query that table with other filters; as an example, all entries whose prices are lower than 0.25, in which case I expect: | id | price | start_time | end_time | ---------------------------------------- | 1 | 0.1 | 2023-01-01 | 2023-02-01 | | 3 | 0.2 | 2023-02-01 | 2023-03-01 | end_time = start_time of entry with id 2 So even though the entry with id 2 is filtered out, its start_time is still used as the end_time of one of the entries. Is this possible to achieve with one single query? I am a bit lost on how to solve this without doing multiple queries.
Get the end_time in a query using the start time of the next sorted column
This can be done using the window function lead() to get the value which follows the current row: select *, lead(start_time) over (order by start_time) as end_time from mytable With a where clause it can be: select * from ( select *, lead(start_time) over (order by start_time) as end_time from mytable ) as s where price < 0.25 Demo here
76388994
76389403
I'm currently working on a Next.js 13.4 project and trying to set up NextAuth using the app/ router. However, I'm encountering a type error that I can't seem to resolve. Here's my route.ts file: import NextAuth, { AuthOptions } from "next-auth"; import DiscordProvider from "next-auth/providers/discord"; export const authOptions: AuthOptions = { providers: [ DiscordProvider({ clientId: process.env.CLIENT_ID as string, clientSecret: process.env.CLIENT_SECRET as string, }), ], session: { strategy: "jwt", }, secret: process.env.NEXTAUTH_SECRET, } const handler = NextAuth(authOptions); export { handler as GET, handler as POST } And here's the error message when running 'npm run build': - info Linting and checking validity of types ...Failed to compile. .next/types/app/api/auth/[...nextauth]/route.ts:8:13 Type error: Type 'OmitWithTag<typeof import("C:/Users/Luk/Documents/Workspace/zerotwo-dash/src/app/api/auth/[...nextauth]/route"), "GET" | "POST" | "HEAD" | "OPTIONS" | "PUT" | "DELETE" | "PATCH" | "config" | ... 6 more ... | "runtime", "">' does not satisfy the constraint '{ [x: string]: never; }'. Property 'authOptions' is incompatible with index signature. Type 'AuthOptions' is not assignable to type 'never'. 6 | 7 | // Check that the entry is a valid entry > 8 | checkFields<Diff<{ | ^ 9 | GET?: Function 10 | HEAD?: Function 11 | OPTIONS?: Function I really have no idea what's happening here... when looking up 'AuthOptions' on the GitHub page for NextAuth, I see nothing wrong with my code. I would appreciate any insights or suggestions on how to resolve this issue. Thanks in advance!
Next.js 13.4 and NextAuth Type Error: 'AuthOptions' is not assignable to type 'never'
Okay, I solved it myself. For anyone having the same issue, I created a new file @/utils/authOptions.ts: import { NextAuthOptions } from "next-auth"; import DiscordProvider from "next-auth/providers/discord"; export const authOptions: NextAuthOptions = { providers: [ DiscordProvider({ clientId: process.env.CLIENT_ID as string, clientSecret: process.env.CLIENT_SECRET as string, }), ], session: { strategy: "jwt", }, secret: process.env.NEXTAUTH_SECRET, } and used it in @/api/auth/[...nextauth]/route.ts: import { authOptions } from "@/utils/authOptions"; import NextAuth from "next-auth/next"; const handler = NextAuth(authOptions); export { handler as GET, handler as POST }; I changed the imports a little (import NextAuth from "next-auth/next"). I don't know why this works, to be honest, but it does... even changing the imports around in my route.ts as it was before won't fix it. Only if it's separated like that...
76390749
76390796
I'm new to React, and I'm trying to learn about React components right now. But when I create nameList.js, add data in there, and export it to app.js, it does not show anything in the browser. I read some answers on Stack Overflow and tried them, but it still shows nothing. App.js import './App.css'; import nameList from './Components/nameList'; function App() { return ( <div className="App"> <div className="App" > <nameList/> </div> </div> ); } export default App; nameList.js in Component Folder import React from 'react'; function nameList() { return ( <div> <h1>Name List</h1> <ul> <li>Stu1</li> <li>Stu2</li> <li>Stu3</li> </ul> </div> ) } export default nameList; I need to get the data in nameList.js into App.js
is defined but never used no-unused-var
Ensure that nameList starts with a capital letter so that React knows it's a component and not an HTML element. import React from 'react'; function NameList() { return ( <div> <h1>Name List</h1> <ul> <li>Stu1</li> <li>Stu2</li> <li>Stu3</li> </ul> </div> ) } export default NameList; App.js import './App.css'; import NameList from './Components/nameList'; function App() { return ( <div className="App"> <div className="App" > <NameList/> </div> </div> ); } export default App; https://react.dev/learn/your-first-component#what-the-browser-sees
76387775
76388177
This code is where I modify the data, updating "checked" to either "true" or "false". This code works, because when I console.log() it, it shows the updated state. setRolesData((prevState) => { const newState = Array.from(prevState); newState[targetCb]["checked"] = e.target.checked; return newState; }); console.log(rolesData); Image of the result: This code is where I loop through the data that was updated, but the problem is it doesn't return the same data as the logged data when tried a second time. // get all checked roles const roleArr: any = []; rolesData.map((item, index) => { console.log(item.checked + " ^ " + item.name); }); Image of the Result:
useState() not updated when state is used in a loop
In React, components have a lifecycle that is based around the idea of preserving state. When you update a value in a component, we effectively trigger a re-render of the component. So if you have a variable at the top of your component with a certain value and try to update this value inside a function, you'll find that this value gets reset. React is replacing the component, including the JavaScript variables inside it. In order to preserve any variables we use useState to persist the state of the component between re-renders. However, in a function, using useState does not save the value to state immediately. useState is an instruction that is sent to React telling it that when the component re-renders, we need to save that value. If you try to access the value in state before your re-render has begun, you will be using the "previous" value instead. In your case, your component will not re-render until the calling function has completed. So how do you listen for changes? useEffect is a special hook that takes an array of dependencies as "listeners". When we use useEffect and give it a value in state as a dependency, we're telling React to run the code inside the useEffect whenever there is a mutation to that value. useEffect(() => { rolesData.map((item, index) => { console.log(item.checked + " ^ " + item.name); }); }, [rolesData]); This code will run every time rolesData is changed, including on component initialisation. So we can use useEffect to process logic that requires the newly updated stateful values.
76389309
76389422
I am trying to write a Python regex pattern that will allow me to capture words in a given text that have letters separated by the same symbol or space. For example, in the text "This is s u p e r and s.u.p.e.r and s👌u👌p👌e👌r and s!u.p!e.r", my goal is to extract the words "s u p e r", "s.u.p.e.r", and s👌u👌p👌e👌r. However, I want to exclude "s!u.p!e.r" because it does not have the same consistent separating symbol within the word. I'm currently using the following: x="This is s u p e r and s.u.p.e.r and s👌u👌p👌e👌r and s!u.p!e.r" pattern = r"(?:\b\w[^\w\d]){2,}" re.findall(pattern, x) ['s u p e r ', 's.u.p.e.r ', 's👌u👌p👌e👌r ', 's!u.p!e.'] I'm just curious if it's possible to exclude the cases that do not have the same symbol.
How to capture words with letters separated by a consistent symbol in Python regex?
You may consider using pattern = r"(?<!\S)\w(?=(\W))(?:\1\w)+(?!\S)" results = [m.group() for m in re.finditer(pattern, x)] See the Python demo and the regex demo. import re x="This is s u p e r and s.u.p.e.r and s👌u👌p👌e👌r and s!u.p!e.r" pattern = r"(?<!\S)\w(?=(\W))(?:\1\w)+(?!\S)" print([m.group() for m in re.finditer(pattern, x)]) # => ['s u p e r', 's.u.p.e.r', 's👌u👌p👌e👌r'] Pattern details (?<!\S) - left-hand whitespace boundary \w - a word char (?=(\W)) - a positive lookahead that requires the next char to be a non-word char, capturing it into Group 1 (\1) (?:\1\w)+ - one or more repetitions of the same char as captured in Group 1 and then a single word char (?!\S) - right-hand whitespace boundary
76390683
76390801
I am transforming from Graph API to Graph SDK. How can I transform this API call? https://graph.microsoft.com/v1.0/users/XXX/calendarView?startDateTime=YYY&endDateTime=ZZZ&$expand=extensions($filter=id eq 'NAME') Expand part is missing. How should I add Expand = ??? or do it somehow with Filter or Select? var ret = await graphClient.Users[XXX].CalendarView.GetAsync((requestConfiguration) => { requestConfiguration.QueryParameters.StartDateTime = YYY; requestConfiguration.QueryParameters.EndDateTime = ZZZ; }); Thank you for help.
How to transform Microsoft Graph API query with expand=extensions to SDK code
Thank you @user2250152, I made a typo. Solution: requestConfiguration.QueryParameters.Expand = new string[] { "extensions($filter=id+eq+'NAME')" };
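For completeness, a sketch of the full call with the question's placeholders (XXX, YYY, ZZZ) kept as-is:

var ret = await graphClient.Users[XXX].CalendarView.GetAsync((requestConfiguration) =>
{
    requestConfiguration.QueryParameters.StartDateTime = YYY;
    requestConfiguration.QueryParameters.EndDateTime = ZZZ;
    requestConfiguration.QueryParameters.Expand = new string[] { "extensions($filter=id+eq+'NAME')" };
});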
76387285
76388205
I am new to Flutter development. The horizontal list stops scrolling, but it is able to scroll when using Axis.vertical. What is expected: only once all the content of the list has scrolled should it go to the next slider. Problem: it only displays the items that are visible on screen and is unable to scroll horizontally. lib-link https://pub.dev/packages/overlapping_panels Code body: OverlappingPanels( right: Builder( builder: (context) { return Text("right"); } ), main: Builder( builder: (context) { var items = ["item1","Item2", "Item3", "Item4", "Item5", "Item6", "Item7"]; return Container( width: double.infinity, height: 200, color: Colors.blue, child: Scrollbar( child: ListView.builder( scrollDirection: Axis.horizontal, itemCount: items.length, itemBuilder: (BuildContext context , int index) { return Container( width: 150, margin: const EdgeInsets.all(8), child: Center( child: Text( items[index], style: const TextStyle( color: Colors.black, fontSize: 18 ), ), ), ); }) ) ); }, ), onSideChange: (side) { setState(() { if (side == RevealSide.main) { // hide something } else if (side == RevealSide.left) { // show something } }); }, )
Unable to add horizontal ListView.builder while using overlapping_panels 0.0.3
I looked at your code, and also at the OverlappingPanels library. The thing is, if you wrap your page with OverlappingPanels, it wraps your whole screen with a GestureDetector that listens for a swipe gesture from right to left. If you are new, I would try something else. Otherwise you can copy their library into your own class 'my_overlapping_panels.dart' like: library overlapping_panels; import 'package:flutter/material.dart'; import 'dart:core'; const double bleedWidth = 20; /// Display sections enum RevealSide { left, right, main } /// Widget to display three view panels with the [MyOverlappingPanels.main] being /// in the center, [MyOverlappingPanels.left] and [MyOverlappingPanels.right] also /// revealing from their respective sides. Just like you will see in the /// Discord mobile app's navigation. class MyOverlappingPanels extends StatefulWidget { /// The left panel final Widget? left; /// The main panel final Widget main; /// The right panel final Widget? right; /// The offset to use to keep the main panel visible when the left or right /// panel is revealed. final double restWidth; final bool allowSidePanel; /// A callback to notify when a panel reveal has completed. final ValueChanged<RevealSide>? onSideChange; const MyOverlappingPanels({ this.left, required this.main, this.right, this.restWidth = 40, this.onSideChange, this.allowSidePanel = true, Key? key, }) : super(key: key); static MyOverlappingPanelsState? of(BuildContext context) { return context.findAncestorStateOfType<MyOverlappingPanelsState>(); } @override State<StatefulWidget> createState() { return MyOverlappingPanelsState(); } } class MyOverlappingPanelsState extends State<MyOverlappingPanels> with TickerProviderStateMixin { AnimationController? controller; double translate = 0; double _calculateGoal(double width, int multiplier) { return (multiplier * width) + (-multiplier * widget.restWidth); } void _onApplyTranslation() { final mediaWidth = MediaQuery.of(context).size.width; final animationController = AnimationController(vsync: this, duration: const Duration(milliseconds: 200)); animationController.addStatusListener((status) { if (status == AnimationStatus.completed) { if (widget.onSideChange != null) { widget.onSideChange!(translate == 0 ? RevealSide.main : (translate > 0 ? RevealSide.left : RevealSide.right)); } animationController.dispose(); } }); if (translate.abs() >= mediaWidth / 2) { final multiplier = (translate > 0 ? 1 : -1); final goal = _calculateGoal(mediaWidth, multiplier); final Tween<double> tween = Tween(begin: translate, end: goal); final animation = tween.animate(animationController); animation.addListener(() { setState(() { translate = animation.value; }); }); } else { final animation = Tween<double>(begin: translate, end: 0).animate(animationController); animation.addListener(() { setState(() { translate = animation.value; }); }); } animationController.forward(); } void reveal(RevealSide direction) { // can only reveal when showing main if (translate != 0) { return; } final mediaWidth = MediaQuery.of(context).size.width; final multiplier = (direction == RevealSide.left ? 1 : -1); final goal = _calculateGoal(mediaWidth, multiplier); final animationController = AnimationController(vsync: this, duration: const Duration(milliseconds: 200)); animationController.addStatusListener((status) { if (status == AnimationStatus.completed) { _onApplyTranslation(); animationController.dispose(); } }); final animation = Tween<double>(begin: translate, end: goal).animate(animationController); animation.addListener(() { setState(() { translate = animation.value; }); }); animationController.forward(); } void onTranslate(double delta) { setState(() { final translate = this.translate + delta; if (translate < 0 && widget.right != null || translate > 0 && widget.left != null) { this.translate = translate; } }); } @override Widget build(BuildContext context) { return Stack(children: [ Offstage( offstage: translate < 0, child: widget.left, ), Offstage( offstage: translate > 0, child: widget.right, ), Transform.translate( offset: Offset(translate, 0), child: widget.main, ), widget.allowSidePanel ? GestureDetector( behavior: HitTestBehavior.translucent, onHorizontalDragUpdate: (details) { onTranslate(details.delta.dx); }, onHorizontalDragEnd: (details) { _onApplyTranslation(); }, ) : SizedBox(), ]); } } Now you can also use the variable 'allowSidePanel' in your code. If you update your code to: class TestScreen extends StatefulWidget { const TestScreen({super.key}); @override State<TestScreen> createState() => _TestScreenState(); } class _TestScreenState extends State<TestScreen> { ScrollController controller = ScrollController(); bool allowScroll = false; @override void initState() { super.initState(); // Setup the listener. controller.addListener(() { if (controller.position.atEdge) { bool atBegin = controller.position.pixels == 0; if (atBegin) { /// here you can allow the left panel later } else { /// here allow the side panel setState(() { allowScroll = true; }); } } }); } @override Widget build(BuildContext context) { return Scaffold( body: MyOverlappingPanels( allowSidePanel: allowScroll, right: Builder(builder: (context) { return Text("right"); }), main: Builder( builder: (context) { var items = ["item1", "Item2", "Item3", "Item4", "Item5", "Item6", "Item7"]; return Container( width: double.infinity, height: 200, color: Colors.blue, child: ListView.builder( controller: controller, scrollDirection: Axis.horizontal, itemCount: items.length, itemBuilder: (BuildContext context, int index) { return Container( width: 150, margin: const EdgeInsets.all(8), child: Container( padding: EdgeInsets.all(8), color: Colors.red, child: Center( child: Text( items[index], style: const TextStyle(color: Colors.black, fontSize: 18), ), ), ), ); })); }, ), onSideChange: (side) { setState(() { if (side == RevealSide.main) { /// here deactivate the side panel again allowScroll = false; } else if (side == RevealSide.left) { // show something } }); }, ), ); } } this will work.
76390731
76390810
I'm parsing a list of emails in a text file and I need to parse the dates in the email headers. The dates are in a multitude of formats and languages: sexta-feira, 26 de agosto de 2022 16:41 viernes, 26 de agosto de 2022 19:24 2022/08/26 13:30:56 26 de agosto de 2022 13:32:49 BRT Mostly Portuguese, Spanish, Italian and English. What would be the best approach? I have tried Babel, but its date parsing is very basic. For now I only have access to the text files exported from Outlook, not the SMTP sources.
Parsing multiple date string languages and formats
The dateparser package provides modules to parse localized dates in most string formats. The following snippet successfully retrieves all dates in the given example: import dateparser text_dates = [ "sexta-feira, 26 de agosto de 2022 16:41", "viernes, 26 de agosto de 2022 19:24", "2022/08/26 13:30:56", "26 de agosto de 2022 13:32:49 BRT", ] datetimes = [dateparser.parse(line) for line in text_dates] print(datetimes) >>> [datetime.datetime(2022, 8, 26, 16, 41), datetime.datetime(2022, 8, 26, 19, 24), datetime.datetime(2022, 8, 26, 13, 30, 56), datetime.datetime(2022, 8, 26, 13, 32, 49, tzinfo=<StaticTzInfo 'BRT'>)]
76389259
76389434
Consider the MWE below WITH samp AS ( SELECT '2023-01-01' AS day, 1 AS spent UNION ALL SELECT '2023-01-02' AS day, 2 AS spent UNION ALL SELECT '2023-01-03' AS day, 3 AS spent ) SELECT day, spent , ARRAY_AGG(spent) OVER(ORDER BY day BETWEEN '2023-01-02' AND '2023-01-03') ss FROM samp ORDER BY day I cannot figure out what the order by clause is doing here. I'd expect to restrict the entries to those of the selected dates, but dates outside it also have a contribution? E.g., outcome of the above day spent ss '2023-01-01' 1 [1] '2023-01-02' 2 [1,2,3] '2023-01-03' 3 [1,2,3]
Window function, order by clause, between operator
The clause day between '2023-01-02' and '2023-01-03' is a boolean expression, and will only evaluate to two possible values, true or false (1 or 0). Therefore, your window function array_agg(spent) will compute using an order where dates other than 2023-01-02 and 2023-01-03 will be ordered first, followed by these dates next. Here is your updated output showing the ordering logic: day spent ss order (day between ...) '2023-01-01' 1 [1] 0 '2023-01-02' 2 [1,2,3] 1 '2023-01-03' 3 [1,2,3] 1
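To make the ordering visible, you can surface that boolean as its own column (a sketch reusing the samp data from the question):

SELECT day, spent,
       (day BETWEEN '2023-01-02' AND '2023-01-03') AS sort_key
FROM samp
ORDER BY sort_key, day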
76389304
76389435
The username is in column one and PID in column two of ps gaux, so I have: ps gaux | awk '{print $2;}' | while read line ; do grep -i umask /proc/$line/status ; done but is there a way to print the username as well?
How do I print the user and umask for all running processes?
I hope this helps: ps gaux | awk '{printf $1 " " ; system("grep Umask /proc/"$2"/status | tr -dc [:digit:]"); printf "\n"}' Explanation: get the output from ps; print the first column (username) and a space; run the grep and remove everything except the actual umask, which is a number (awk does not print the command output, it gets printed directly from the subshell's stdout); print a newline.
76383190
76390828
I am getting this error: IntegrityError at /register/ null value in column "total_daily_mission_progress" violates not-null constraint DETAIL: Failing row contains (363, 0, 374, free, 0, null, unranked, 0, , [email protected], 0, f, [], 0, {}, {}, t, null, null, null, null, null, {}, null, null, No phone number set, This user has not set a description yet., /static/images/profilepictures/defaultavatar.png, {}, {}, {}, {}, {}, 0). However, the column total_daily_mission_progress no longer exists in my UserDetail model. I deleted it a while ago and migrated. However, this issue comes up every time I try to create a new UserDetail model. Why is this occuring? I don't have the total_daily_mission_progress anywhere in my code. And how can I fix it? EDIT: Here is my output after running python3 manage.py showmigrations --verbosity 2 [X] 0001_initial (applied at 2022-12-29 19:39:10) [X] 0002_logentry_remove_auto_add (applied at 2022-12-29 19:39:10) [X] 0003_logentry_add_action_flag_choices (applied at 2022-12-29 19:39:10) auth [X] 0001_initial (applied at 2022-12-29 19:39:09) [X] 0002_alter_permission_name_max_length (applied at 2022-12-29 19:39:11) [X] 0003_alter_user_email_max_length (applied at 2022-12-29 19:39:11) [X] 0004_alter_user_username_opts (applied at 2022-12-29 19:39:11) [X] 0005_alter_user_last_login_null (applied at 2022-12-29 19:39:12) [X] 0006_require_contenttypes_0002 (applied at 2022-12-29 19:39:12) [X] 0007_alter_validators_add_error_messages (applied at 2022-12-29 19:39:12) [X] 0008_alter_user_username_max_length (applied at 2022-12-29 19:39:12) [X] 0009_alter_user_last_name_max_length (applied at 2022-12-29 19:39:13) [X] 0010_alter_group_name_max_length (applied at 2022-12-29 19:39:13) [X] 0011_update_proxy_permissions (applied at 2022-12-29 19:39:13) [X] 0012_alter_user_first_name_max_length (applied at 2022-12-29 19:39:13) codera_main [X] 0001_initial (applied at 2022-12-29 19:39:14) [X] 0002_achievement_reward_alter_achievement_rarity (applied at 2022-12-29 19:39:15) [X] 0003_achievement_num (applied at 2022-12-29 19:39:15) [X] 0004_alter_achievement_name_alter_achievement_num (applied at 2022-12-29 19:39:15) [X] 0005_alter_achievement_description_alter_achievement_name (applied at 2022-12-29 19:39:16) [X] 0006_userdetail (applied at 2022-12-29 19:39:17) [X] 0007_userdetail_skilllevel (applied at 2022-12-29 19:39:17) [X] 0008_userdetail_plan (applied at 2022-12-29 19:39:17) [X] 0009_userdetail_friends_alter_userdetail_skilllevel (applied at 2022-12-29 19:39:17) [X] 0010_alter_userdetail_friends (applied at 2022-12-29 19:39:18) [X] 0011_alter_userdetail_friends (applied at 2022-12-29 19:39:18) [X] 0012_alter_userdetail_friends (applied at 2022-12-29 19:39:18) [X] 0013_userstat (applied at 2022-12-29 19:39:19) [X] 0014_usercode (applied at 2022-12-29 19:39:19) [X] 0015_userreward (applied at 2022-12-29 19:39:20) [X] 0016_remove_userstat_ribbons_remove_userstat_shards_and_more (applied at 2022-12-29 19:39:20) [X] 0017_alter_userdetail_totallearningtime (applied at 2022-12-29 19:39:21) [X] 0018_alter_userdetail_totallearningtime (applied at 2022-12-29 19:39:21) [X] 0019_alter_userdetail_totallearningtime (applied at 2022-12-29 19:39:22) [X] 0020_alter_userdetail_totallearningtime (applied at 2022-12-29 19:39:22) [X] 0021_userstat_last_login (applied at 2022-12-29 19:39:23) [X] 0022_userstat_monthly_streaks (applied at 2022-12-29 19:39:23) [X] 0023_userdetail_user_spent (applied at 2022-12-29 19:39:23) [X] 0024_remove_userdetail_user_spent_and_more (applied at 2022-12-29 
19:39:24) [X] 0025_alter_userdetail_totallearningtime (applied at 2022-12-29 19:39:24) [X] 0026_alter_usercode_levelscompleted (applied at 2022-12-29 19:39:25) [X] 0027_remove_achievement_rarity_remove_achievement_reward_and_more (applied at 2022-12-29 19:39:25) [X] 0028_achievement_achive_date (applied at 2022-12-29 19:39:25) [X] 0029_alter_achievement_num (applied at 2022-12-29 19:39:26) [X] 0030_rename_name_achievement_title (applied at 2022-12-29 19:39:26) [X] 0031_delete_userreward (applied at 2022-12-29 19:39:26) [X] 0032_userdetail_profilepicture (applied at 2022-12-29 19:39:27) [X] 0033_alter_userdetail_profilepicture (applied at 2022-12-29 19:39:27) [X] 0034_alter_userdetail_profilepicture (applied at 2022-12-29 19:39:27) [X] 0035_alter_userdetail_profilepicture (applied at 2022-12-29 19:39:27) [X] 0036_userstat_badges (applied at 2022-12-29 19:39:28) [X] 0037_remove_userstat_badges_userreward (applied at 2022-12-29 19:39:28) [X] 0038_remove_userdetail_profilepicture (applied at 2023-01-16 15:38:38) [X] 0039_remove_userdetail_friends (applied at 2023-01-16 15:40:34) [X] 0040_emailverified (applied at 2023-02-18 02:18:32) [X] 0041_remove_usercode_levelscompleted_and_more (applied at 2023-02-18 02:18:33) [X] 0042_userdetail_isverified (applied at 2023-02-18 02:18:33) [X] 0043_remove_usercode_user_remove_userreward_user_and_more (applied at 2023-02-18 02:40:26) [X] 0044_alter_userdetail_levelscompleted (applied at 2023-02-18 02:40:28) [X] 0045_userdetail_xp_alter_userdetail_league (applied at 2023-02-18 02:40:28) [X] 0046_avatar (applied at 2023-02-18 02:40:29) [X] 0047_alter_avatar_moves_unlocked (applied at 2023-02-18 02:40:29) [X] 0048_alter_avatar_moves_unlocked (applied at 2023-02-18 02:40:29) [X] 0049_userdetail_yesturday_diamonds (applied at 2023-02-18 02:40:29) [X] 0050_remove_userdetail_yesturday_diamonds_and_more (applied at 2023-02-18 02:40:30) [X] 0051_alter_userdetail_dimaond_progress (applied at 2023-02-18 02:40:30) [X] 0052_rename_dimaond_progress_userdetail_diamond_progress (applied at 2023-02-18 02:40:30) [X] 0053_userdetail_medal_progress (applied at 2023-02-18 02:40:30) [X] 0054_alter_userdetail_totallearningtime_badge (applied at 2023-02-18 02:40:31) [X] 0055_badge_image (applied at 2023-02-18 02:40:32) [X] 0056_badge_popup_image (applied at 2023-02-18 02:40:32) [X] 0057_alter_badge_badge_date_and_more (applied at 2023-02-18 02:40:32) [X] 0058_alter_badge_badge_date (applied at 2023-02-18 02:40:33) [X] 0059_alter_badge_badge_date (applied at 2023-02-18 02:40:33) [X] 0060_alter_achievement_user_alter_badge_user (applied at 2023-02-18 02:40:34) [X] 0061_userdetail_iscurrentlyactive (applied at 2023-02-18 02:40:34) [X] 0062_avatar_accuracy_avatar_speed_alter_avatar_character_and_more (applied at 2023-02-18 02:40:35) [X] 0063_avatar_rank (applied at 2023-02-18 02:40:35) [X] 0064_alter_avatar_moves_unlocked_alter_avatar_rank (applied at 2023-02-18 02:40:35) [X] 0065_alter_avatar_moves_unlocked (applied at 2023-02-18 02:40:36) [X] 0066_avatar_strenght (applied at 2023-02-18 02:40:36) [X] 0067_rename_strenght_avatar_strength (applied at 2023-02-18 02:40:37) [X] 0068_userdetail_plan_purchase_date (applied at 2023-02-18 02:40:37) [X] 0069_userdetail_checkout_id (applied at 2023-02-18 02:40:37) [X] 0070_alter_avatar_accuracy_alter_avatar_energy_and_more (applied at 2023-02-20 17:13:05) [X] 0071_guideemail (applied at 2023-02-22 23:52:59) [X] 0072_dailymission_userdetail_todays_daily_mission_and_more (applied at 2023-04-20 22:14:25) [X] 
0074_alter_userdetail_totallearningtime (applied at 2023-04-24 13:54:24) [X] 0075_chest (applied at 2023-04-24 17:10:00) [X] 0076_userdetail_friends_list (applied at 2023-05-01 14:30:12) [X] 0077_friendrequest (applied at 2023-05-01 14:37:47) [X] 0078_friendrequest_date_sent (applied at 2023-05-01 22:54:00) [X] 0079_alter_friendrequest_date_sent (applied at 2023-05-01 23:00:54) [X] 0080_alter_friendrequest_date_sent (applied at 2023-05-02 01:07:36) [X] 0081_alter_userdetail_friends_list (applied at 2023-05-02 23:57:12) [X] 0082_remove_userdetail_friends_list (applied at 2023-05-02 23:59:09) [X] 0083_userdetail_friends_list (applied at 2023-05-02 23:59:27) [X] 0084_userdetail_first_name_userdetail_last_name_and_more (applied at 2023-05-09 20:39:01) [X] 0085_userdetail_description (applied at 2023-05-09 20:43:14) [X] 0086_userdetail_profile_picture (applied at 2023-05-09 21:41:22) [X] 0087_alter_userdetail_phone_number (applied at 2023-05-09 22:48:31) [X] 0088_alter_userdetail_profile_picture (applied at 2023-05-09 22:59:01) [X] 0089_remove_userdetail_profile_picture (applied at 2023-05-09 23:00:03) [X] 0090_userdetail_profile_picture (applied at 2023-05-09 23:00:13) [X] 0091_alter_userdetail_profile_picture (applied at 2023-05-09 23:08:33) [X] 0092_remove_userdetail_profile_picture (applied at 2023-05-09 23:09:17) [X] 0093_userdetail_profile_picture (applied at 2023-05-09 23:09:55) [X] 0094_remove_userdetail_profile_picture (applied at 2023-05-10 00:31:07) [X] 0095_userdetail_profile_picture (applied at 2023-05-10 00:31:20) [X] 0096_remove_userdetail_profile_picture (applied at 2023-05-10 00:34:04) [X] 0097_userdetail_profile_picture (applied at 2023-05-10 00:34:17) [X] 0098_remove_userdetail_profile_picture (applied at 2023-05-10 00:37:46) [X] 0099_userdetail_profile_picture (applied at 2023-05-10 00:41:59) [X] 0100_alter_userdetail_profile_picture (applied at 2023-05-10 00:46:03) [X] 0101_alter_userdetail_profile_picture (applied at 2023-05-10 00:48:49) [X] 0102_remove_userdetail_profile_picture (applied at 2023-05-10 00:49:22) [X] 0103_userdetail_profile_picture (applied at 2023-05-10 00:50:02) [X] 0104_alter_userdetail_profile_picture (applied at 2023-05-10 14:32:59) [X] 0105_alter_userdetail_league (applied at 2023-05-21 23:31:23) [X] 0106_userdetail_xp_progress_and_more (applied at 2023-05-28 23:28:41) [X] 0107_remove_userdetail_todays_daily_mission_completed_and_more (applied at 2023-05-29 16:14:45) [X] 0108_remove_userdetail_todays_daily_mission_and_more (applied at 2023-05-29 16:14:45) [X] 0109_userdetail_todaysdailymission_and_more (applied at 2023-06-01 14:39:33) [X] 0110_rename_todaysdailymissioncompelted_userdetail_todaysdailymissioncompleted (applied at 2023-06-01 14:39:33) [X] 0111_mainmission_alter_dailymission_num_and_more (applied at 2023-06-01 14:39:34) contenttypes [X] 0001_initial (applied at 2022-12-29 19:39:05) [X] 0002_remove_content_type_name (applied at 2022-12-29 19:39:11) sessions [X] 0001_initial (applied at 2022-12-29 19:39:29)
Django - INTEGRITY ERROR on column that no longer exists
Ended up figuring it out. I used python3 manage.py dbshell and ran an ALTER TABLE statement to drop the stale column (note that the command as originally posted, ALTER TABLE_NAME FROM USERMODAL, is not valid SQL as written).
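A plausible reconstruction of the intended command, assuming Django's default table naming for the UserDetail model in the codera_main app (the table name below is a guess; list the actual tables with \dt inside dbshell first):

-- Hypothetical table name; verify it in dbshell before running.
ALTER TABLE codera_main_userdetail DROP COLUMN total_daily_mission_progress;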
76387798
76388214
How do I disable italics on static method calls inside the Android Studio IDE? I know it's a personal preference question, but is there a way to disable the italics and only keep the colour coding?
How to change color scheme for static methods in IntelliJ IDEA / Android Studio
Settings/Preferences (on macOS) | Editor | Color Scheme | Java | Methods | Static method. Additional trick: to easily find the corresponding color scheme setting, do the following: put the cursor on the element you want to change, press Shift twice, type Jump to Colors and Fonts and press Enter, then select the corresponding option.
76389341
76389488
I am trying to do some text processing and was interested to know if I can have a common/unified regex for a certain pattern. The pattern of interest is strings that end with {string}_{i} where i is a number, on the second column of test.csv. Once the regex is matched, I wish to replace it with {string}[i]. For now, the Python script works as expected for the strings for which I explicitly mention the regex pattern. I want to have a more generic regex pattern that will match all the strings that have {string}_{i} instead of writing a regex for all the patterns (which is not scalable). input test.csv bom_a14 , COMP_NUM_0 bom_a17 , COMP_NUM_2 bom_a27 , COMP_NUM_11 bom_a35 , FUNC_1V8_OLED_OUT_7 bom_a38 , FUNC_1V8_OLED_OUT_9 bom_a39 , FUNC_1V8_OLED_OUT_10 bom_a46 , CAP_4 bom_a47 , CAP_3 bom_a48 , CAP_6 test.py import csv import re # Match the values in the first column of the second file with the first file's data with open('test.csv', 'r') as file2: reader = csv.reader(file2) for row in reader: row_1=row[1] # for matching COMP_NUM_{X} match_data = re.match(r'([A-Z]+)_([A-Z]+)_(\d+)',row_1.strip()) # for matching FUNC_1V8_OLED_OUT_{X} match_data2 = re.match(r'([A-Z]+)_([A-Z0-9]+)_([A-Z]+)_([A-Z]+)_(\d+)',row_1.strip()) # if match found, reformat the data if match_data: new_row_1 = match_data.group(1) +'_'+ match_data.group(2)+ '[' + match_data.group(3) + ']' elif match_data2: new_row_1 = match_data2.group(1) +'_'+ match_data2.group(2)+ '_'+ match_data2.group(3)+'_'+ match_data2.group(4)+'[' + match_data2.group(5) + ']' else: new_row_1 = row_1 print(new_row_1) output COMP_NUM[0] COMP_NUM[2] COMP_NUM[11] FUNC_1V8_OLED_OUT[7] FUNC_1V8_OLED_OUT[9] FUNC_1V8_OLED_OUT[10] CAP_4 CAP_3 CAP_6 expected output COMP_NUM[0] COMP_NUM[2] COMP_NUM[11] FUNC_1V8_OLED_OUT[7] FUNC_1V8_OLED_OUT[9] FUNC_1V8_OLED_OUT[10] CAP[4] CAP[3] CAP[6]
common/unified regex for a set of pattern
I would use re.sub with a single generic pattern: with open("test.csv", "r") as file2: for row in csv.reader(file2): s = re.sub(r"(.+)_(\d+)$", r"\1[\2]", row[-1].strip()) print(s) Regex: [demo] Output: COMP_NUM[0] COMP_NUM[2] COMP_NUM[11] FUNC_1V8_OLED_OUT[7] FUNC_1V8_OLED_OUT[9] FUNC_1V8_OLED_OUT[10] CAP[4] CAP[3] CAP[6]
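As a quick sanity check of the pattern on single values (a sketch, assuming the same import re as above):

import re
# The greedy (.+) plus the $ anchor ensure only the final _<digits> group is rewritten.
print(re.sub(r"(.+)_(\d+)$", r"\1[\2]", "FUNC_1V8_OLED_OUT_7"))  # FUNC_1V8_OLED_OUT[7]
print(re.sub(r"(.+)_(\d+)$", r"\1[\2]", "CAP_4"))                # CAP[4]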
76387936
76388264
Total newbie in PL/pgSQL. I would like to define an array of strings and then use that in my SELECT... WHERE IN statement, but can't seem to get it to work; help appreciated. DO $$ DECLARE testArray varchar[] := array['john','lisa']; ids integer[]; BEGIN ids = array(select id from tableA where name in (testArray)); -- this works ids = array(select id from tableA where name in ('john','lisa')); END $$;
How to use array variable in my sql statement in plpgsql
You can use ANY with testArray. DO $$ DECLARE testArray varchar[] := array['john','lisa']; ids integer[]; BEGIN ids = array(select id from tableA where name = any(testArray)); -- this works ids = array(select id from tableA where name in ('john','lisa')); END $$;
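An equivalent alternative, if you prefer to keep the IN syntax, is to unnest the array into a row set (a sketch against the same hypothetical tableA):

-- unnest() turns the array into one row per element, which IN can consume
ids = array(select id from tableA where name in (select unnest(testArray)));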
76390640
76390875
I have two 3D masked arrays (netCDF4 files output from climate model) that I want to add together. I followed this thread and got the following (simplified) code out of it: import numpy as np from netCDF4 import Dataset from operator import and_ from numpy.ma.core import MaskedArray with Dataset(dir + 'V10.nc') as file_V10: with Dataset(dir + 'U10.nc') as file_U10: raw_V10 = file_V10.variables['V10'][744 : 9503, :, :] ** 2 raw_U10 = file_U10.variables['U10'][744 : 9503, :, :] ** 2 10m_raw_squared = MaskedArray(raw_V10[:].data + raw_U10[:].data, mask=list(map(and_,raw_V10.mask, raw_U10.mask))) However, I get the error message: Traceback (most recent call last): File "code.py", line 92, in <module> 10m_raw_squared = MaskedArray(raw_V10[:].data + raw_U10[:].data, mask=list(map(and_,raw_V10.mask, raw_U10.mask))) TypeError: 'numpy.bool_' object is not iterable If I try changing the mask from boolean to string (in order to make it iterable) by adding mask.astype('str'), I get this error message: Traceback (most recent call last): File "code.py", line 92, in <module> 10m_raw_squared = MaskedArray(raw_V10[:].data + raw_U10[:].data, mask=list(map(and_,raw_V10.mask.astype('str'),raw_U10.mask.astype('str')))) TypeError: unsupported operand type(s) for &: 'str' and 'str' I have also tried to add the arrays together using a for-loop, but somehow couldn't get that to work without losing a dimension and the majority of the array elements of the data. How can I add my two datasets together? Edit: I called for the class of the dataset and got the following output: <class 'numpy.ma.core.MaskedArray'>
Adding 3D masked arrays results in TypeError: 'numpy.bool_' object is not iterable
You can use np.logical_and to create the mask. Note that 10m_raw_squared is not a valid Python identifier (names cannot start with a digit), so the result variable is renamed raw_squared_10m here. with Dataset(dir + 'V10.nc') as file_V10: with Dataset(dir + 'U10.nc') as file_U10: raw_V10 = file_V10.variables['V10'][744 : 9503, :, :] ** 2 raw_U10 = file_U10.variables['U10'][744 : 9503, :, :] ** 2 mask = np.logical_and(raw_V10.mask, raw_U10.mask) raw_squared_10m = MaskedArray(raw_V10[:].data + raw_U10[:].data, mask=mask)
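A side note, not from the original answer: if it is acceptable to mask a point whenever either input is masked, masked arrays already do this on arithmetic, so the explicit mask handling can be dropped entirely:

# Arithmetic on MaskedArrays combines the masks automatically:
# the sum is masked wherever either operand is masked (logical OR).
raw_squared_10m = raw_V10 + raw_U10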
76389299
76389493
I have this SELECT statement in SQL Server: select testname as 'Test', tests_morning , tests_evening , Date from labtests_hajj left join departments_statistics on departments_statistics.test_id = labtests_hajj.testid inner join departments on labtests_hajj.dept_id = departments.dept_id The output now looks like this: Test tests_morning tests_evening Date CBC null null null CALCIUM null null null SODIUMN null null null How can I get this output and add the date based on a date range? For example, I need the output for 3 days to be like this: Test tests_morning tests_evening Date CBC null null 01/06/2023 CALCIUM null null 01/06/2023 SODIUMN null null 01/06/2023 CBC null null 02/06/2023 CALCIUM null null 02/06/2023 SODIUMN null null 02/06/2023 CBC null null 03/06/2023 CALCIUM null null 03/06/2023 SODIUMN null null 03/06/2023 How can I write the select so that I supply 2 dates and get the output like this?
Select the data and generate the SELECT for each day from date to date?
There are two ways: use a DimDate (calendar) table, or create the date range yourself with a recursive CTE. With DimDate (note that _t needs the alias a, which the original query was missing): declare @StartData date='2023-02-01' declare @EndData date='2023-02-06' ;with _t as ( select testname as 'Test', tests_morning , tests_evening , Date from labtests_hajj left join departments_statistics on departments_statistics.test_id = labtests_hajj.testid inner join departments on labtests_hajj.dept_id = departments.dept_id ) select a.Test ,a.tests_morning ,a.tests_evening ,s.Date_ as Date from _t a cross apply (select * from DimDate where date_ between @StartData and @EndData )s If your date range is consecutive, you can create the desired dates with a recursive CTE (the recursion condition uses < so @EndData is included exactly once, and OPTION (MAXRECURSION 0) lifts the default 100-level recursion limit for longer ranges): declare @StartData date='2023-02-01' declare @EndData date='2023-02-06' ;WITH List as ( SELECT @StartData as Date_ UNION ALL SELECT DATEADD(day, 1, Date_) FROM List where Date_ < @EndData ), _t as ( select testname as 'Test', tests_morning , tests_evening , Date from labtests_hajj left join departments_statistics on departments_statistics.test_id = labtests_hajj.testid inner join departments on labtests_hajj.dept_id = departments.dept_id ) select a.Test ,a.tests_morning ,a.tests_evening ,s.Date_ as Date from _t a cross apply (select * from List)s OPTION (MAXRECURSION 0)
76389280
76389522
Why this works sed -n '242p' /usr/local/lib/python3.6/site-packages/keras/models.py model_config = json.loads(model_config.decode('utf-8')) sed -i "242s/.decode('utf-8')//" /usr/local/lib/python3.6/site-packages/keras/models.py sed -n '242p' /usr/local/lib/python3.6/site-packages/keras/models.py model_config = json.loads(model_config) and this doesn't sed -n '3328p' /usr/local/lib/python3.6/site-packages/keras/engine/topology.py original_keras_version = f.attrs['keras_version'].decode('utf8') sed -i "3328s/.decode('utf-8')//" /usr/local/lib/python3.6/site-packages/keras/engine/topology.py sed -n '3328p' /usr/local/lib/python3.6/site-packages/keras/engine/topology.py original_keras_version = f.attrs['keras_version'].decode('utf8') Why can't I delete the second .decode('utf8')? Is it because it is at the end of the line? I could use another approach, but I would like to be consistent with the code. I don't get any errors, so I don't know what to do, although I have been working around it and looking for the answer on the Internet.
sed doesn't replace a string when this is at the end of the line
There's an important one-character difference: the file topology.py contains utf8, but your second sed command searches for utf-8 (the first case worked because models.py really does contain utf-8).
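A corrected command for the second file would therefore be the following (same line number as in the question; the dot is escaped as well, since an unescaped . matches any character):

sed -i "3328s/\.decode('utf8')//" /usr/local/lib/python3.6/site-packages/keras/engine/topology.py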
76388209
76388265
I am trying to build a mock factory, like this: public Mock<T> CreateMock<T>(SomeParams someParams) where T : IMyInterface { Mock<T> result = new Mock<T>(); ... } However, I am getting the compiler error CS0452: The type 'T' must be a reference type in order to use it as parameter 'T' in the generic type or method 'Mock'. Yet, when I tried this, it worked just fine: public Mock<IMyInterface> CreateMock(SomeParams someParams) { Mock<IMyInterface> result = new Mock<IMyInterface>(); ... } I don't understand, why I am getting the compiler error, when the code is functionally the same? Is there any simple way to workaround this? I would like to avoid the second approach, as it would require considerable changes in our testing infrastructure.
Cannot Mock<T> but can Mock<IMyInterface>?
I don't understand, why I am getting the compiler error, when the code is functionally the same? Consider this case: public struct Awkward : IMyInterface { } ... Mock<Awkward> mock = MockFactory.CreateMock<Awkward>(); That satisfies the constraint you've put on CreateMock - but Awkward is a value type. Mock<T> requires T to be a reference type. You need to constrain the T type parameter in your method to be a reference type that implements the interface: public Mock<T> CreateMock<T>(SomeParams someParams) where T : class, IMyInterface
76390665
76390920
import pandas as pd from sklearn.cluster import KMeans dataset = pd.read_csv("smogon.csv") dataset.drop(["url", 'texto'], axis=1, inplace=True) km = KMeans(n_clusters=1, n_init='auto') cluster = km.fit_predict(dataset[["moves"]]) dataset["Grupo"] = cluster print(dataset) Running it shows an error pointing at cluster = km.fit_predict(dataset[["moves"]]) with the message could not convert string to float. I'm doing a project with KMeans and pandas, and when I try to do the clustering with this CSV file it shows an error, but when I do it with another CSV file it runs without any problem.
Does an error exist in the following code? It shows an error on line 7
Try using LabelEncoder to convert your column moves to numeric: import pandas as pd from sklearn.cluster import KMeans from sklearn.preprocessing import LabelEncoder dataset = pd.read_csv("smogon.csv") dataset.drop(["url", 'texto'], axis=1, inplace=True) # It does not make sense to have only one cluster... # There is plenty of documentation on finding the best number of clusters (elbow method, ...) km = KMeans(n_clusters=3, n_init='auto') lbe = LabelEncoder() dataset['moves_num'] = lbe.fit_transform(dataset['moves']) cluster = km.fit_predict(dataset[['moves_num']]) dataset["Grupo"] = cluster print(dataset)
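A design note, not from the original answer: LabelEncoder's integer codes impose an arbitrary ordering and distance between categories, which KMeans will treat as meaningful. One-hot encoding is often safer for a categorical column (a sketch; the column names are taken from the question):

# Hypothetical alternative: one dummy column per distinct move value
encoded = pd.get_dummies(dataset['moves'], prefix='moves')
cluster = km.fit_predict(encoded)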
76380543
76388271
I have created a document upload site with asp.net core web app's, and I have encountered a small bug but I'm not sure how to fix it. On my site, you first create a 'file' like so: It then appears in a list like so: And when you press upload attachment, it passes the id from the previous table to ensure it uploads to the correct file. The code behind the upload page is as below, and the error is if you press upload before choosing a file, it does a page refresh to display an error and then the ID passed through before has been lost, so myInv ends up being null. using FarmersPortal.Data; using Microsoft.AspNetCore.Authorization; using Microsoft.AspNetCore.Mvc; using Microsoft.AspNetCore.Mvc.RazorPages; using File = FarmersPortal.Data.File; namespace FarmersPortal.Pages.Admin { [Authorize(Roles ="Admin")] public class UploadModel : PageModel { private readonly filedbContext _context; public UploadModel(filedbContext context) { _context = context; } public int? myID { get; set; } [BindProperty] public IFormFile file { get; set; } [BindProperty] public int? ID { get; set; } public void OnGet(int? id) { myID = id; } [BindProperty] public File File { get; set; } public async Task<IActionResult> OnPostAsync() { if (file != null) { if (file.Length > 0 && file.Length < 300000) { var myInv = _context.Files.FirstOrDefault(x => x.Id == ID); var date = DateTime.Today; using (var target = new MemoryStream()) { file.CopyTo(target); myInv.UploadDate = date; myInv.Attachment = target.ToArray(); } if (myInv == null) { return NotFound(); } else { File = myInv; } _context.Files.Update(myInv); await _context.SaveChangesAsync(); } } if (File.FileType == "Purchase Order") { return RedirectToPage("./PurchaseOrders"); } else if (File.FileType == "Remittance") { return RedirectToPage("./Remittance"); } else if (File.FileType == "Haulage Self Bill") { return RedirectToPage("./HaulageSelfBill"); } else if (File.FileType == "Growers Return") { return RedirectToPage("./GrowersReturn"); } return Page(); } } } I am not sure how to work my way around this, any ideas? @page @model FarmersPortal.Pages.Admin.UploadModel @{ } <h1 style="color:white">Upload File</h1> <style> body { background-image: url("http://10.48.1.215/PORTAL/hero-range-1.jpg"); height: 100%; background-position: center; background-repeat: no-repeat; background-size: cover; } </style> <hr /> <div class="row"> <div class="col-md-4"> <form method="post" enctype="multipart/form-data"> <div class="form-group"> <div class="col-md-10"> <p style="color:white">Upload file</p> <input type="hidden" asp-for="@Model.ID" value="@Model.myID" /> <input asp-for="file" class="form-control" accept=".pdf" type="file" /> <span asp-validation-for="file" class="text-white"></span> </div> </div> <div class="form-group"> <div class="col-md-10"> <input class="btn btn-success" type="submit" value="Upload" /> </div> </div> </form> </div> </div> <div> <a asp-page="Index">Back to List</a> </div> Above is the front end code.
Detecting page refresh c#
You can customize the error message: [BindProperty] [Required(ErrorMessage = "You must select a file before uploading this form")] public IFormFile file { get; set; } And you also need to add the jQuery validation library to your view: @section Scripts { @{ await Html.RenderPartialAsync("_ValidationScriptsPartial"); } } Then, when the user doesn't select a file and clicks the upload button, the view will show the error message and stop the form from being submitted.
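It is also worth guarding on the server, since client-side validation can be bypassed; a minimal sketch for the top of OnPostAsync (repopulating myID from the posted ID is an assumption about how the hidden field should survive the re-render):

// Re-render the page with validation errors instead of proceeding with a null file.
if (!ModelState.IsValid)
{
    myID = ID; // keep the hidden ID field populated on the re-rendered page
    return Page();
}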
76389514
76389555
I need to detect if a string contains a specific word like "Hello". Hello -> yes HhhheeeEEEElllLLLLoooOOOO -> Yes hell0 -> yes h e l l o / h. e .l .L . o -> Yes h@@$$eee///LLL!!!ooo -> yes My attempt: let string = "h@@$$eee///LLL!!!ooo"; if (string.match(/\bhello\b/i)) { console.log("Yes"); } else { console.log("No"); }
Regex to detect deformed words
You can use .* between the characters, [o0] to match both o and zero, and the i flag for case-insensitive matching: [ "h@@$$eee///LLL!!!ooo", "Hello", "HhhheeeEEEElllLLLLoooOOOO", "hell0", "h e l l o / h. e .l .L . o", "h@@$$eee///LLL!!!ooo" ].forEach(string => { if (string.match(/h.*e.*l.*l.*[o0]/i)) { console.log("Yes"); } else { console.log("No"); } }); You can use a dynamic regular expression: const input = "Hello"; const re = new RegExp(input.toLowerCase().split('').map(c => c === 'o' ? '[o0]' : c).join('.*'), 'i'); [ "h@@$$eee///LLL!!!ooo", "Hello", "HhhheeeEEEElllLLLLoooOOOO", "hell0", "h e l l o / h. e .l .L . o", "h@@$$eee///LLL!!!ooo" ].forEach(string => { if (string.match(re)) { console.log("Yes"); } else { console.log("No"); } }); I used an additional toLowerCase to simplify the replacement of o => [o0].
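One caveat worth adding (not in the original answer): if the search word can contain regex metacharacters, each character should be escaped before being joined with .*. A sketch:

// Escape regex metacharacters so characters like '.' or '$' in the word match literally
const esc = c => c.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
const word = 'hello';
const re = new RegExp(word.toLowerCase().split('').map(c => c === 'o' ? '[o0]' : esc(c)).join('.*'), 'i');
console.log(re.test('h@@$$eee///LLL!!!ooo')); // true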
76389515
76389565
This is my df: df <- data.frame(id=as.integer(c(1:6)), code=as.character(c("C410", "D486", "D485", "D501", "D600", "D899"))) df id code 1 1 C410 2 2 D486 3 3 D485 4 4 D501 5 5 D600 6 6 D899 I want to attribute causes to each id depending on the range they fall into in column 2. For this, I use a ifelse statement: df$cause <- ifelse(df$code >= "C00" & df$code <= "D48", "cause 1", ifelse(df$code >= "D50" & df$code <= "D89", "cause 2", NA)) Issue: the algorithm does not capture values above the end of each range (until the maximum possible value) df id code cause 1 1 C410 cause 1 2 2 D486 <NA> 3 3 D485 <NA> 4 4 D501 cause 2 5 5 D600 cause 2 6 6 D899 <NA> Desired output: df id code cause 1 1 C410 cause 1 2 2 D486 cause 1 3 3 D485 cause 1 4 4 D501 cause 2 5 5 D600 cause 2 6 6 D899 cause 2
Retrieve every value between an alphanumeric range in R using ifelse
You need to add a third digit: df$cause <- ifelse(df$code >= "C000" & df$code <= "D489", "cause 1", ifelse(df$code >= "D500" & df$code <= "D899", "cause 2", NA)) > df id code cause 1 1 C410 cause 1 2 2 D486 cause 1 3 3 D485 cause 1 4 4 D501 cause 2 5 5 D600 cause 2 6 6 D899 cause 2
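This works because R compares strings lexicographically, so padding the bounds to the full four-character width makes every code in the range sort inside the bounds. The same logic reads a little more clearly with dplyr::case_when (a sketch, equivalent to the ifelse above):

library(dplyr)
df <- df %>%
  mutate(cause = case_when(
    code >= "C000" & code <= "D489" ~ "cause 1",
    code >= "D500" & code <= "D899" ~ "cause 2"
  ))  # unmatched rows become NA by default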
76390791
76390956
I am trying to insert an underscore into a string before each uppercase letter, but when running the code the inserted character appears 4 times instead of once. str = "potetoBox" for indx in range(len(str)): if str[indx].isupper(): #split before indx and input an underscore str = ''.join((str[:indx],'_',str[indx:])) print(str) I have tried to change characters and change the order of the code. I expected to get only one underscore.
When using join in a string, I get too many characters in the string
Another solution without re that doesn't build a list: result = "" for c in my_str: result += "_" * c.isupper() + c
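A quick usage check with the question's input (reformatted here for readability; my_str is the hypothetical input variable):

my_str = "potetoBox"
result = ""
for c in my_str:
    # "_" * True == "_", "_" * False == "": prepend an underscore only before uppercase letters
    result += "_" * c.isupper() + c
print(result)  # poteto_Box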
76389352
76389617
I have a byte[] of length 16, and I want to change the value of one item and put the result into a new byte[]. For example, byte[7] is 13 and I want to replace it with a new value: in the new array, byte[7] should be increased by one unit while the other elements stay the same.
How to copy from byte[] to new byte[] with some changed element item?
You should perform the operations the other way round. First copy then change the value in the copied data. This way the original will stay unchanged. Efficient: var original = new byte[]{1,2,3}; var result = new byte[original.Length]; Array.Copy(original, result, original.Length); //first make a copy result[2] = 42; //then set your new value //result is now [1, 2, 42] Simple with LINQ: //do not forget using System.Linq; var original = new byte[]{1,2,3}; var result = original.ToArray(); //first copy result[2] = 42; //then set your new value //result is now [1, 2, 42] For small data there will be no difference in performance. For large arrays the direct copy should perform a bit better. See this answer for further alternatives regarding the copy part.
76388142
76388285
I want to delete the object where the name is "Sleep". The code I am using: const listName = "Holiday"; const item = new Item({ name: "Sleep" }); User.updateOne({username : req.user.username}, {$pull:{"lists.$[updateList].items" : item}}, { "arrayFilters": [ {"updateList.name" : listName} ] }).exec().then(function(){ console.log("Deleted successfully"); res.redirect("/list-"+listName); }) The mongodb object: "_id" : ObjectId("64797ebc9e84ed9d8be3ea54"), "username" : "[email protected]", "lists" : [ { "name" : "Holiday", "items" : [ { "name" : "Welcome to your todo-list!", "_id" : ObjectId("647988267f3ddfc2982f7d77") }, { "name" : "Click + to add another item.", "_id" : ObjectId("647988267f3ddfc2982f7d78") }, { "name" : "<-- Click this to delete an item.", "_id" : ObjectId("647988267f3ddfc2982f7d79") }, { "name" : "Sleep", "_id" : ObjectId("64799279c3da415dc4ce7574") }, { "name" : "WakeUp", "_id" : ObjectId("6479930e6d49e494aad1dffa") } ], "_id" : ObjectId("647988357f3ddfc2982f7d85") } ] } It seems there is some problem with the updateOne and $pull attributes, but I can't figure out what.
I am trying to remove an item from my mongoDb object which is an object with an array of nested objects. But this code is not working
Try with const listName = 'Holiday'; User.updateOne( { username: req.user.username, 'lists.name': listName }, { $pull: { 'lists.$[].items': { name: 'Sleep' } } } ) .exec() .then(function () { console.log('Deleted successfully'); res.redirect('/list-' + listName); }); (Note: the original answer had a stray trailing space in 'Sleep ', which would prevent the match.)
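One caveat worth noting: lists.$[] applies the $pull to the items array of every element of lists, not just the one whose name matched. To restrict the pull to the named list only, arrayFilters (as in the question) can be combined with $pull. A sketch:

// 'l' is an arbitrary identifier naming the filtered array element
User.updateOne(
  { username: req.user.username },
  { $pull: { 'lists.$[l].items': { name: 'Sleep' } } },
  { arrayFilters: [{ 'l.name': listName }] }
)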
76390964
76391015
I'm trying to display all the questions from this data set however it's only working sometimes and then other times I receive 'questions is undefined'. Why am I receiving this error? const [questions, setQuestions] = useState<any>(); const [question, setQuestion] = useState<string>() const [answers, setAnswers] = useState<[]>() useEffect(() => { fetch("/environment_questions") .then((response) => response.json()) .then((data) => setQuestions(data)); }, []); useEffect(() => { if (questions.length > 0) { for (let i = 0, l = questions.length; i < l; i++) { setQuestion(questions[i].question) setAnswers(questions[i].answers) } } }, [questions]) return ( <p>{JSON.stringify(question)}</p> <p>{JSON.stringify(answers)}</p> ) } Data I'm trying to access: [{"id":1,"question":"...":[{"id":1,"answer":"..."},{"id":2,"answer":"..."}]},{"id":2,"question":"...","answers":[{"id":1,"answer":""},{"id":2,"answer":""},{"id":3,"answer":""}]} ...}]
questions is undefined in useEffect
Initialize questions to an empty array so it is never undefined: with useState<any>() the initial value is undefined, and questions.length in the second effect throws before the fetch resolves. import React, { useEffect, useState } from 'react' const Component = () => { const [questions, setQuestions] = useState<any>([]) const [question, setQuestion] = useState<string>() const [answers, setAnswers] = useState<[]>() useEffect(() => { fetch('/...') .then(response => response.json()) .then(data => setQuestions(data)) }, []) useEffect(() => { if (questions.length > 0) { for (let i = 0, l = questions.length; i < l; i++) { setQuestion(questions[i].question) setAnswers(questions[i].answers) } } }, [questions]) return ( <> <p>{JSON.stringify(question)}</p> <p>{JSON.stringify(answers)}</p> </> ) } export default Component
76389510
76389660
I want to add a column called "Opt-Numbers" to my data frame, with the values Opt-CMM and Opt-MM based on the Numbers column. If the value in the Numbers column is greater than or equal to 4, it should add Opt-CMM in the same row as that value; if it is less than 4, it should add Opt-MM. I am also showing an example in the df below. Given DF. S.NO Numbers P1 2 P2 5 P3 2 P4 2 P5 3 P6 4 Required DF S.NO Numbers Opt-Numbers P1 2 Opt-MM P2 5 Opt-CMM P3 2 Opt-MM P4 2 Opt-MM P5 3 Opt-MM P6 4 Opt-CMM
Add a Column with a string value based on other column values R
We can do this with dplyr and case_when(): library(dplyr) #example data data <- structure(list(S.NO = c("P1", "P2", "P3", "P4", "P5", "P6"), Numbers = c(2, 5, 2, 2, 3, 4)), class = "data.frame", row.names = c(NA, -6L)) Create the new column with the conditions (the non-syntactic name Opt-Numbers needs backticks): data <- data %>% mutate(`Opt-Numbers` = case_when( Numbers >= 4 ~ "Opt-CMM", Numbers < 4 ~ "Opt-MM" )) data
76387972
76388288
I want to use clip-path to implement an effect like: svg with text clip-path, but when I open it in a browser it just displays an empty SVG. When I wrap the paths with a <g> tag, it fails. <svg xmlns="http://www.w3.org/2000/svg" width="32" height="32" viewBox="0 0 32 32" fill="none" style=" width: 200px; height: 200px; "> <defs> <clipPath id="text-path"> <g clip-rule="evenodd" fill-rule="evenodd"> <path d="M0 0H14V2H2V13H22V10H24V13H32V27H24V32H0V0ZM2 30V27H22V30H2Z"/> <path d="M16 0L24 8H16V0Z"/> <text x="7.5" y="23">PDF</text> </g> </clipPath> </defs> <g clip-path="url(#text-path)" fill="red"> <path d="M0 0H14V2H2V13H22V10H24V13H32V27H24V32H0V0ZM2 30V27H22V30H2Z"/> <path d="M16 0L24 8H16V0Z"/> </g> <style> text {dominant-baseline: hanging;font-size: 8px;font-weight: bold;} </style> </svg> It should be displayed in the browser.
How can I use clip-path to achieve a text effect on my SVG in the browser?
In your example (above) you may remove the wrapping group around the 2 paths and the text and it would work. <clipPath id="text-path"> <!-- <g clip-rule="evenodd" fill-rule="evenodd"> --> <path d="M0 0H14V2H2V13H22V10H24V13H32V27H24V32H0V0ZM2 30V27H22V30H2Z"/> <path d="M16 0L24 8H16V0Z"/> <text x="7.5" y="23">PDF</text> <!-- </g> --> </clipPath> However you won't see the text. If what you need is to see the text as a hole in the shapes you will need a mask where the text is black and the 2 paths are white. In the case of the mask you can use a group if you think you need it. <svg xmlns="http://www.w3.org/2000/svg" width="32" height="32" viewBox="0 0 32 32" style=" width: 200px; height: 200px; background:silver"> <mask id="m"> <g fill="white"> <path d="M0 0H14V2H2V13H22V10H24V13H32V27H24V32H0V0ZM2 30V27H22V30H2Z" /> <path d="M16 0L24 8H16V0Z" /> </g> <text x="7.5" y="23" fill="black">PDF</text> </mask> <rect width="32" height="32" mask="url(#m)" fill="red" /> <style> text { dominant-baseline: hanging; font-size: 8px; font-weight: bold; } </style> </svg>
76390686
76391032
<template> <v-list-item v-for="(category, i) in categories" :key="i"> <v-item-group multiple @update:model-value="selectedChanged(category)"> <v-item></v-item> </v-item-group> </v-list-item> </template> <script> function selectedChanged(category) { return function(items) { console.log(`select ${items} from ${category}`); } } </script> I hope that, in the function selectedChanged, I can know which category was selected. But it doesn't work: Vue just calls selectedChanged with the parameter category. The reason I want to do this is that, if I define selectedChanged as follows: function selectedChanged(items) { console.log(items); } I don't know which category was selected. How do I implement selectedChanged so that I can know which category was selected?
Vue.js: is it possible to pass a function returned from another function to an event handler?
Like Estus said in the comments, it looks like what you want can probably be achieved by explicitly passing $event as one of your event handler arguments. (Here is a link to the relevant Vue 3 documentation on $event.) The code would look something like this: <template> <v-list-item v-for="(category, i) in categories" :key="i"> <v-item-group multiple @update:model-value="selectedChanged(category, $event)"> <v-item></v-item> </v-item-group> </v-list-item> </template> <script> function selectedChanged(category, items) { console.log(`select ${items} from ${category}`); } </script> Of course, this solution assumes that the items you're looking for are emitted by the update:model-value call in (what I assume to be) Vuetify. You might need to do some destructuring or refactoring to get precisely what you want.
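If you prefer to keep the curried style from the question, it should also work to invoke the returned handler explicitly in the template (an assumption based on Vue's inline-handler semantics, not from the original answer):

<!-- selectedChanged(category) returns a function, which is then called with the event payload -->
<v-item-group multiple @update:model-value="selectedChanged(category)($event)">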
76389651
76389684
I am trying to implement Job Shop Scheduling with CP-SAT, but I have more than 20000 tasks to schedule and finding the optimal solution would take too much time. I'm using the solver time limit, but sometimes it gives me a feasible solution in the allotted time and sometimes not. Could anyone show how to make the solver stop at the first feasible solution? https://developers.google.com/optimization/scheduling/job_shop I know that I should use SolveWithSolutionCallback, but I don't know how.
OR-TOOLS Job Shop Scheduling - stop when find first feasible solution
Set the parameter stop_after_first_solution to true. See the definition.
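In the Python API that looks like the following (a minimal sketch; model is assumed to be an already-built cp_model.CpModel):

from ortools.sat.python import cp_model

solver = cp_model.CpSolver()
solver.parameters.stop_after_first_solution = True  # stop at the first feasible solution
status = solver.Solve(model)  # typically returns FEASIBLE rather than OPTIMAL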
76387879
76388302
Using JSONata, is it possible to exclude certain fields that are nested in a deep structure without using object construction? For example, with the following object { "collection": [ { "id": "ABC", "learningunit": { "metadata": { "show": true, "unitType": { "code": "U", "value": "Unit" } } } }, { "id": "UYE", "learningunit": { "metadata": { "show": false, "unitType": { "code": "C", "value": "COURSE" } } } } ] } can we exclude the fields "show" and "value" in order to get the following result. { "collection": [ { "id": "ABC", "learningunit": { "metadata": { "unitType": { "code": "U" } } } }, { "id": "UYE", "learningunit": { "metadata": { "unitType": { "code": "C" } } } } ] } FYI, the following object construction expression does the job, but it is cumbersome to write if the object is complex. {"collection":collection. { "id": id, "learningunit": learningunit. { "metadata": metadata. { "unitType": unitType. { "code": code } } } } }
Is it possible to exclude certain fields using JSONATA?
You can make use of the transform operator and remove all 'value' and 'show' fields from the nested structure: $$ ~> | ** | {}, ['show', 'value'] | See it on the live Stedi playground: https://stedi.link/Usc1tpg Note that if you need to clear those on a specific path only, you can also do it more surgically: $$ ~> | *.learningunit.metadata | {}, ['show'] | ~> | *.learningunit.metadata.unitType | {}, ['value'] | Playground: https://stedi.link/Bh8cUiM