Columns: QuestionId, AnswerId, QuestionBody, QuestionTitle, AnswerBody
76381436
76381929
I am using ST_Intersects to check if two polygons intersect. The relevant part of my query is: SELECT entity_number FROM coordinates WHERE ST_INTERSECTS($1, location) It works well to determine if one polygon crosses the other's surface: I expected ST_Intersects to return false when two polygons share sides, but it does not: I read about other methods like ST_Covers, ST_Contains, ST_ContainsProperly, ST_Within, ST_DWithin, but I am not sure which one suits my needs. Is there any method that allows two polygons to share sides?
Is there an ST_Intersects alternative that allows two (or more) polygons to share sides?
You want ST_Overlaps: Returns TRUE if geometry A and B "spatially overlap". Two geometries overlap if they have the same dimension, each has at least one point not shared by the other (or equivalently neither covers the other), and the intersection of their interiors has the same dimension. The overlaps relationship is symmetrical.
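For reference, a minimal sketch of how the query from the question could look with ST_Overlaps instead (assuming the same coordinates table, location geometry column and $1 parameter):

SELECT entity_number
FROM coordinates
WHERE ST_Overlaps($1, location);
-- polygons that merely share an edge are not reported; only genuine interior overlap returns true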
76385016
76385223
I'm using React Router v6 and following are my routes: const router = createBrowserRouter([ { path: '/', element: <App />, errorElement: <ErrorPage />, children: [ { index: true, element: <HomePage />, }, { path: '/sign-up', element: <SignUpPage />, }, { path: '/log-in', element: <LogInPage />, }, ], }, ]); const root = ReactDOM.createRoot( document.getElementById('root') as HTMLElement, ); root.render( <React.StrictMode> <RouterProvider router={router} /> </React.StrictMode>, ); The App component contains my app's layout and outputs the route elements using the Outlet component. But now if there's an error that bubbles up to the root route, then the ErrorPage gets displayed as expected, but it doesn't make use of the layout from App... So, how can I reuse my layout from App when the error page gets displayed?
React Router - How can I reuse my layout for the errorElement in the root route?
When there's an error it's an either/or scenario: either conditions are fine and the App component is rendered, or there's an error condition and the ErrorPage component is rendered. What you can do is abstract the layout portion of the App component into its own layout component that renders either a passed children prop or the Outlet component for the nested route, and render it in App and also wrap the ErrorPage component. Example: const AppLayout = ({ children }) => ( ... {children ?? <Outlet />} ... ); const App = () => ( ... <AppLayout /> ... ); const router = createBrowserRouter([ { path: "/", element: <App /> /* uses Outlet */, errorElement: ( <AppLayout>{/* uses children */} <ErrorPage /> </AppLayout> ), children: [ { index: true, element: <HomePage /> }, { path: "/sign-up", element: <SignUpPage /> }, { path: "/log-in", element: <LogInPage /> } ] } ]);
76381457
76381967
I'm learning flutter and I have made an app that looks like this: I'm facing a problem as to how to fix the container fixed on a particular spot on the screen like it has to be aligned to the top center. Here's the problem I'm facing: Here's the code: class Program7 extends StatefulWidget { const Program7({super.key}); @override State<Program7> createState() => _Program7State(); } class _Program7State extends State<Program7> { double cHeightAndWidth = 300; @override Widget build(BuildContext context) { return SafeArea( child: Column( mainAxisAlignment: MainAxisAlignment.spaceAround, children: [ Container( height: cHeightAndWidth, width: cHeightAndWidth, decoration: BoxDecoration( color: Colors.purple, ), ), Column( children: [ //A bunch of rows of buttons, ], ), ], ), ); } } P.S.: I already tried to fix the container to the top center of another container using align but the purple color somehow bleeds out into the bigger container.
How to fix a container at a particular spot on the screen
The issue is using MainAxisAlignment.spaceAround. It distributes the free space, putting half before and half after each child. You can use a fixed gap above the top Container instead. return SafeArea( child: Column( children: [ SizedBox(height: 50), Container( height: cHeightAndWidth, width: cHeightAndWidth, decoration: BoxDecoration( color: Colors.purple, ), ), Spacer(), /* or other widget */ Column( children: [ /* A bunch of rows of buttons */ ], ), ], ), );
76383196
76383405
Any particular reason about this isn't matching the element with that class? I have checked a million times and can't see what is that I'm doing wrong. $('.lnk-folder').click(function(e) { e.preventDefault(); var header = $(this).parent('thead').find('.folder-header'); console.log($(header)); }); <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <thead> <tr> <th colspan="10" style="padding:0px; font-size:120%;background-color:#ff0000;"> <div style="float:left;min-width:20%;"> <a style="color: #000" class="lnk-folder folder-close" data-hash="" href="#"> <i class='fa fa-folder-open'></i> Folder 1 </a> </div> <div style="float:left;height:26px;padding-left:10px;"> <a href="#"><i class='fas fa-file-upload tooltip' style="color:#fff;"><span class="tooltiptext_m">New</span></i></a> </div> </th> </tr> <tr class="folder-header"> <th colspan="2" style='background-color:#0c343d;vertical-align:middle;'> Name </th> <th style='width:7%;background-color:#0c343d;vertical-align:middle;'> Code </th> <th style='width:30%;background-color:#0c343d;vertical-align:middle;'> Act</th> <th style='width:7%;background-color:#0c343d;vertical-align:middle;'> Version</th> </tr> </thead>
Find element by class under the same parent
parent() is your problem. It looks up the DOM exactly one level, to the parent element, but you need to go higher than that. To do so, use closest() $('.lnk-folder').click(function(e) { e.preventDefault(); var header = $(this).closest('thead').find('.folder-header'); console.log($(header)); }); <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <table> <thead> <tr> <th colspan="10" style="padding:0px; font-size:120%;background-color:#ff0000;"> <div style="float:left;min-width:20%;"> <a style="color: #000" class="lnk-folder folder-close" data-hash="" href="#"> <i class='fa fa-folder-open'></i> Folder 1 </a> </div> <div style="float:left;height:26px;padding-left:10px;"> <a href="#"><i class='fas fa-file-upload tooltip' style="color:#fff;"><span class="tooltiptext_m">New</span></i></a> </div> </th> </tr> <tr class="folder-header"> <th colspan="2" style='background-color:#0c343d;vertical-align:middle;'> Name </th> <th style='width:7%;background-color:#0c343d;vertical-align:middle;'> Code </th> <th style='width:30%;background-color:#0c343d;vertical-align:middle;'> Act</th> <th style='width:7%;background-color:#0c343d;vertical-align:middle;'> Version</th> </tr> </thead> </table>
76384931
76385224
I have 2 data frames df1 +--------------------+---+--------------------+--------------------+ | ID |B |C | D | +--------------------+---+--------------------+--------------------+ | 1|1.0| 1.0| 1.0| | 2|2.0| 2.0| 2.0| | 3|3.0| 3.0| 3.0| | 4|4.0| 4.0| 4.0| +--------------------+---+--------------------+--------------------+ df2 +--------------------+---+--------------------+--------------------+ | ID |B |C | D | +--------------------+---+--------------------+--------------------+ | 1|100| 1.0| 100| +--------------------+---+--------------------+--------------------+ If ID in df2 matches an ID in df1, I want to replace the row in df1 with the updated values in df2. So the new df1 looks like: df1 +--------------------+---+--------------------+--------------------+ | ID |B |C | D | +--------------------+---+--------------------+--------------------+ | 1|100| 1.0| 100| | 2|2.0| 2.0| 2.0| | 3|3.0| 3.0| 3.0| | 4|4.0| 4.0| 4.0| +--------------------+---+--------------------+--------------------+ I've been trying to figure this out with union and join and just not having any luck yet. I first created a new dataframe based on filtering for the ID of df1 and that works and I called that dataframe matchedDF that looks like: matchedDF (dataframe based on finding a match of ID 1 in df1) +--------------------+---+--------------------+--------------------+ | ID |B |C | D | +--------------------+---+--------------------+--------------------+ | 1|1.0| 1.0| 1.0| +--------------------+---+--------------------+--------------------+ But I don't know if I just want to delete the original ID 1 in df1 and add the new matchedDF or do I somehow want to update the original ID 1 with the matchedDf? Or am I approaching this all wrong? Thanks
How to replace a Spark dataframe row with another Spark dataframe's row using Java
To stay computationally efficient, it's always a good idea to avoid joins/shuffles where possible. This looks like a case where it is possible to avoid joining, have a look at the following code (it is in Scala, but the principles remain the same): // Constructing the 2 dfs val df = Seq( (1, 1.0, 1.0, 1.0), (2, 2.0, 2.0, 2.0), (3, 3.0, 3.0, 3.0), (4, 4.0, 4.0, 4.0) ).toDF("ID", "B", "C", "D") val df2 = Seq( (1, 100, 1.0, 100), (2, 100, 1.0, 100) ).toDF("ID", "B", "C", "D") // Collecting the IDs to be updated into a single Array // IMPORTANT: we make the assumption that this array is not large (in your // example there is only 1 row here, so the array only has 1 element which is // totally fine) val newIds = df2.select("ID").collect.map(_.getInt(0)) // Removing the original rows with the unwanted IDs and unioning the result with // the new rows val output = df .filter(not(col("ID").isin(newIds: _*))) .union(df2) scala> output.show +---+-----+---+-----+ | ID| B| C| D| +---+-----+---+-----+ | 3| 3.0|3.0| 3.0| | 4| 4.0|4.0| 4.0| | 1|100.0|1.0|100.0| | 2|100.0|1.0|100.0| +---+-----+---+-----+ So basically, if we can make the assumption that df2 (with the new values) is small like in your example, you can do something like the following: collect the ID values into a single (undistributed) Array. From your example it seems like this is OK. If the amount of new rows is really large this might not be the best approach filter the original df using the isin method of a column and negating using not (basically removing the rows with the new IDs) union the filtered df and df2, resulting in the rows being updated WITHOUT any expensive operation like a shuffle
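Since the question asks for Java, here is a hypothetical sketch of the same filter-and-union approach with the Spark Java API; df1 and df2 are assumed to be Dataset<Row> objects with an integer ID column, and df2 is assumed to be small enough that collecting its IDs to the driver is cheap:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import java.util.List;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.not;

// Collect the IDs that need replacing (assumed to be a small set)
List<Row> idRows = df2.select("ID").collectAsList();
Object[] newIds = idRows.stream().map(r -> (Object) r.getInt(0)).toArray();

// Drop the old rows with those IDs from df1, then union in the new rows from df2
Dataset<Row> result = df1
        .filter(not(col("ID").isin(newIds)))
        .union(df2);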
76381864
76382001
I have made this code to find the distance between stations but in the output, there is only one value. Can you find the error? df <- data.frame( station = rep(c("A", "B", "C", "D"), each = 20), temperature = rnorm(80), latitude = c(40.7128, 34.0522, 41.8781, 39.9526), longitude = c(-74.0060, -118.2437, -87.6298, -75.1652) ) stations <- unique(df$station) my_points <- matrix(NA, nrow = length(unique(df$station)), ncol = length(unique(df$station))) # Loop through each station combination for (i in 1:length(stations)) { for (j in 1:length(stations)) { # Get temperatures for the two stations lat1 <- df$latitude[df$station == stations[i]] lon1 <- df$longitude[df$station == stations[i]] lat2 <- df$latitude[df$station == stations[j]] lon2 <- df$longitude[df$station == stations[j]] my_points[i, j] <- as.vector(dist(matrix(c(lon1,lon2,lat1,lat2), nrow = 2))) } } distance_df <- as.data.frame(my_points)
Make a loop to find the distance between stations in R
There are two issues here: Your input data frame might not look the way you expect it to - the latitude and longitude columns are recycled so you have multiple different coordinates for the same station. Try adding rep() in the lat and long columns as well as station. In your code lat1 <- df$latitude[df$station == stations[i]] returns a vector, because there are multiple matches. I think you're expecting a single value. Use only the first matching element (since they are now all the same elements in the vector after adding rep() as above): df <- data.frame( station = rep(c("A", "B", "C", "D"), each = 20), temperature = rnorm(80), latitude = rep(c(40.7128, 34.0522, 41.8781, 39.9526), each = 20), longitude = rep(c(-74.0060, -118.2437, -87.6298, -75.1652), each = 20) ) stations <- unique(df$station) my_points <- matrix(NA, nrow = length(unique(df$station)), ncol = length(unique(df$station))) # Loop through each station combination for (i in 1:length(stations)) { for (j in 1:length(stations)) { # Get temperatures for the two stations lat1 <- df$latitude[df$station == stations[i]][1] lon1 <- df$longitude[df$station == stations[i]][1] lat2 <- df$latitude[df$station == stations[j]][1] lon2 <- df$longitude[df$station == stations[j]][1] my_points[i, j] <- as.vector(dist(matrix(c(lon1,lon2,lat1,lat2), nrow = 2))) } } distance_df <- as.data.frame(my_points) This gives: V1 V2 V3 V4 1 0.000000 44.73631 13.67355 1.386235 2 44.736313 0.00000 31.59835 43.480707 3 13.673546 31.59835 0.00000 12.612446 4 1.386235 43.48071 12.61245 0.000000 A slightly better way of finding unique stations: unique(df[, c("station", "latitude", "longitude")]) You can then loop over those instead: # Loop through each station combination for (i in 1:length(stations)) { for (j in 1:length(stations)) { # Get temperatures for the two stations lat1 <- unique_df$latitude[unique_df$station == stations[i]] lon1 <- unique_df$longitude[unique_df$station == stations[i]] lat2 <- unique_df$latitude[unique_df$station == stations[j]] lon2 <- unique_df$longitude[unique_df$station == stations[j]] my_points[i, j] <- as.vector(dist(matrix(c(lon1,lon2,lat1,lat2), nrow = 2))) } }
76381890
76382014
I'm developing a Shiny app with several features. I added a button to download a single PDF file that contains many plots. I want to save those plots on individual pages, but I want to choose the size of each PDF page. Is that possible? This is the code that I have so far: output$exportall<-downloadHandler( filename="Allplots.pdf", content=function(file){ withProgress(message = 'Exporting', min=0,max=1, { pdf(file,width=8,height=11) print(plot1()) print(histogram()) print(plots2()) print(marrangeGrob(woodsbytimepoint(), nrow=2, ncol=1)) print(digestion()) print(map()) print(marrangeGrob(allplots(), nrow=4, ncol=2, top=NULL)) dev.off() }) } ) The code works fine and exports all the plots that I want. However, all pages in the PDF file are 8x11. Is there a way to specify the size of each page? For example, I want the first plot to be 7x7 and all the others 8x11. Any ideas?
save multiple pdf pages with different sizes in Shiny R
Perhaps the simplest is to create separate PDFs (sized appropriately) and combine them with qpdf::pdf_combine. file <- "file.pdf" pdf(paste0(file, ".8x11"), width=8, height=11) plot(disp ~ mpg, data = mtcars) gg <- ggplot(mtcars, aes(disp, mpg)) + geom_point() print(gg) dev.off() pdf(paste0(file, ".7x7"), width=7, height=7) print(gg) # or anything else dev.off() qpdf::pdf_combine(paste0(file, c(".8x11", ".7x7")), file) file.remove(paste0(file, c(".8x11", ".7x7"))) The resulting file.pdf pages: If your sizes are not always in order (e.g., 8x11, 7x7, 8x11), you can either: create three PDF files (would need an adjusted file name convention) and concatenate in order, or create two PDF files (by dimensions), then also use qpdf::pdf_subset ... though since this creates new PDF files that you would then need to include in pdf_combine, it hardly seems the most efficient method. I cannot test this, but I think this means your code should be output$exportall<-downloadHandler( filename="Allplots.pdf", content=function(file){ withProgress(message = 'Exporting', min=0,max=1, { pdf(paste0(file, ".7x7"), width=7, height=7) print(plot1()) dev.off() pdf(paste0(file, ".8x11"), width=8, height=11) print(histogram()) print(plots2()) print(marrangeGrob(woodsbytimepoint(), nrow=2, ncol=1)) print(digestion()) print(map()) print(marrangeGrob(allplots(), nrow=4, ncol=2, top=NULL)) dev.off() qpdf::pdf_combine(paste0(file, c(".7x7", ".8x11")), output=file) }) } )
76385216
76385238
I have written a little class which reads a text file and which has a method for printing the text (file.output()). The first call worked, but on the second call of the method nothing happens. I do not understand why, since I assume that the for loop does not change anything. class Datei(): def __init__(self, filename): self.fileobject = open(filename) def output(self): for line in self.fileobject: print(line.rstrip()) def end(self): self.fileobject.close() file = Datei("yellow_snow.txt") file.output() print("second try") file.output() file.end() I expected the text of the text file to be printed twice, but it is only printed once.
Why can I only print the text of a text file once?
When you read a file, you move a pointer through it, and it's now at the end - you can .seek(0) to get back to the start (or other positions, 0 is where you started from, which is the beginning if you're not in append mode) with open(path) as fh: print(fh.tell()) # start of file print(fh.read()) # get everything and display it print(fh.tell()) # end of file fh.seek(0) # go back to the beginning print(fh.tell()) # start of file print(fh.read()) More detail in Python Documentation 7.2.1. Methods of File Objects
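Applied to the class from the question, a minimal sketch of one possible fix is to rewind the file object at the start of output() (the rest of the Datei class is assumed unchanged):

class Datei():
    def __init__(self, filename):
        self.fileobject = open(filename)

    def output(self):
        self.fileobject.seek(0)  # rewind to the start before each print run
        for line in self.fileobject:
            print(line.rstrip())

    def end(self):
        self.fileobject.close()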
76381693
76382030
How do I correct my code to be able to order its elements according to which has the canonical vector with a value equal to 1.0 in the element closest to the beginning of its sublists (ignoring the first sublist, which is the one with the titles, although this will also change position according to the position of element 1.0 in the other remaining ones), thus remaining? matrix = [['B', 'X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2'], [8, 2.0, 1.0, -1.0, 0, 0, 1.0, 0], [2, 1.0, 1.0, 0, 1.0, 0, 0, 0], [8, 1.0, 2.0, 0, 0, -1.0, 0, 1.0]] matrix_aux = [['X4', 'U1', 'U2'], [0, 1.0, 0], [1.0, 0, 0], [0, 0, 1.0]] #Extract the first title sublist titles = matrix_aux.pop(0) #Create a list of tuples, and sort it tuple_list = [(sublist.index(1.0), sublist) for sublist in matrix_aux] sorted_tuples = sorted(tuple_list, key=lambda x: x[0]) #Rebuild the sorted array matrix_aux_ord = [[titles[i] for i in range(len(titles))]] + [sublist for _, sublist in sorted_tuples] print(matrix_aux_ord) for row in matrix_aux_ord: print(row) #print in matrix format the problem now with my code is that it forgets to sort the row of titles or headers ['X4', 'U1', 'U2'], incorrectly printing this matrix [['X4', 'U1', 'U2'], [1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]] instead of this that if it maintains consistency [['U1', 'X4', 'U2'], [1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]] Then, having these 2 matrices, build the new matrix called new_matrix, in which I would add a column in front of the matrix, that is, I would add an element to each of the sublists that make up the rows of matrix, to the first sublist of the matrix called matrix add an 'X' before it as the first element, and to the rest of the sublists of the matrix called matrix add as the first element in an ordered manner the elements of the first sublist of the matrix called matrix_aux_ord matrix = [['B', 'X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2'], [8, 2.0, 1.0, -1.0, 0, 0, 1.0, 0], [2, 1.0, 1.0, 0, 1.0, 0, 0, 0], [8, 1.0, 2.0, 0, 0, -1.0, 0, 1.0]] #If the previous code worked, it would get this array sorted like this... matrix_aux_ord = [['U1', 'X4', 'U2'], [1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]] #Add the column title or header 'X' to the front of the first sublist of matrix matrix[0].insert(0, 'X') So the resulting final matrix, the correct output, called as new_matrix, would look like this: new_matrix = [['X', 'B', 'X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2'], ['U1', 8, 2.0, 1.0, -1.0, 0, 0, 1.0, 0], ['X4', 2, 1.0, 1.0, 0, 1.0, 0, 0, 0], ['U2', 8, 1.0, 2.0, 0, 0, -1.0, 0, 1.0]] What should I do to get the matrix_aux_ord correctly, and with it to be able to get the matrix new_matrix which basically consists of a way to combine amber matrices, matrix and matrix_aux_ord?
How to sort array elements based on the closest occurrence of a sublist with a numeric value of 1.0? And then combine this sorted matrix with another
You need to sort the titles together with the other lists. You can do it with zip matrix = [['B', 'X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2'], [8, 2.0, 1.0, -1.0, 0, 0, 1.0, 0], [2, 1.0, 1.0, 0, 1.0, 0, 0, 0], [8, 1.0, 2.0, 0, 0, -1.0, 0, 1.0]] matrix_aux = [['X4', 'U1', 'U2'], [0, 1.0, 0], [1.0, 0, 0], [0, 0, 1.0]] matrix_aux_ord = list(zip(*sorted(zip(*matrix_aux), key=lambda x: x[1:].index(1.0)))) print(matrix_aux_ord) # [('U1', 'X4', 'U2'), (1.0, 0, 0), (0, 1.0, 0), (0, 0, 1.0)] And to add the titles to matrix use list comprehensions titles = ['X'] + list(matrix_aux_ord[0]) new_matrix = [[titles[i]] + matrix[i] for i in range(len(titles))] print(new_matrix) # [['X', 'B', 'X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2'], ['U1', 8, 2.0, 1.0, -1.0, 0, 0, 1.0, 0], ['X4', 2, 1.0, 1.0, 0, 1.0, 0, 0, 0], ['U2', 8, 1.0, 2.0, 0, 0, -1.0, 0, 1.0]]
76383130
76383421
I am building a chat window. We are currently in the migration phase from Objective-C to SwiftUI and we do support a minimum of iOS 13+. To get behaviors of scroll view where I want to point to the bottom always as default and should be able to scroll up and down seamlessly. Here only problem is here scroll only works when i drag from bubble of chat from other places it doesn't works. I have debug quite long and not able to find the issue. Reverse scroll view code which I got from here https://www.process-one.net/blog/writing-a-custom-scroll-view-with-swiftui-in-a-chat-application/ struct ReverseScrollView<Content>: View where Content: View { @State private var contentHeight: CGFloat = CGFloat.zero @State private var scrollOffset: CGFloat = CGFloat.zero @State private var currentOffset: CGFloat = CGFloat.zero var content: () -> Content // Calculate content offset func offset(outerheight: CGFloat, innerheight: CGFloat) -> CGFloat { let totalOffset = currentOffset + scrollOffset return -((innerheight/2 - outerheight/2) - totalOffset) } var body: some View { GeometryReader { outerGeometry in // Render the content // ... and set its sizing inside the parent self.content() .modifier(ViewHeightKey()) .onPreferenceChange(ViewHeightKey.self) { self.contentHeight = $0 } .frame(height: outerGeometry.size.height) .offset(y: self.offset(outerheight: outerGeometry.size.height, innerheight: self.contentHeight)) .clipped() .animation(.easeInOut) .gesture( DragGesture() .onChanged({ self.onDragChanged($0) }) .onEnded({ self.onDragEnded($0, outerHeight: outerGeometry.size.height)})) } } func onDragChanged(_ value: DragGesture.Value) { // Update rendered offset self.scrollOffset = (value.location.y - value.startLocation.y) } func onDragEnded(_ value: DragGesture.Value, outerHeight: CGFloat) { // Update view to target position based on drag position let scrollOffset = value.location.y - value.startLocation.y let topLimit = self.contentHeight - outerHeight // Negative topLimit => Content is smaller than screen size. 
We reset the scroll position on drag end: if topLimit < 0 { self.currentOffset = 0 } else { // We cannot pass bottom limit (negative scroll) if self.currentOffset + scrollOffset < 0 { self.currentOffset = 0 } else if self.currentOffset + scrollOffset > topLimit { self.currentOffset = topLimit } else { self.currentOffset += scrollOffset } } self.scrollOffset = 0 } } struct ViewHeightKey: PreferenceKey { static var defaultValue: CGFloat { 0 } static func reduce(value: inout Value, nextValue: () -> Value) { value = value + nextValue() } } extension ViewHeightKey: ViewModifier { func body(content: Content) -> some View { return content.background(GeometryReader { proxy in Color.clear.preference(key: Self.self, value: proxy.size.height) }) } } Chat window ReverseScrollView { VStack{ HStack { VStack(spacing: 5){ Text("message.text") .padding(.vertical, 8) .padding(.horizontal) .background(Color(.systemGray5)) .foregroundColor(.primary) .clipShape(ChatBubble(isFromCurrentUser: false)) .frame(maxWidth: .infinity, alignment: .leading) .padding(.horizontal) .lineLimit(nil) // Allow unlimited lines .lineSpacing(4) // Adjust line spacing as desired .fixedSize(horizontal: false, vertical: true) // Allow vertical expansion Text("ormatTime(message.timeUtc)") .font(.caption) .foregroundColor(.secondary) .background(Color.red) .frame(maxWidth: .infinity, alignment: .leading) .padding(.horizontal, 5) } .background(Color.blue) Spacer() } ForEach(Array(viewModel.chats.indices), id: \.self){ index in let message = viewModel.chats[index] VStack(alignment: .leading, spacing: 5) { // Chat bubble view for received messages if(message.isIncoming){ HStack { VStack(spacing: 5){ Text(message.text) .padding(.vertical, 8) .padding(.horizontal) .background(Color(.systemGray5)) .foregroundColor(.primary) .clipShape(ChatBubble(isFromCurrentUser: false)) .frame(maxWidth: .infinity, alignment: .leading) .padding(.horizontal) .lineLimit(nil) // Allow unlimited lines .lineSpacing(4) // Adjust line spacing as desired .fixedSize(horizontal: false, vertical: true) // Allow vertical expansion .frame(maxWidth: .infinity, alignment: .leading) Text(formatTime(message.timeUtc)) .font(.caption) .foregroundColor(.secondary) .frame(maxWidth: .infinity, alignment: .leading) .padding(.horizontal, 5) } Spacer() } }else{ HStack { Spacer() VStack(spacing: 5){ Text(message.text) .padding(.vertical, 8) .padding(.horizontal) .background(Color(.systemBlue)) .foregroundColor(.white) .clipShape(ChatBubble(isFromCurrentUser: true)) .padding(.horizontal) .lineLimit(nil) // Allow unlimited lines .lineSpacing(4) // Adjust line spacing as desired .fixedSize(horizontal: false, vertical: true) // Allow vertical expansion Text(formatTime(message.timeUtc)) .font(.caption) .foregroundColor(.secondary) .frame(maxWidth: .infinity, alignment: .leading) .padding(.horizontal, 5) } .frame(maxWidth: .infinity, alignment: .trailing) } } } } if(viewModel.messageSending) { VStack(spacing: 5){ HStack { Spacer() Text(sendingText) .padding(.vertical, 8) .padding(.horizontal) .background(Color(.systemBlue)) .foregroundColor(.white) .clipShape(ChatBubble(isFromCurrentUser: true)) .padding(.horizontal) } HStack { Spacer() ChatBubbleAnimationView() .padding(.trailing, 8) } } .padding(.bottom, 20) .onDisappear(){ sendingText = "" messageText = "" } } } } Chat bubble wrapper struct ChatBubble: Shape { var isFromCurrentUser: Bool func path(in rect: CGRect) -> Path { let path = UIBezierPath(roundedRect: rect, byRoundingCorners: isFromCurrentUser ? 
[.topLeft, .bottomLeft, .bottomRight] : [.topRight, .bottomLeft, .bottomRight], cornerRadii: CGSize(width: 12, height: 12)) return Path(path.cgPath) } } Please let me know something other information need. I am looking for suggestions to get the behaviours keeping in mind it should support iOS 13+ or any help to get above code fixed.
Custom Reverse Scroll view in SwiftUI
One option is to just flip the built-in ScrollView upside down. import SwiftUI struct ReverseScroll: View { var body: some View { ScrollView{ ForEach(ChatMessage.samples) { message in HStack { if message.isCurrent { Spacer() } Text(message.message) .padding() .background { RoundedRectangle(cornerRadius: 10) .fill(message.isCurrent ? Color.blue : Color.gray) } if !message.isCurrent { Spacer() } } }.rotationEffect(.degrees(180)) //Flip View upside down oldest above newest below. }.rotationEffect(.degrees(180)) //Reverse so it works like a chat message } } struct ReverseScroll_Previews: PreviewProvider { static var previews: some View { ReverseScroll() } } struct ChatMessage: Identifiable, Equatable{ let id: UUID = .init() var message: String var isCurrent: Bool static let samples: [ChatMessage] = (0...25).map { n in .init(message: n.description + UUID().uuidString, isCurrent: Bool.random()) } } The scroll indicators show on the left with this but can be hidden in iOS 16+ with .scrollIndicators(.hidden) If you decide to support iOS 14+ you can use ScrollViewReader to scroll to the newest message. struct ReverseScroll: View { @State private var messages = ChatMessage.samples var body: some View { VStack{ ScrollViewReader { proxy in ScrollView{ ForEach(messages) { message in HStack { if message.isCurrent { Spacer() } Text(message.message) .padding() .background { RoundedRectangle(cornerRadius: 10) .fill(message.isCurrent ? Color.blue : Color.gray) } if !message.isCurrent { Spacer() } } .id(message.id) //Set the ID }.rotationEffect(.degrees(180)) }.rotationEffect(.degrees(180)) .onChange(of: messages.count) { newValue in proxy.scrollTo(messages.last?.id) //When the count changes scroll to latest message } } Button("add") { messages.append( ChatMessage(message: Date().description, isCurrent: Bool.random())) } } } }
76384846
76385239
I have a React website where I have 2 toggles for different kinds of cards - one of them is Live markets (this type has a timer component). Here is the problem- When I switch to classifieds and I switch back to live markets - Auction timer for the first card becomes NaN. Note: this only happens to the first card, the other timers are fine. I have a CardsLayout component, which send a request to the server for data when the above toggle is changed. And if it is in Live Markets tab, then the CardsLayout component maps each object to an AuctionCard component which has a Timer component inside it. Here is the code for the Timer component- import { useState, useEffect } from 'react'; export default function Timer({ id, endTime}) { const [remainingTime, setRemainingTime] = useState(getRemainingTime()); useEffect(() => { const interval = setInterval(() => { setRemainingTime(getRemainingTime()); }, 1000); return () => clearInterval(interval); }, []); function getRemainingTime() { const now = new Date(); const end = new Date(endTime); const diff = end.getTime() - now.getTime(); const days = Math.floor(diff / (1000 * 60 * 60 * 24)); const hours = Math.floor((diff / (1000 * 60 * 60)) % 24); const minutes = Math.floor((diff / (1000 * 60)) % 60); const seconds = Math.floor((diff / 1000) % 60); return { days, hours, minutes, seconds }; } console.log('endtime-',id,endTime) console.log('remtime-',id,remainingTime.seconds) return ( <div className={remainingTime.seconds < 0 ? 'timer-ended' : 'timer'}> {remainingTime.seconds < 0 ? ( "Auction over" ) : ( <> Auction time remaining: {remainingTime.days} Days {remainingTime.hours}: {remainingTime.minutes}:{remainingTime.seconds} </> )} </div> ); } I also have 2 console statements. The values are getting printed every second. Here is whats getting printed for the Timer with NaN- endtime- 4 2023-06-09T20:30:00.000Z remtime- 4 NaN
The first timer in a react component list is getting value NaN
Since it works after you add id and endTime as dependencies to the useEffect (as mentioned in the comments of the OP), it seems that the issue is that the first render you do of the first Time is done without/or with a wrong endTime so it end up displaying NaN. Subsequent renders, I assume after fetching the data from somewhere, provide a valid endTime value for that property. Initially the change in that prop would not alter the functionality of the ongoing interval, since the getRemainingTime would refer to the initial value of endTime. There are a few solutions to this problem. Do not render the Timer component until after you have valid data to provide to it. This need to be handled at the component using the Timer and not inside it. Provide a key for the Timer component when using it (<Timer key={/*what ever you use for id will most likely work here too*/} id={..} endTime={..} /> that is unique and would change once you get the data from the remote location. Use correct dependencies for the useEffect. For 3. the more correct approach is not to add id and endTime to the useEffect but to use useCallback, with endTime for the getRemainingTime and then use that as dependency for the useEffect. const getRemainingTime = useCallback(function() { const now = new Date(); const end = new Date(endTime); const diff = end.getTime() - now.getTime(); const days = Math.floor(diff / (1000 * 60 * 60 * 24)); const hours = Math.floor((diff / (1000 * 60 * 60)) % 24); const minutes = Math.floor((diff / (1000 * 60)) % 60); const seconds = Math.floor((diff / 1000) % 60); return { days, hours, minutes, seconds }; }, [endTime]); useEffect(() => { const interval = setInterval(() => { setRemainingTime(getRemainingTime()); }, 1000); return () => clearInterval(interval); }, [getRemainingTime]);
76381953
76382049
Assuming the following data: df <- data.frame(a = 1:3, b = c(1, 2, 6), c = c(4, 6, NA), d = c(6, NA, NA)) a b c d 1 1 1 4 6 2 2 2 6 NA 3 3 6 NA NA And what I want is: a b c d 1 1 6 4 1 2 2 6 2 NA 3 3 6 NA NA I thought about some combination of across and rev, but my current attempts don't work.
Reverse the content order of several columns (ideally in tidyverse)
You can do the following: pivot_longer(df, -a) %>% filter(!is.na(value)) %>% mutate(value=rev(value), .by=a) %>% pivot_wider(names_from = name, values_from = value) Output: a b c d <int> <dbl> <dbl> <dbl> 1 1 6 4 1 2 2 6 2 NA 3 3 6 NA NA
76380645
76382058
I'm looking to create, using Bicep, diagnostic settings on a firewall in one location and save them to an Event Hub in another location. The two vnets are peered, but I am wondering if it is possible, based on this error message: Resource '/subscriptions/123/resourceGroups/ukw-rg/providers/Microsoft.Network/azureFirewalls/ukw-fw' is in region 'ukwest' and resource '/subscriptions/123/resourcegroups/uks-rg/providers/microsoft.eventhub/namespaces/uks-evhns' is in region 'uksouth'
Azure Diagnostic Logs saved to another location
You are correct. It isn't possible. https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal#destination-limitations "The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional." kind regards Alistair
76381970
76382106
My embed tag keeps downloading the video instead of displaying it. I have tried changing the file type of the tag, but it just downloads it in a different format. I want the tag to display the video. Here's my code below. <embed type="video/mp4" src="videos/ymcaHome.mp4" width="400" height="300">
How do I get my embed tag to display videos instead of downloading them?
You can do so by specifying the video URL or path as the src attribute value, like this: <embed src="your_video_file_url.mp4" type="video/mp4" width="640" height="360">
76382470
76383422
Can someone explain me why the following code fails for GCC 8.5 with NaNs? bool isfinite_sse42(float num) { return _mm_ucomilt_ss(_mm_set_ss(std::abs(num)), _mm_set_ss(std::numeric_limits<float>::infinity())) == 1; } My expectation for GCC 8.5 would be to return false. The Intel Intrinsics guide for _mm_ucomilt_ss says RETURN ( a[31:0] != NaN AND b[31:0] != NaN AND a[31:0] == b[31:0] ) ? 1 : 0 i.e., if either a or b is NaN it returns 0. On assembly level (Godbolt) one can see a ucomiss abs(x), Infinity followed by a setb. # GCC8.5 -O2 doesn't match documented intrinsic behaviour for NaN ucomiss xmm0, DWORD PTR .LC2[rip] setb al Interestingly newer GCCs and Clang swap the comparison from a < b to b > a and therefore use seta. But why does the code with setb returns true for NaN and why seta returns false for NaN?
What causes the different NaN behavior when compiling `_mm_ucomilt_ss` intrinsic?
GCC is buggy before GCC13, not implementing the documented semantics of the intrinsic for the NaN case which require either checking PF separately, or doing it as ucomiss Inf, abs so the unordered case sets CF the same way as abs < Inf. See https://www.felixcloutier.com/x86/ucomiss#operation or the nicer table in https://www.felixcloutier.com/x86/fcomi:fcomip:fucomi:fucomip . (All x86 scalar FP compares that set EFLAGS do it the same way, matching historical fcom / fstsw / sahf.) Comparison Results ZF PF CF left > right 0 0 0 left < right 0 0 1 left = right 1 0 0 Unordered 1 1 1 Notice that CF is set for both the left < right and unordered cases, but not for the other two cases. If you can arrange things such that you can check for > or >=, you don't need to setnp cl / and al, cl to rule out Unordered. This is what clang 16 and GCC 13 do to get correct results from ucomiss inf, abs / seta. GCC8.5 does the right thing if you write abs(x) < infinity, it's only the scalar intrinsic that it doesn't implement properly. (With plain scalar code, it uses comiss instead of ucomiss, the only difference being that it will update the FP environment with a #I FP-exception on QNaN as well as SNaN.) This requires a separate movss load instead of a memory source. But this does let GCC avoid the useless SSE4.1 insertps instruction that zeros the high 3 elements of XMM0, which ucomiss doesn't read anyway. Clang sees that and optimizes away that part of _mm_set_ss(num) but GCC doesn't. The lack of an efficient way to go from a scalar float to a __m128 with don't-care upper elements is a persistent problem in Intel's intrinsics API that only some compilers manage to optimize around. (How to merge a scalar into a vector without the compiler wasting an instruction zeroing upper elements? Design limitation in Intel's intrinsics?) A float is just the low element of a __m128.
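As an illustration of the "check > or >= instead" workaround described above, a hedged sketch that swaps the operands so the unordered NaN case naturally yields 0 (this assumes the goal is the same isfinite-style check as in the question):

#include <immintrin.h>
#include <cmath>
#include <limits>

// Compare as Inf > |num|: a NaN input makes the comparison unordered, which sets
// CF (and ZF/PF), so the seta-style "greater than" result is 0, as desired.
bool isfinite_sse42(float num) {
    return _mm_ucomigt_ss(_mm_set_ss(std::numeric_limits<float>::infinity()),
                          _mm_set_ss(std::fabs(num))) == 1;
}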
76385202
76385273
Why won't Binary Search find an element? I have one array with elements: BBBB, BBBB, CCCC. I want to find elements BBBB and BBBB. I want binary search to find two elements and it finds one. The output is "1" and it should be "2". import java.util.*; public class Test{ public static void main(String[] args) { ArrayList<String> bricks = new ArrayList<String>(List.of("BBBB","BBBB","CCCC")); ArrayList<String> bricksNeeded = new ArrayList<String>(List.of("BBBB","BBBB")); int nFound = 0; int index; for(String brickNeeded:bricksNeeded){ index = Collections.binarySearch(bricks, brickNeeded); if(index >= 0){ bricks.remove(bricks.get(index)); nFound ++; break; } } System.out.println(nFound); } } Output: 1 Expected output: 2
Why won't Binary search find an element in Java?
You have a break statement, so the loop stops after the first removal. As a result, nFound is only incremented once.
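A minimal sketch of the loop without the break (everything else from the question stays the same; Collections.binarySearch still requires the list to remain sorted, which it does here after each removal):

for (String brickNeeded : bricksNeeded) {
    index = Collections.binarySearch(bricks, brickNeeded);
    if (index >= 0) {
        bricks.remove(index);  // remove by index; no break, so the next needed brick is searched too
        nFound++;
    }
}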
76381948
76382120
I'm trying to make a selection of elements when I click on empty point and move pointer. In this example I'm expecting to get selection of two elements: I've tried range and selection, but not with the proper result. const mainDiv = document.createElement("div"); mainDiv.style.width = "500px"; mainDiv.style.height = "500px"; document.body.appendChild(mainDiv); const div1 = document.createElement("div"); div1.style.position = "absolute"; div1.style.top = `${50}px`; div1.style.left = `${50}px`; div1.style.width = "100px"; div1.style.height = "100px"; div1.style.background = "red"; mainDiv.appendChild(div1); const div2 = document.createElement("div"); div2.style.top = `${250}px`; div2.style.left = `${250}px`; div2.style.width = "100px"; div2.style.height = "100px"; div2.style.background = "green"; div2.style.position = "absolute"; mainDiv.appendChild(div2); mainDiv.onmousedown = function(event) { function onMouseMove(event) { //add divs to selection } mainDiv.addEventListener('mousemove', onMouseMove); mainDiv.onmouseup = function() { console.log("selected divs") } }
How to select multiple DIVs by mousedown>mousemove>mouseup (pure JS)
You can solve this by keeping an array of selected items and pushing items to it when the mouse moves over the items if the mouse is depressed. const mainDiv = document.createElement("div"); mainDiv.style.width = "500px"; mainDiv.style.height = "500px"; document.body.appendChild(mainDiv); const div1 = document.createElement("div"); div1.style.position = "absolute"; div1.style.top = `${50}px`; div1.style.left = `${50}px`; div1.style.width = "100px"; div1.style.height = "100px"; div1.style.background = "red"; mainDiv.appendChild(div1); const div2 = document.createElement("div"); div2.style.top = `${250}px`; div2.style.left = `${250}px`; div2.style.width = "100px"; div2.style.height = "100px"; div2.style.background = "green"; div2.style.position = "absolute"; mainDiv.appendChild(div2); !(() => { let selection = []; let selecting = false; function beginSelection(e) { selection = []; selecting = true; checkSelection(e); } function mouseMove(e) { checkSelection(e); } function mouseUp(e) { selecting = false; if (selection.length) { console.log("selection: ", selection); // access selection before reset selection = []; } else { // no selection } } function checkSelection(e) { if (!selecting) { return; // ignore } const selected = e.target.parentNode === mainDiv && e.target; if (selected && !selection.includes(selected)) { selection.push(selected); } } mainDiv.addEventListener("mousedown", beginSelection); mainDiv.addEventListener("mousemove", mouseMove); window.addEventListener("mouseup", mouseUp); })();
76381936
76382142
I tried to implement the possibility to use the flash of the phone as a torch in my flutter app. The on/ off button is located in the appbar. This runs fine except the light on and light off Button appear both at the same time. How can I make it, that either one or the other is shown. depending on whether the lamp is on or off? Thank you very much for your help I used the flutter torch_light: ^0.4.0 Class TorchController extends StatelessWidget { const TorchController({super.key}); @override Widget build(BuildContext context) { return Scaffold( body: FutureBuilder<bool>( future: _isTorchAvailable(context), builder: (BuildContext context, AsyncSnapshot<bool> snapshot) { if (snapshot.hasData && snapshot.data!) { return Column( children: [ Expanded( child: Center( child: IconButton ( icon: const Icon(Icons.flashlight_on_outlined,size: 35,), onPressed: () async { _enableTorch(context); }, ), ), ), Expanded( child: Center( child: IconButton (icon: const Icon(Icons.flashlight_off_outlined,size: 35,), onPressed: () async { _disableTorch(context); }, ), ), ), ], ); } else if (snapshot.hasData) { return const Center( child: Text('No torch available.'), ); } else { return const Center( child: CircularProgressIndicator(), ); } }, ), ); } Future<bool> _isTorchAvailable(BuildContext context) async { try { return await TorchLight.isTorchAvailable(); } on Exception catch (_) { _showMessage( 'Could not check if the device has an available torch', context, ); rethrow; } } Future<void> _enableTorch(BuildContext context) async { try { await TorchLight.enableTorch(); } on Exception catch (_) { _showMessage('Could not enable torch', context); } } Future<void> _disableTorch(BuildContext context) async { try { await TorchLight.disableTorch(); } on Exception catch (_) { _showMessage('Could not disable torch', context); } } void _showMessage(String message, BuildContext context) { ScaffoldMessenger.of(context) .showSnackBar(SnackBar(content: Text(message))); } } //Ende```
Create an on/off toggle icon for the Flutter torch
First of all, change the widget from a stateless to a stateful widget. Then define a variable to hold the status of the torch: bool isTorchOn = false; In _enableTorch(), update the value to true (there is no need to pass context as it is now a stateful widget): Future<void> _enableTorch(BuildContext context) async { try { await TorchLight.enableTorch(); setState(()=> isTorchOn = true); } on Exception catch (_) { _showMessage('Could not enable torch', context); } } Do the same for _disableTorch() and set isTorchOn to false: Future<void> _disableTorch(BuildContext context) async { try { await TorchLight.disableTorch(); setState(()=> isTorchOn = false); } on Exception catch (_) { _showMessage('Could not disable torch', context); } }
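The build method can then render a single IconButton whose icon and action depend on isTorchOn; a rough sketch (assuming the surrounding FutureBuilder from the question stays as it is):

IconButton(
  icon: Icon(
    isTorchOn ? Icons.flashlight_off_outlined : Icons.flashlight_on_outlined,
    size: 35,
  ),
  onPressed: () async {
    if (isTorchOn) {
      await _disableTorch(context);
    } else {
      await _enableTorch(context);
    }
  },
)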
76383358
76383463
Basically what the title says: I have a large tsv file (approx. 20k lines) and I want to delete the rest of the file after a specific column matches a string a second time (including said line)
Delete all the lines including and after the nth occurrence of a pattern
awk '{print $0} $1=="yourstring"{if(++found==2)exit}' test.tsv Where $1 is the "specific column" and yourstring is the string you are searching for. This prints each line and then checks for the occurrence of yourstring in the first column. If it finds it, it tests a variable found which we increment, to see if it hits 2. If so awk exits. Edit: If instead you want to delete the second occurrence (as well as everything after), flipping the two blocks around will accomplish this: awk ' $1=="yourstring"{if(++found==2)exit}{print $0}' test.tsv
76384747
76385292
This is the flag that I have to get at the end: ******************* ** * ** * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *** * ******************* * *** * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * ** * ** ******************* I know how to do full star triangles but when it's empty on the inside I have no idea about how to proceed. Can anyone help me? I tried and I just know how to do full star triangles and a star square/rectangle empty on the inside here is the code: int main(void) { int i, j, length, width; cout << "Length of rectangle? "; cin >> length; cout << endl; cout << "Width of rectangle? "; cin >> width; cout << endl; for ( i = 0; i < length; i++ ) cout << "*"; cout << endl; for ( i = 1; i < width - 1; i++ ) { cout << "*"; for ( j = 1; j < length - 1; j++ ) { cout << " "; } cout << "*"; cout << endl; } for ( i = 0; i < length; i++) cout << "*"; cout << endl; return 0; }
How can I draw an empty triangle with stars in C++, in order to then draw the British flag?
Lines 1, 10 and 19 are easy, as they each consist only of 19 *. The problem is the lines 2 to 9 and 11 to 19. However, do you notice a pattern in lines 2 to 9? Line 2 consists of one * followed by 0 spaces followed by one * followed by 7 spaces followed by one * followed by 7 spaces followed by one * followed by 0 spaces followed by one * Line 3 consists of one * followed by 1 spaces followed by one * followed by 6 spaces followed by one * followed by 6 spaces followed by one * followed by 1 spaces followed by one *. Line 4 consists of one * followed by 2 spaces followed by one * followed by 5 spaces followed by one * followed by 5 spaces followed by one * followed by 2 spaces followed by one *. Line 5 consists of one * followed by 3 spaces followed by one * followed by 4 spaces followed by one * followed by 4 spaces followed by one * followed by 3 spaces followed by one *. Line 6 consists of one * followed by 4 spaces followed by one * followed by 3 spaces followed by one * followed by 3 spaces followed by one * followed by 4 spaces followed by one *. Line 7 consists of one * followed by 5 spaces followed by one * followed by 2 spaces followed by one * followed by 2 spaces followed by one * followed by 5 spaces followed by one *. Line 8 consists of one * followed by 6 spaces followed by one * followed by 1 spaces followed by one * followed by 1 spaces followed by one * followed by 6 spaces followed by one *. Line 9 consists of one * followed by 7 spaces followed by one * followed by 0 spaces followed by one * followed by 0 spaces followed by one * followed by 7 spaces followed by one *. The pattern is the following: Assuming that size is the total size of the triangle (which is 19 in your case), then line n consists of one * followed by n-2 spaces followed by one * followed by (size/2) - n spaces followed by one * followed by (size/2) - n spaces followed by one * followed by n-2 spaces followed by one *. Note that in C, the result of 19 / 2 is 9, as the fractional part of the division is discarded. Using this information about the pattern, you should be able to create a loop that in every loop iteration, prints one line as described above. That way, you should be able to solve the problem of printing the lines 2 to 9. Printing the lines 11 to 19 should be easy afterwards, because these lines must only be printed in reverse order of the lines 2 to 9. In accordance with the community guidelines for homework questions, I will not provide the full solution at this time. I can provide further information later, if necessary. 
EDIT: Since several other solutions have already been posted by other users, I will now also post my solution, which solves the problem as described above: #include <iostream> const int MAP_SIZE = 19; static_assert( MAP_SIZE % 2 == 1, "MAP_SIZE must be odd" ); int main( void ) { //print first horizontal line for ( int i = 0; i < MAP_SIZE; i++ ) std::cout << '*'; std::cout << '\n'; //print top half of flag for ( int i = 0; i < MAP_SIZE / 2 - 1; i++ ) { std::cout << '*'; for ( int j = 0; j < i; j++ ) std::cout << ' '; std::cout << '*'; for ( int j = 0; j < MAP_SIZE/2 - 2 - i; j++ ) std::cout << ' '; std::cout << '*'; for ( int j = 0; j < MAP_SIZE/2 - 2 - i; j++ ) std::cout << ' '; std::cout << '*'; for ( int j = 0; j < i; j++ ) std::cout << ' '; std::cout << '*'; std::cout << '\n'; } //print second horizontal line for ( int i = 0; i < MAP_SIZE; i++ ) std::cout << '*'; std::cout << '\n'; //print bottom half of flag for ( int i = 0; i < MAP_SIZE / 2 - 1; i++ ) { std::cout << '*'; for ( int j = 0; j < MAP_SIZE/2 - 2 - i; j++ ) std::cout << ' '; std::cout << '*'; for ( int j = 0; j < i; j++ ) std::cout << ' '; std::cout << '*'; for ( int j = 0; j < i; j++ ) std::cout << ' '; std::cout << '*'; for ( int j = 0; j < MAP_SIZE/2 - 2 - i; j++ ) std::cout << ' '; std::cout << '*'; std::cout << '\n'; } //print third horizontal line for ( int i = 0; i < MAP_SIZE; i++ ) std::cout << '*'; std::cout << '\n'; } However, I think that this problem is easier to solve using a 2D array (which you stated that you are not allowed to use). The 2D array is initialized to spaces and then the 3 horizontal, 3 vertical and 2 diagonal lines are drawn: #include <iostream> const int MAP_SIZE = 19; static_assert( MAP_SIZE % 2 == 1, "MAP_SIZE must be odd" ); int main( void ) { char map[MAP_SIZE][MAP_SIZE]; //initialize 2D array to spaces for ( int i = 0; i < MAP_SIZE; i++ ) for ( int j = 0; j < MAP_SIZE; j++ ) map[i][j] = ' '; //draw the 3 horizontal lines for ( int i = 0; i < MAP_SIZE; i++ ) { map[ 0][i] = '*'; map[MAP_SIZE/2][i] = '*'; map[MAP_SIZE-1][i] = '*'; } //draw the 3 vertical lines for ( int i = 0; i < MAP_SIZE; i++ ) { map[i][ 0] = '*'; map[i][MAP_SIZE/2] = '*'; map[i][MAP_SIZE-1] = '*'; } //draw the 2 diagonal lines for ( int i = 0; i < MAP_SIZE; i++ ) { map[i][ i] = '*'; map[i][MAP_SIZE-i-1] = '*'; } //print the result for ( int i = 0; i < MAP_SIZE; i++ ) { std::cout.write( map[i], MAP_SIZE ); std::cout.put( '\n' ); } }
76384972
76385313
Merge two tables in power query editor (Power BI) based on string similarity with Python Consider the tables bellow: Table1 Table1 Name ... Apple Fruit A11 ... Banana Fruit B12 ... ... ... Table2 Table2 Name Value Apple A11R/T 40 B4n4n4 Fruit B12_T 50 Berry A11 60 ... ... I want to get the Value from Table2 into Table1. But for some reason when I use the built-in power query editor merge with fuzzy matching. It will match Apple Fruit A11 with Berry A11 instead of Apple A11 R/T. I've read the documentation, and it says that the built-in function works best with single words. I tried to remove spaces both from Table1[Name] and Table2[Name] but it didn't improve results. I looked around trying to find a solution, but wasn't able to figure out yet. Is there a way to do this using python? Or is there a simpler solution? The results that I am expecting: Table1 Expected Result Name ... Table2.Name Table2.Value Apple Fruit A11 ... Apple A11R/T 40 Banana Fruit B12 ... B4n4n4 Fruit B12_T 50 ... ... ... ... --- For some reason the tables are not showing up like the preview, that's why there are also images for each table. Disclaimer: The data present in the tables above is just an example of the pattern of the data that I am working with. And fuzzy matching will probably give the right results for the example data.
Is it possible to merge two tables in Power Query Editor (Power BI) with Python fuzzy matching?
Fuzzy matching in Power Query works fine for me. Set your options to the following:
76382116
76382155
/** * C program to find and replace all occurrences of a word in file. */ #include <stdio.h> #include <stdlib.h> #include <string.h> #define BUFFER_SIZE 1000 /* Function declaration */ void replaceAll(char *str, const char *oldWord, const char *newWord); int main() { /* File pointer to hold reference of input file */ FILE * fPtr; FILE * fTemp; char path[100]; char buffer[BUFFER_SIZE]; char oldWord[100], newWord[100]; printf("Enter path of source file: "); scanf("%s", path); printf("Enter word to replace: "); scanf("%s", oldWord); printf("Replace '%s' with: "); scanf("%s", newWord); /* Open all required files */ fPtr = fopen(path, "r"); fTemp = fopen("replace.tmp", "w"); /* fopen() return NULL if unable to open file in given mode. */ if (fPtr == NULL || fTemp == NULL) { /* Unable to open file hence exit */ printf("\nUnable to open file.\n"); printf("Please check whether file exists and you have read/write privilege.\n"); exit(EXIT_SUCCESS); } /* * Read line from source file and write to destination * file after replacing given word. */ while ((fgets(buffer, BUFFER_SIZE, fPtr)) != NULL) { // Replace all occurrence of word from current line replaceAll(buffer, oldWord, newWord); // After replacing write it to temp file. fputs(buffer, fTemp); } /* Close all files to release resource */ fclose(fPtr); fclose(fTemp); /* Delete original source file */ remove(path); /* Rename temp file as original file */ rename("replace.tmp", path); printf("\nSuccessfully replaced all occurrences of '%s' with '%s'.", oldWord, newWord); return 0; } /** * Replace all occurrences of a given a word in string. */ void replaceAll(char *str, const char *oldWord, const char *newWord) { char *pos, temp[BUFFER_SIZE]; int index = 0; int owlen; owlen = strlen(oldWord); // Fix: If oldWord and newWord are same it goes to infinite loop if (!strcmp(oldWord, newWord)) { return; } /* * Repeat till all occurrences are replaced. */ while ((pos = strstr(str, oldWord)) != NULL) { // Backup current line strcpy(temp, str); // Index of current found word index = pos - str; // Terminate str after word found index str[index] = '\0'; // Concatenate str with new word strcat(str, newWord); // Concatenate str with remaining words after // oldword found index. strcat(str, temp + index + owlen); } } I have that code in C which can change all "oldWords" into "newWords". Works fine, but everytime I want to change the code to change the words on its own I'm completely stupid. I want that I don't have to put the words that have to change into the console, but I want to have them in the code. I just want to tell the console the path of the source file and that's it. Would be nice if you could help me with some examples like Hello to Bye and Morning to Night.
How can I modify my C code to replace words in a file without user input?
If you don't want to take oldWord and newWord from user input, you can define them as constants in the code: const char* oldWord = "Hello"; const char* newWord = "Bye";
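A rough sketch of how this fits into the existing main() from the question: the char oldWord[100], newWord[100] declarations and their two scanf calls are dropped, and the constants take their place; the rest of the program is unchanged.

/* hard-coded words instead of user input */
const char *oldWord = "Morning";
const char *newWord = "Night";

printf("Enter path of source file: ");
scanf("%s", path);
/* no scanf for oldWord/newWord any more; the while loop and replaceAll() stay as they are */

If you need several fixed replacements in one pass (e.g. Hello to Bye and Morning to Night), you can also call replaceAll(buffer, "Hello", "Bye"); and replaceAll(buffer, "Morning", "Night"); one after the other inside the while loop.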
76383388
76383465
function addNote() { const givenTitle = document.getElementById('titleInput'); const givenNote = document.getElementById('noteInput'); let notesObj = [] let myObj = { title: givenTitle.value, note: givenNote.value, } notesObj.push(myObj) localStorage.setItem('userNote',JSON.stringify(notesObj)); } This code overwrites the old value every time; I want to add a new object to local storage instead.
I want to set a new object in local storage
You probably want to add the note to your existing localStorage data. Check whether userNote has already been set: if so, use it, otherwise start with an empty array: let notesObj = JSON.parse(localStorage.getItem("userNote")) || [] Your whole function would then look like this: function addNote() { const givenTitle = document.getElementById('titleInput'); const givenNote = document.getElementById('noteInput'); let notesObj = JSON.parse(localStorage.getItem("userNote")) || [] let myObj = { title: givenTitle.value, note: givenNote.value, } notesObj.push(myObj) localStorage.setItem('userNote',JSON.stringify(notesObj)); }
76385055
76385319
Im building an AuthContext in react to handle login, i connect it to a django backend where i validate the user and then i get an authorization token import React, { createContext, useState, useEffect} from 'react'; import axios from 'axios'; export const AuthContext = createContext(); export const AuthProvider = ({ children }) => { const [token, setToken] = useState(null); const login = async (username, password) => { try { const response = await axios.post('http://127.0.0.1:8000/token/', { username, password }); console.log('Response data:', response.data); const { token: responseToken } = response.data; setToken(responseToken); console.log('Token loaded:', responseToken); } catch (error) { console.error('Error en el inicio de sesión:', error); } }; useEffect(() => { console.log('Token actual value:', token); }, [token]); return ( <AuthContext.Provider value={{ token, login }}> {children} </AuthContext.Provider> ); }; From my backend i get the expected answer, an status 200 with the token that i expect, but after assign the response.data to responseToken, in the next log it shows that is undefined. Here is the output in the navigator console: console logs Tried to change the variable names to check if there is a conflict between those declared there, but the problem persists.
Assigning a value from response.data to a variable in React, but the value comes through as undefined
Your console log shows that response data has two properties refresh & access. But you are trying to get a property called token, which does not exist (or at least it's not visible in your screenshot). const { access: responseToken } = response.data; This should help. (if access is the one you actually need)
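Applied to the login function from the question, that would look roughly like this (assuming access is indeed the token you want to store):

const login = async (username, password) => {
  try {
    const response = await axios.post('http://127.0.0.1:8000/token/', { username, password });
    // the backend returns { access, refresh }, not { token }
    const { access: responseToken } = response.data;
    setToken(responseToken);
    console.log('Token loaded:', responseToken);
  } catch (error) {
    console.error('Error en el inicio de sesión:', error);
  }
};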
76383411
76383481
i have this simple nav component, but it drives me crazy because it makes every console log i make in the app two times. Im new to React btw. export const NavBar = () => { const [showNav, setShowNav] = useState(false); const handleNavClick = () => { setShowNav(!showNav); }; console.log("hi"); return ( <> <nav className="flex items-center justify-between pl-8 pr-16 fixed w-full border h-20 top-0 bg-white/30 backdrop-blur-sm z-10"> {/* Logo */} <img src="https://res.cloudinary.com/dv8nczwtj/image/upload/v1684896617/Logo_jivlnb.png" alt="Logo" className="logo" /> {/* Nav WEB*/} <div className="md:flex flex-row space-x-5 hidden"> <a href="#" className="brand"> Apple </a> <a href="#" className="brand"> Samsung </a> <a href="#" className="brand"> Xiaomi </a> <a href="#" className="brand"> Google </a> </div> {/* BTN Nav Mobil */} <button className="md:hidden" onClick={handleNavClick}> <img src="https://res.cloudinary.com/dv8nczwtj/image/upload/v1684859901/menu_wh8ccz.png" alt="Menu" className="w-6" /> </button> {/* Cart */} <CartWidget /> </nav> {/* Nav Mobil */} {showNav && ( <div className="flex fixed w-full flex-col justify-center items-center space-y-4 pb-2 border-b-2 border-black md:hidden bg-white/30 top-20 pt-4 backdrop-blur-sm" style={{ animation: "fadeIn .5s linear" }} > <a href="#" className="brand"> Apple </a> <a href="#" className="brand"> Samsung </a> <a href="#" className="brand"> Xiaomi </a> <a href="#" className="brand"> Google </a> </div> )} </> ); }; double console.log I have tried putting the handleClick function in a useEffect but then when i put it on the onClick it says that the function is never declared
ReactJS: why do I get two console logs in a simple component?
Is your app wrapped in React.StrictMode (https://react.dev/reference/react/StrictMode#fixing-bugs-found-by-double-rendering-in-development)? In development, StrictMode deliberately renders each component twice to help surface impure render logic, which is why every console.log placed in the component body appears two times. Production builds are not double-rendered.
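For context, in a typical Vite or Create React App entry file (file name and setup assumed here, not taken from your post) the wrapper looks like this; removing <React.StrictMode> makes the duplicate logs go away, but it is usually better to keep it during development:

import React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";

ReactDOM.createRoot(document.getElementById("root")).render(
  <React.StrictMode> {/* in development this double-invokes renders, hence two logs */}
    <App />
  </React.StrictMode>
);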
76381116
76382168
Problem: running get_pi_ij() gives the error: Error in as.vector(x, mode) : cannot coerce type 'closure' to vector of type 'any' Called from: as.vector(data) The first thing this function does is to make the resulting alphas and beta_prelims into matrixes that match c so that they can be calculated together. This is where something goes wrong, and I have not been able to figure out what. If I use <<- to save the alphas and betas to the global environment in the prior functions for alphas and betas and replace that with those in the faulty function, it works. So I assume it has to do with how I call the functions inside the matrix creation. get_pi_ij <- function() { alphas <- matrix(get_alpha(), nrow = length(get_alpha()), ncol = length(get_alpha()), byrow = FALSE) betas <- matrix(get_beta_prelim, nrow = length(get_beta_prelim()), ncol = length(get_beta_prelim()), byrow = TRUE) pi_ij <- exp(alphas + betas + gamma * c) return(pi_ij) } get_pi_ij() I added the full code cause it's not too long and the first parts are just definitions. Makes it easier to test it. Everything up to the final function works as it is supposed to size <- 18 gamma <- -0.07 c <- structure(c(0, 4, 8, 12, 16, 4, 4, 8, 12, 16, 8, 8, 8, 12, 16, 12, 12, 16, 16, 4, 0, 4, 8, 12, 8, 4, 4, 8, 12, 12, 8, 8, 12, 16, 12, 12, 16, 16, 8, 4, 0, 4, 8, 12, 8, 4, 8, 12, 16, 12, 8, 12, 16, 16, 12, 16, 16, 12, 8, 4, 0, 4, 12, 8, 4, 4, 8, 16, 12, 8, 8, 12, 16, 12, 12, 16, 16, 12, 8, 4, 0, 16, 12, 8, 4, 4, 16, 12, 8, 8, 8, 16, 12, 12, 16, 4, 8, 12, 12, 16, 0, 4, 8, 12, 16, 4, 4, 8, 12, 16, 8, 8, 12, 12, 4, 4, 8, 8, 12, 4, 0, 4, 8, 12, 8, 4, 4, 8, 12, 8, 8, 12, 12, 8, 4, 4, 4, 8, 8, 4, 0, 4, 8, 12, 8, 4, 8, 12, 12, 8, 12, 12, 12, 8, 8, 4, 4, 12, 8, 4, 0, 4, 12, 8, 4, 4, 8, 12, 8, 8, 12, 16, 12, 12, 8, 4, 16, 12, 8, 4, 0, 16, 12, 8, 4, 4, 12, 8, 8, 12, 8, 12, 16, 16, 16, 4, 8, 12, 12, 16, 0, 4, 8, 12, 16, 4, 8, 12, 8, 8, 8, 12, 12, 12, 4, 4, 8, 8, 12, 4, 0, 4, 8, 12, 4, 4, 8, 8, 8, 8, 8, 8, 8, 8, 4, 4, 4, 8, 8, 4, 0, 4, 8, 8, 4, 8, 8, 12, 12, 12, 8, 8, 12, 8, 8, 4, 4, 12, 8, 4, 0, 4, 8, 4, 4, 8, 16, 16, 16, 12, 8, 16, 12, 12, 8, 4, 16, 12, 8, 4, 0, 12, 8, 4, 8, 12, 12, 16, 16, 16, 8, 8, 12, 12, 12, 4, 4, 8, 8, 12, 0, 4, 8, 4, 12, 12, 12, 12, 12, 8, 8, 8, 8, 8, 8, 4, 4, 4, 8, 4, 0, 4, 4, 16, 16, 16, 12, 12, 12, 12, 12, 8, 8, 12, 8, 8, 4, 4, 8, 4, 0, 4, 16, 16, 16, 16, 16, 12, 12, 12, 12, 12, 8, 8, 8, 8, 8, 4, 4, 4, 0), .Dim = c(19L, 19L), .Dimnames = list(NULL, c("X1", "X2", "X3", "X4", "X5", "X6", "X7", "X8", "X9", "X10", "X11", "X12", "X13", "X14", "X15", "X16", "X17", "X18", "X19"))) set.seed(12345) h_share <- diff(c(0, sort(runif(size)), 1)) e_share <- diff(c(0, sort(runif(size)), 1)) alpha <- numeric(size + 1) beta <- numeric(size + 1) get_beta_prelim <- function() { a_matrix <- exp(alpha + t(c) * gamma) beta_prelim <- log(e_share) - log(colSums(a_matrix)) return(beta_prelim) } get_beta <- function() { beta <- get_beta_prelim() - get_beta_prelim()[[1]] return(beta) } get_alpha_prelim <- function() { b_matrix <- t(exp(beta + t(c) * gamma)) alpha_prelim <- log(h_share) - log(rowSums(b_matrix)) return(alpha_prelim) } get_alpha <- function() { alpha <- get_alpha_prelim() - get_alpha_prelim()[[1]] return(alpha) } get_pi_ij <- function() { alphas <- matrix(get_alpha(), nrow = length(get_alpha()), ncol = length(get_alpha()), byrow = FALSE) betas <- matrix(get_beta_prelim, nrow = length(get_beta_prelim()), ncol = length(get_beta_prelim()), byrow = TRUE) pi_ij <- exp(alphas + betas + gamma * c) return(pi_ij) } get_pi_ij()
Error in as.vector(x, mode) : cannot coerce type 'closure' to vector of type 'any' -- when running a nested function
You passed the function object get_beta_prelim (a "closure" in R terms) to matrix() instead of calling it, which is exactly what triggers the "cannot coerce type 'closure'" error. Add the missing parentheses: get_pi_ij <- function() { alphas <- matrix(get_alpha(), nrow = length(get_alpha()), ncol = length(get_alpha()), byrow = FALSE) betas <- matrix(get_beta_prelim(), # you were missing "()" nrow = length(get_beta_prelim()), ncol = length(get_beta_prelim()), byrow = TRUE) pi_ij <- exp(alphas + betas + gamma * c) return(pi_ij) }
76385032
76385330
I have a Fedora 38 (6.1.29-1) server with Ruby and the Compass gem installed. When I try to execute compass -h or perform any compass compiling, I get a NoMethodError (on different lines of different .rb files, but errors nonetheless). I've looked all around for similar errors and can't seem to find anyone else that experiences this problem. At first I thought maybe the latest version (1.0.3) of Compass doesn't work on my server, so I also tried 1.0.0 but still get the same error. I also tried installing the same version(s) and followed the same process on my Windows machine and had no issues when executing the same compass -h and compass compile commands. Anyone have any idea what is causing this error on my fedora server? When executing "compass -h" on the command line on the Fedora server... Current Output: NoMethodError on line ["144"] of /home/user1/.local/share/gem/ruby/gems/compass-1.0.0/lib/compass/installers/manifest.rb: undefined method `exists?' for File:Class Expected Output: Usage: compass help [command] Description: The Compass Stylesheet Authoring Framework helps you build and maintain your stylesheets and makes it easy for you to use stylesheet libraries provided by others. Donating: Compass is charityware. If you find it useful please make a tax deductable donation: http://umdf.org/compass To get help on a particular command please specify the command. Primary Commands: * clean - Remove generated files and the sass cache * compile - Compile Sass stylesheets to CSS * create - Create a new compass project * init - Add compass to an existing project * watch - Compile Sass stylesheets to CSS when they change Other Commands: * config - Generate a configuration file for the provided command line options. * extension - Manage the list of compass extensions on your system * frameworks - List the available frameworks * help - Get help on a compass command or extension * imports - Emit an imports suitable for passing to the sass command-line. * install - Install an extension's pattern into your compass project * interactive - Interactively evaluate SassScript * sprite - Generate an import for your sprites. * stats - Report statistics about your stylesheets * unpack - Copy an extension into your extensions folder. * validate - Validate your generated css. * version - Print out version information Available Frameworks & Patterns: * compass - compass/ellipsis - Plugin for cross-browser ellipsis truncated text. - compass/extension - Generate a compass extension. - compass/project - The default project layout. Global Options: -r, --require LIBRARY Require the given ruby LIBRARY before running commands. This is used to access compass plugins without having a project configuration file. -l, --load FRAMEWORK_DIR Load the framework or extensions found in the FRAMEWORK directory. -L, --load-all FRAMEWORKS_DIR Load all the frameworks or extensions found in the FRAMEWORKS_DIR directory. -I, --import-path IMPORT_PATH Makes files under the IMPORT_PATH folder findable by Sass's @import directive. -q, --quiet Quiet mode. --trace Show a full stacktrace on error --force Allows compass to overwrite existing files. --boring Turn off colorized output. -?, -h, --help Show this message
How to Fix NoMethodError Issue with Ruby Compass (1.0.0 and 1.0.3)
File.exists? was deprecated for several minor versions, existed until Ruby 2.7, and was finally removed in Ruby 3.0, whereas the last version of the compass gem is more than 8 years old. That means it doesn't work with current versions of Ruby anymore. You basically have three options: Downgrade your Ruby version to, for example, 2.7.8. That version is not terribly outdated, but keep in mind that Ruby 2.7 has reached end-of-life and will not get any security or bug fixes anymore. Fork the compass gem and replace the usages of File.exists? with File.exist?. This seems like a quick fix, but given that the gem hasn't had an update in the last 8 years, you might discover further compatibility issues or unfixed bugs. Search for an alternative and replace the gem.
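For option 2, a lighter stop-gap than forking (my suggestion, and it only papers over this particular NoMethodError; other Ruby 3 incompatibilities in such an old gem may still surface) is to restore the removed aliases in a small shim that is loaded before compass runs:

# shim.rb -- re-add aliases that were removed in Ruby 3.0
class File
  class << self
    alias_method :exists?, :exist?
  end
end

class Dir
  class << self
    alias_method :exists?, :exist?
  end
end

You can load it through RUBYOPT, e.g. RUBYOPT="-r./shim" compass -h, with shim.rb in the current directory.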
76384401
76385345
How to obtain path to folder in which user made right click in its background to invoke context menu? For example, user opened "D:\projects" folder and made right click in empty background area of that folder and it sees a menu item in context menu named 'Display Path'. Upon clicking it, it should invoke a simple console app to display string "D:\projects". It can be done by registry by adding "%V" as argument to command to console app, for example, "C:\myfolder\myapp.exe" "%V". Hence, this %V gives folder path to argument list of main() of myuapp.exe. Easy huh! How it can be done using shell extension menu handler? I wrote a simple shell context menu dll which works fine and do its job, except that I don't known how to get that folder path as string where user made right click in background. I found that path comes as PCIDLIST_ABSOLUTE pidl argument in IShellExtInit::Initialize() method. But, I couldn't get it in simple string format. The code is below which crashes, of course. HRESULT __stdcall Initialize(PCIDLIST_ABSOLUTE pidlFilder, IDataObject* pdtobj, HKEY hkeyProgID) { std::wstring s = L"null"; // check msg, this msgbox is shown as expected MessageBox(NULL, L"Before", L"Initialize()", MB_OK); //have problem in this line, I guess SHGetPathFromIDList((LPCITEMIDLIST) pidlFilder, (PWSTR) &s); // check msg, sometimes this msgbox is also shown as expected MessageBox(NULL, L"After", L"Initialize()", MB_OK); // but this msgbox is never shown. I removed it but code still crashes MessageBox(NULL, std::wstring(s).c_str(), L"Initialize()", MB_OK); return S_OK; } When I right click on folder background, it crashes and explorer restarts. Does anyone know the problem and its solution? How to get folder path when right clicking background of folder to invoke context menu using shell extension? In addition, how to get file/folder path when right clicking on it to invoke context menu using shell extension? Thanks in advance tried using this code too, still crashes IShellFolder *sf = NULL; STRRET pName = {}; sf->GetDisplayNameOf(pidlFilder, SHGDN_FORPARSING, &pName); wchar_t *d = new wchar_t; lstrcpyW(d,L"nulld"); size_t inst = MAX_PATH, outst ; mbstowcs_s(&outst, d, inst, pName.cStr, MAX_PATH); s = std::wstring(d); MessageBox(NULL, std::wstring(s).c_str(), L"Initialize()", MB_OK);
How can I obtain the folder path when right-clicking the background of a folder and invoking a context menu using a shell extension?
You are trying to make SHGetPathFromIDList() write the string data to the memory address where a std::wstring object resides, which will not work. Use a fixed WCHAR[] array instead, eg: HRESULT __stdcall Initialize(PCIDLIST_ABSOLUTE pidlFilder, IDataObject* pdtobj, HKEY hkeyProgID) { WCHAR szPath[MAX_PATH] = {}; SHGetPathFromIDList(pidlFilder, szPath); MessageBox(NULL, szPath, L"Initialize()", MB_OK); return S_OK; } Alternatively, if you want to receive the string data into a std::wstring object, then you have to pre-allocate its internal character buffer and then receive into that buffer, eg: HRESULT __stdcall Initialize(PCIDLIST_ABSOLUTE pidlFilder, IDataObject* pdtobj, HKEY hkeyProgID) { std::wstring s; s.resize(MAX_PATH); SHGetPathFromIDList(pidlFilder, s.data() /* or &s[0] before C++17 */ ); s.erase(s.find(L'\0')); MessageBox(NULL, s.c_str(), L"Initialize()", MB_OK); return S_OK; } Otherwise, you can simply receive into a WCHAR[] and then assign that to your std::wstring, eg: HRESULT __stdcall Initialize(PCIDLIST_ABSOLUTE pidlFilder, IDataObject* pdtobj, HKEY hkeyProgID) { WCHAR szPath[MAX_PATH] = {}; SHGetPathFromIDList(pidlFilder, szPath); std::wstring s = szPath; MessageBox(NULL, s.c_str(), L"Initialize()", MB_OK); return S_OK; } Your 2nd example doesn't work for several reasons: Your IShellFolder *sf doesn't point anywhere meaningful. Use SHGetDesktopFolder() to get the top-level IShellFolder object which you can then use to parse pidlFilder. you are allocating only 1 wchar_t for wchar_t *d to point at, but then you are trying to copy more than 1 wchar_t into that memory. You don't really need to allocate any memory at all, as the parsed STRRET already contains the necessary string data, so just use it as-is. Otherwise, you can pass the STRRET to StrRetToBuf() or StrRetToStr() to get the data in a more usable format. you are not paying attention to the STRRET::uType field to know what kind of string data it is holding. Don't access the cStr field unless the uType field is set to STRRET_CSTR. StrRetToBuf()/StrRetToStr() will handle this for you. Try this instead: HRESULT __stdcall Initialize(PCIDLIST_ABSOLUTE pidlFilder, IDataObject* pdtobj, HKEY hkeyProgID) { IShellFolder *sf = NULL; if (SUCCEEDED(SHGetDesktopFolder(&sf)) { STRRET pName = {}; if (SUCCEEDED(sf->GetDisplayNameOf(pidlFilder, SHGDN_FORPARSING, &pName)) { WCHAR szPath[MAX_PATH] = {}; StrRetToBufW(&pName, pidlFilder, szPath, MAX_PATH); MessageBox(NULL, szPath, L"Initialize()", MB_OK); } sf->Release(); } return S_OK; }
76383425
76383491
The example below echoes 1, as expected: test -f /usr/bin echo "$?" #1 Why does the following example echo 0? if [[ -f /usr/bin ]]; then echo "Inside if statement" # This line is never executed fi echo "$?" #0 I know that $? evaluates to the return value of the last executed command. In my understanding, the last executed command is the test implicitly run by the if statement; since the condition evaluates to false, it should return 1, but when I execute it, it returns 0. Can anybody explain why the behavior is different from when test is executed directly (as in the first example)?
Bash if statement expression evaluates to FALSE but $? is 0, why?
According to man bash: if list; then list; [ elif list; then list; ] ... [ else list; ] fi The if list is executed. If its exit status is zero, the then list is executed. Otherwise, each elif list is executed in turn, and if its exit status is zero, the corresponding then list is executed and the command completes. Otherwise, the else list is executed, if present. The exit status is the exit status of the last command executed, or zero if no condition tested true. In your example the [[ -f /usr/bin ]] condition is false and there is no else branch, so no command in the body is executed at all; per the last sentence above, the if compound command itself then exits with status 0. That 0 is what $? reports. The 1 returned by the failed test is only the condition's status, not the if statement's.
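You can see the rule in isolation (a quick interactive check; this is what I would expect it to print):

$ if false; then echo "never runs"; fi; echo "$?"
0
$ if false; then echo "never runs"; else false; fi; echo "$?"
1

In the first case nothing inside the if runs, so the compound command reports 0; in the second, the else list runs false, so the if reports 1.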
76381858
76382169
How do I convert the table on the left into the summary table on the right? I tried using the get_dummies function to convert the values to 0 and 1, but I don't know how to proceed after that.
How can I convert the left table into a summary table?
Try this: import pandas as pd import numpy as np col1 = ['']+['Hampshire']*8+['']+['Hampshire']+['']+['Hampshire']+['','']+['Hampshire']*4 col2 = ['Southhampton'] + ['']*12 + ['Southhampton']*2 + ['']*4 col3 = ['']*11 + ['Isle of wight'] + ['']*7 col4 = ['Met']*5 + [''] + ['Met']*13 col5 = ['']*5 + ['Partially met'] + ['']*13 col6 = ['']*19 df = pd.DataFrame(data = dict(zip(['Hampshire', 'Southhampton', 'Isle of wight', '5met', '5partially met', '5Not met'],[col1,col2,col3,col4,col5,col6]))) df = df.replace('', np.nan) df['Hampshire'] = df['Hampshire'].fillna(df['Southhampton']) df['Hampshire'] = df['Hampshire'].fillna(df['Isle of wight']) df[['Hampshire','5met','5partially met', '5Not met']].groupby(by=['Hampshire']).count() I had to generate the data for you (since you didn't post any besides the image), but I think this gets the job done. I hope this helps.
76385289
76385350
I have had to revert to using Firebase Functions v1 in order to schedule the running of my functions and also specify the runtime options, including timeoutSeconds and memory, in my code (written in TypeScript): const runtimeOpts = { timeoutSeconds: 540, memory: "1GB" as const, }; exports.cleanupEvents = functions .runWith(runtimeOpts) .pubsub.schedule("0 0 * * *") .timeZone("Europe/Berlin") .onRun(async () => { await cleanupOldEvents(adminDb); logger.log("Event cleanup finished"); }); Does anyone know if it is possible with Firebase Functions v2, using the onSchedule syntax, to also specify these runtimeOpts in code, without needing to go into the Google Cloud console and set them manually there? I have tried chaining 'onSchedule' and 'runWith' together and looking at what other possibilities my editor suggests, but so far I've had no luck.
Is there a way to use onSchedule and also set a custom 'timeoutSeconds' and 'memory' using Firebase functions V2?
The API documentation for onSchedule suggests that you can pass an object as the first parameter, which is a ScheduleOptions object, an extension of GlobalOptions: onSchedule({ schedule: "your-schedule-here", timeoutSeconds: your-timeout, memory: your-memory, // include other options here from SchedulerOptions or GlobalOptions }, (event) => { ... })
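Filled in with the values from the v1 code in the question, that would look roughly like this (note that v2 expects memory values such as "1GiB" rather than "1GB"; treat the exact option names as something to verify against the current firebase-functions docs):

import { onSchedule } from "firebase-functions/v2/scheduler";
import * as logger from "firebase-functions/logger";

export const cleanupEvents = onSchedule(
  {
    schedule: "0 0 * * *",
    timeZone: "Europe/Berlin",
    timeoutSeconds: 540,
    memory: "1GiB",
  },
  async () => {
    await cleanupOldEvents(adminDb); // same helper and db handle as in the v1 version
    logger.log("Event cleanup finished");
  }
);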
76382755
76383517
I want to use the ASP.NET [Range] Annotation but for the elements IEnumerables. I used the existing RangeAttribute like this: public class RangeEnumerable : RangeAttribute { /// <inheritdoc/> public RangeEnumerable(double minimum, double maximum) : base(minimum, maximum) { } /// <inheritdoc/> public RangeEnumerable(int minimum, int maximum) : base(minimum, maximum) { } /// <inheritdoc/> public RangeEnumerable([DynamicallyAccessedMembers((DynamicallyAccessedMemberTypes)(-1))] Type type, string minimum, string maximum) : base(type, minimum, maximum) { } /// <inheritdoc/> public override bool IsValid(object? value) { if (null == value) { return true; } IEnumerable<object> list = ((IEnumerable)value).Cast<object>(); foreach (object item in list) { if (!base.IsValid(item)) { return false; } } return true; } } and annotated my Parameter like this: [RangeEnumerable(MINIMUM_ANGLE, MAXIMUM_ANGLE)] public IEnumerable<Double> PhaseAnglesVoltage { get; set; } = new List<double>(); And wrote the following unit test: [Test] public void TestInvalidPhaseAngleVoltageTooLow() { // Arrange Loadpoint loadpoint1 = new Loadpoint(); loadpoint1.PhaseAnglesVoltage.Append(-1); // Act var errCount = ValidateObject(loadpoint1); // Assert Assert.AreEqual(1, errCount); } private int ValidateObject(object obj) { var validationContext = new ValidationContext(obj, null, null); var validationResults = new List<ValidationResult>(); Validator.TryValidateObject(obj, validationContext, validationResults, true); return validationResults.Count; } I expected the loop to iterate over the elements of the List I used the annotation with, but in the IsValid-Function I always get an empty List instead of one with the element appended in the test.
How can I use the ASP.NET [Range] annotation for IEnumerable elements?
Ok, I've found the error, which was in the unit test. IEnumerable.Append doesn't add the element to the original object like List.Add does (see Difference between a List's Add and Append method?). Changing the unit test to the following does the trick. [Test] public void TestInvalidPhaseAngleVoltageTooLow() { // Arrange Loadpoint loadpoint1 = new Loadpoint(); loadpoint1.PhaseAnglesVoltage = loadpoint1.PhaseAnglesVoltage.Append(-1); // Act var errCount = ValidateObject(loadpoint1); // Assert Assert.AreEqual(1, errCount); }
76383286
76383533
Let's assume we have the following XML response: <People> <Person> <Age>29</Age> </Person> <Person> <Age>25</Age> </Person> <Person> <Age>18</Age> </Person> <Person> <Age>45</Age> </Person> </People> I want an xpath 2.0 expression that will return true if there is at least one person with age between 18 and 22. My current expression is: boolean(//*:Person[xs:integer(substring(//*[local-name() = 'Age']/text(), 2)) >= 18 and 22 >= xs:integer(substring(//*[local-name() = 'Age']/text(), 2))]) But this expression is not recursive so it produces the following error: A sequence of more than one item is not allowed as the first argument of substring() ("29", "25", ...) Any idea as to how I can achieve what I need?
XPath that returns true when at least one element matches
In XPath 2.0 this is exists(//Person[Age = (18 to 22)])
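If Person and Age are actually in a namespace (the original expression used the *: wildcard, which suggests they might be), the same idea with wildcards should work in any processor that accepted the original attempt: exists(//*:Person[*:Age = (18 to 22)])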
76385245
76385381
I have a user-settable text, where the default one is [Log in] or [register] to view the content. What I need, is to wrap the two words in square brackets in their respective links. But first, I need to check that the user didn't change this default text, in other words that they kept the square brackets. I won't go in great lengths in checking this. Just the existence of two sets of square brackets is enough. If that's the case, then I'll assume that the first link is for the login page, and the second is for the register-an-account page... So, the if below does the job for me: if ( preg_match( '/\[(.*?)\].*\[(.*?)\]/', $text ) ) Then, inside the if, my plan was to perform a str_replace() with two arrays like this: $text = str_replace( array( '[', ']', '[', ']' ), array( '%1$s', '%2$s', '%3$s', '%4$s' ), $text ); But this doesn't work the way I thought it would. I thought that since the two arrays have equal number of elements it'd do a 1-on-1 search and replace, meaning that it would turn the text to %1$sLog in%2$s or %3$sregister%4$s to view the content, whereas it turned to %1$sLog in%2$s or %1$sregister%2$s to view the content. Why is that? If that's not the proper way to do that (which obviously isn't), what should I do instead? Any help would be very much appreciated. TIA.
Strange behavior of str_replace
Try using preg_replace with your same pattern (with additional capture): $text = preg_replace('/\[(.*?)\](.*)\[(.*?)\]/', '%1$s$1%2$s$2%3$s$3%4$s', $text); which produces %1$sLog in%2$s or %3$sregister%4$s to view the content The str_replace does not work the way you intended - the first array is an array of needles which has no sense of position so the second set of [] are duplicate needles in your case. See tester
76383155
76383536
I use ListView to dynamically display items in my JavaFX app. Items are loaded through REST call to my backend app. Each item can be clicked and then product view is displayed instead of product list. That works (looks) fine until app window is resized. After resize, items look ugly and they use too much space. The question: Is there a way to get some kind of fluid item order? In HTML and CSS that would be Flexbox if I remember well. All items would be the same width and the same height not giving a chance to calculate width or height for each item separately. The only solution I found on the internet is here: https://github.com/onexip/FlexBoxFX - but it uses FXML files only and there is no option to add items dynamically. The last project update is 6 years ago which tells me it's abandoned or poorly maintained. Their official website is dead: http://flexboxfx.io EDIT: As James_D mentioned, I don't need ListView but any other solution that works. Also I am aware of WebView but I would like to avoid HTML content in my app. To make my case more clear, I made some screenshots and the first one is edited to represent the idea what I want. Numbers on edited screenshot represent desired order of items. If window grows more in width, first row should have items 1, 2 and 3, next row 4, 5 and 6, next row 7, 8 and 9 and last row should have only one item (10). All rows should be centered and item 10 of last row should be positioned below item 8. This is the final layout I want to get but I don't know how. Everything is nice when window is not resized, but after resize, it looks ugly.
Fluid nodes list layout with JavaFX
The primary purpose of a ListView is to provide virtualization; i.e. it provides an efficient mechanism to display a large number of items, letting the user scroll through them, without the overhead of UI components for the items that are not currently displayed. It also provides some additional functionality, such as selection (allowing the user to put one or more items in the list into a "selected" state which is shown visually). If you actually need virtualization, and perhaps selection, and want a grid-like layout, then the third-party library ControlsFX provides a similarly-virtualized GridView. However, your question appears to be only about layout and your screenshots appear to show you using a Pagination control, which would (probably) obviate the need for virtualization anyway. If you don't need the functionality of the ListView, then a standard layout pane such as FlowPane or TilePane, possibly wrapped in a ScrollPane, should provide the layout you need.
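As a rough, self-contained illustration of that suggestion (class names and sizes here are invented, not taken from the poster's app): a TilePane gives every cell the same size and reflows the cells as the window is resized, and wrapping it in a ScrollPane with fit-to-width keeps scrolling vertical only.

import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.ScrollPane;
import javafx.scene.layout.TilePane;
import javafx.stage.Stage;

public class FluidLayoutDemo extends Application {
    @Override
    public void start(Stage stage) {
        TilePane tiles = new TilePane(10, 10); // hgap, vgap
        tiles.setPadding(new Insets(10));
        tiles.setAlignment(Pos.TOP_CENTER);    // center each row of tiles
        tiles.setPrefTileWidth(160);
        tiles.setPrefTileHeight(100);

        for (int i = 1; i <= 10; i++) {
            tiles.getChildren().add(new Button("Item " + i)); // stand-in for product cards
        }

        ScrollPane scroller = new ScrollPane(tiles);
        scroller.setFitToWidth(true); // reflow tiles instead of scrolling horizontally
        stage.setScene(new Scene(scroller, 600, 400));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

FlowPane behaves similarly but sizes each cell to its content, so TilePane is closer to the requirement that all items share the same width and height.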
76382044
76382185
I made something like this dummy class: class CreateCaseFactory: @classmethod def create(cls, user_id: uuid.UUID, type_: str) -> str: creator = cls.CASE_TO_METHOD_MAP.get(type_) if creator: return creator(user_id) else: raise Exception("Invalid type") @classmethod def _create_case_1(cls, user_id: uuid.UUID) -> str: result = f"Dummy Use Case 1 created for user {user_id}" return result @classmethod def _create_case_2(cls, user_id: uuid.UUID) -> str: result = f"Dummy Use Case 2 created for user {user_id}" return result CASE_TO_METHOD_MAP = { "case_1": _create_case_1, "case_2": _create_case_2, } but I get an error when I try to run it: if creator: > return creator(user_id) E TypeError: 'classmethod' object is not callable How can I make this factory class work.
Factory class in Python with a mapping dictionary returns TypeError
As the error message says, instances of classmethod are not callable. When you call a class method with something like CreateCaseFactory.create(...), the descriptor protocol "extracts" the underlying function from the class method and calls it with CreateCaseFactory as the first argument. _create_case_1 and _create_case_2 should not be class methods, but regular functions (note that create now passes cls explicitly, because the plain functions looked up in the dictionary are not bound to the class). class CreateCaseFactory: @classmethod def create(cls, user_id: uuid.UUID, type_: str) -> str: creator = cls.CASE_TO_METHOD_MAP.get(type_) if creator: return creator(cls, user_id) else: raise Exception("Invalid type") def _create_case_1(cls, user_id: uuid.UUID) -> str: result = f"Dummy Use Case 1 created for user {user_id}" return result def _create_case_2(cls, user_id: uuid.UUID) -> str: result = f"Dummy Use Case 2 created for user {user_id}" return result CASE_TO_METHOD_MAP = { "case_1": _create_case_1, "case_2": _create_case_2, }
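A quick sanity check of the fixed class (reusing the uuid import from the original code):

import uuid

user = uuid.uuid4()
print(CreateCaseFactory.create(user, "case_1"))  # Dummy Use Case 1 created for user <that uuid>
print(CreateCaseFactory.create(user, "case_2"))  # Dummy Use Case 2 created for user <that uuid>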
76384865
76385382
Why Isn't Pathfinding working I'm new to scripting and This just doesn't make sense to me I Understan my other code, and I've read the documentation but when it comes to integrating the pathfinding so he will only find/create a path when he had located the nearest player and follow that path has me stumped. To be honest after reading the documentation I would have figured I could just declare the path inside the if statement gauging if the player is close enough and he would follow the path but I looked up a tutorial and he used the GetWayPoints() but I figured that was to be to no avail so far this is my latest attempt local runService = game:GetService("RunService") -- Run Service sort of like unitys Update frame by frame, SEE DOCUMENTATION SAVED FOLDER local players = game:GetService("Players") -- Players will help "get" the players in the game local humanoid = script.Parent -- grabs the parent of the script which is humanoid local root = humanoid.Parent.PrimaryPart --root is the humanoids parent i.e models primary part which is typically the humanoid root part local PathfindingService = game:GetService("PathfindingService"); -- A path finding service local wantedDistance = 30 -- How far he can search or should be trying to search, The value now is small testing needed local stopDistance = 5 -- In caase we want to use this to make him stop (like if he had a kill radius instead of touching) local damage = 50 local attackDistance = 8 local attackWait = 1 local lastAttack = tick() function findNearestPlaya() local playerList = players:GetPlayers() local playerNearest = nil local dist = nil local direction = nil for _, player in pairs(playerList) do -- basically a for each loop that says for each player in the list of players local character = player.Character if character then -- will only eggsacute if a player/character exists local distanceV = player.Character.HumanoidRootPart.Position - root.Position -- Distance ''Vector'' equals the distance from the player torso/Root position minus the distance of the models primary part which we have as root if not playerNearest then playerNearest = player dist = distanceV.Magnitude -- distance vector magnitude gives us the actual distance direction = distanceV.Unit -- direction of the nearest player, SEE DOCUMENTATIOIN FOR UNIT AND MAGNITUDE elseif distanceV.Magnitude < dist then -- If the player is closer than the set nearest player then ''replace'' the player playerNearest = player -- resets to new player dist = distanceV.Magnitude direction = distanceV.Unit end end end return playerNearest, dist, direction -- function so return which playa and his distance and direction end -- Another call from the runService class that runs every "physics frame", Which I think is like the updateBefore in unity runService.Heartbeat:Connect(function() -- lua thing essentially this odd function call thing is just an anonymous function meaning it will execute every heartbeat local path = PathfindingService:CreatePath() local playerNearest, distance, direction = findNearestPlaya() if the distance is within range of the wanted distance if playerNearest then if distance <=wantedDistance and distance >= stopDistance then path:ComputeAsync(humanoid.PrimaryPart.Position, playerNearest.PrimaryPart.Position) local waypoints = path:GetWaypoints() for _, waypoint in pairs(waypoints) do humanoid.MoveTo(waypoint.Position) end else humanoid:Move(Vector3.new()) -- end if distance <= attackDistance and tick() - lastAttack >= attackWait then lastAttack = tick(); 
playerNearest.Character.Humanoid.Health -= damage end end end) I was expecting him to calculate the path to the nearest player, hence my playerNearest code. I really don't understand why that, or just telling him to move along the path after calculating it, wouldn't work.
Pathfinding to the closest player
Here is the solved version comments should do a good job of explaining but essentially I was just being dumb I was harping to much on the direction as long as you ommit that you can essentially guide the NPC as long as he is in range instead of updating every player playerNearest path, the heartbeat function is already handling that all that's required is to create a new path for each player that is nearest then guide him along that path. The script isn't perfect but this will find the nearest player and create a path for the NPC to follow each time for a different nearest player (I'm bad at explanations and I've dragged this on for too long just read the comments) local runService = game:GetService("RunService") -- Run Service sort of like unitys Update frame by frame, SEE DOCUMENTATION SAVED FOLDER local players = game:GetService("Players") -- Players will help "get" the players in the game local humanoid = script.Parent -- grabs the parent of the script which is humanoid local root = humanoid.Parent.PrimaryPart --root is the humanoids parent i.e models primary part which is typically the humanoid root part local PathfindingService = game:GetService("PathfindingService"); -- A path finding service local wantedDistance = 30 -- How far he can search or should be trying to search, The value now is small testing needed local stopDistance = 5 -- In caase we want to use this to make him stop (like if he had a kill radius instead of touching) local damage = 50 local attackDistance = 8 local attackWait = 1 local lastAttack = tick() function findNearestPlaya() local playerList = players:GetPlayers() local playerNearest = nil local dist = nil local direction = nil for _, player in pairs(playerList) do -- basically a for each loop that says for each player in the list of players local character = player.Character if character then -- will only eggsacute if a player/character exists local distanceV = player.Character.HumanoidRootPart.Position - root.Position -- Distance ''Vector'' equals the distance from the player torso/Root position minus the distance of the models primary part which we have as root if not playerNearest then playerNearest = player dist = distanceV.Magnitude -- distance vector magnitude gives us the actual distance direction = distanceV.Unit -- direction of the nearest player, SEE DOCUMENTATIOIN FOR UNIT AND MAGNITUDE elseif distanceV.Magnitude < dist then -- If the player is closer than the set nearest player then ''replace'' the player playerNearest = player -- resets to new player dist = distanceV.Magnitude direction = distanceV.Unit end end end return playerNearest, dist, direction -- function so return which playa and his distance and direction end -- Another call from the runService class that runs every "physics frame", Which I think is like the updateBefore in unity runService.Heartbeat:Connect(function() -- lua thing essentially this odd function call thing is just an anonymous function meaning it will execute every heartbeat local playerNearest, distance, direction = findNearestPlaya() -- if the distance is within range of the wanted distance if playerNearest and distance <= wantedDistance then local path = PathfindingService:CreatePath(); -- Now if hes in range of the player create a new path path:ComputeAsync(root.Position, playerNearest.Character.HumanoidRootPart.Position) -- compute the path with the positions of the player and Dr. 
Sturgeon local waypoints = path:GetWaypoints() -- Assign a new waypoint and and gathers the paths -- standard for each loop for the nodes SEE DOCUMENTATION/TUTORIALS TO UNDERSTAND ''NODES'' for _, waypoint in pairs(waypoints) do humanoid:MoveTo(waypoint.Position) -- For each waypoint/node move Dr. sturgeon to it until it reaches the computed path i.e the distance from the player to the robot vice a versa humanoid.MoveToFinished:Wait() end end if distance <= attackDistance and tick() - lastAttack >= attackWait then lastAttack = tick(); playerNearest.Character.Humanoid.Health -= damage end end)
76383514
76383541
I am looking for a way to pass a list to the Table.RemoveColumns() step in Power Query. Overview of the set up, two tables as data sources, one is a config table with all the column names of the second data source with simple 'yes' 'no' selectors identifying which columns should be kept/removed. This table is used as a data source, filtered by 'no', and drilled down as a list like so: I am looking for a way to pass that list to a step to remove columns in my 'data' source: So the step to remove columns: = Table.RemoveColumns(Source,{"InvoiceDate", "T/S Start Date", "TotalBreakMinutes"}) Would become: = Table.RemoveColumns(Source,{cols}) However you can't pass a list to an argument that expects text. I tried a few work arounds like adding a prefix " and suffix " to each list item and using Text.Combine with a comma separator however Table.RemoveColumns step handles the string as a single column Is there a way to pass that list as a recognisable condition for Table.RemoveColumns()?
Pass list to Table.RemoveColumns
= Table.RemoveColumns(Source,cols) where cols is a list of column names sample code let Source = #table({"Column1", "Column2","Column3","Column4"},{{"A","B","C","D"}}), removetable = #table({"Column1"},{{"Column1"},{"Column2"}}), removelist = removetable[Column1], #"Removed Columns" = Table.RemoveColumns(Source,removelist) in #"Removed Columns"
76382000
76382205
I've been provided with the JSON files generated by Swashbuckle for a REST API I should be consuming, and I was wondering if there are tools that can take those files as input and allow easier navigation of the exposed methods, request payloads, response payloads, headers, etc. Also, when working in .NET, is there a way or tool to generate payload classes, as with WSDL documents?
Swagger: how to use the generated json files?
You can use the Swagger Editor: https://editor.swagger.io/ This will allow you to view and browse the methods; simply paste in the contents of the JSON you received. At the top of the page there are also "Generate Client" options for different languages, which will generate C# (or other language) client files for you.
76378414
76383565
Im very new to working with GIS data (using Dash Leaflet and GeoPandas) and am currently stumped. My goal is to create a simple app which does the following: App starts with an empty dash_leaflet.Map() figure and a numeric input box titled "Buffer Distance" (with a default of 100) User draws a polygon on the map which fires a callback Callback takes in the GeoJSON data from the map and the "buffer distance" Use Geopandas to import the GeoJSON data and create a new polygon which is smaller than the user drawn polygon by "Buffer Distance" Pass these 2 polygons (originally drawn & post processed polygon with buffer) back to the map so that both are now displayed on the map Im having trouble with the last step of pushing the two polygons back the map via some kind of Output This is the app i am currently working with: import pandas as pd from dash import Dash, dcc, html, Input, Output, State import dash_leaflet as dl import geopandas as gpd lat1, lon1 = 36.215487, -81.674006 app = Dash() input_details = html.Div([ html.Div([ html.Div(['Buffer Distance'], style={'width': '37%', 'display': 'inline-block'}), dcc.Input( value=100, id="buffer-distance", type='number', placeholder='Required', ), ]), ]) default_map_children = [ dl.TileLayer(), dl.FeatureGroup([ dl.EditControl(id="edit_control"), ]), dl.GeoJSON(id='map-geojsons') ] map_input_results_tab = html.Div( [ html.H2('Add Shapes to Map an Area of Interest'), dl.Map( id='leaflet-map', style={'width': '100%', 'height': '50vh'}, center=[lat1, lon1], zoom=16, children=default_map_children ) ]) app.layout = html.Div([input_details, map_input_results_tab]) @app.callback( Output('map-geojsons', 'data'), Input('edit_control', 'geojson'), State('buffer-distance', 'value'), ) def update_estimates(drawn_geojson, perim_clear): if any([x is None for x in [drawn_geojson, perim_clear]]): # some value has not been provided, so do not continue with calculations return drawn_geojson elif not drawn_geojson["features"]: # some value has not been provided, so do not continue with calculations return drawn_geojson gdf = gpd.GeoDataFrame.from_features(drawn_geojson["features"]) # extract user drawn geometry data from UI gdf = gdf.set_crs(crs=4326) # Set the initial CRS to specify that this is lat/lon data gdf = gdf.to_crs( crs=gdf.estimate_utm_crs()) # Let GeoPandas estimate the best CRS and use that for the area calculation # create a new geodataframe using buffer that incorporates the perimeter gdf_minus_perim_buffer = gdf['geometry'].buffer(-perim_clear) combine_gdf = pd.concat([gdf['geometry'], gdf_minus_perim_buffer]) # convert back to lat, long combine_gdf = combine_gdf.to_crs(crs=4326) # convert back to GeoJSON to be rendered in the dash leaflet map return_geojson_data = combine_gdf.to_json() return return_geojson_data if __name__ == '__main__': app.run_server(debug=True, port=8052) I think I am close, but am just missing something.. Thanks in advance for any help!
Add New Polygon to Dash Leaflet Map via a Callback
It looks like the callback approach above is valid; I was just providing the wrong data type to the dl.GeoJSON component's data attribute. Changing this line: # convert back to GeoJSON to be rendered in the dash leaflet map return_geojson_data = combine_gdf.to_json() to # convert back to GeoJSON to be rendered in the dash leaflet map return_geojson_data = combine_gdf.__geo_interface__ worked perfectly!
76385364
76385392
When upgrading AWS RDS aurora postgresql cluster from 11.17 -> 15.2, I was met with this fatal error in the pg_upgrade logs: fatal Your installation contains user-defined objects that refer to internal polymorphic functions with arguments of type "anyarray" or "anyelement". These user-defined objects must be dropped before upgrading and restored afterwards, changing them to refer to the new corresponding functions with arguments of type "anycompatiblearray" and "anycompatible". AWS does not mention this in the upgrade docs, so I thought the changed may have been introduced by a system user. After a bit of digging, it seems that the aggregate functions changed the way the types are named (in postgresql version 14 to be clear). So how do I update this? I ran a subset the query that the upgrade failed on, on each DB in the target cluster: --find incompatibilites on each DB: \c <DATABASE> SELECT 'aggregate' AS objkind, p.oid::regprocedure::text AS objname FROM pg_proc AS p JOIN pg_aggregate AS a ON a.aggfnoid=p.oid JOIN pg_proc AS transfn ON transfn.oid=a.aggtransfn WHERE p.oid >= 16384 AND a.aggtransfn = ANY(ARRAY['array_append(anyarray,anyelement)', 'array_cat(anyarray,anyarray)', 'array_prepend(anyelement,anyarray)', 'array_remove(anyarray,anyelement)', 'array_replace(anyarray,anyelement,anyelement)', 'array_position(anyarray,anyelement)', 'array_position(anyarray,anyelement,integer)', 'array_positions(anyarray,anyelement)', 'width_bucket(anyelement,anyarray)']::regprocedure[]); objkind | objname -----------+------------------------- aggregate | array_accum(anyelement) (1 row) Okay, so now what?
Updating "anyarray" or "anyelement" polymorphic functions when upgrading to 14.x or higher on AWS RDS aurora postgresql
Solution: --drop the aggregate from the pre-14.x database mygreatdatabase=> DROP AGGREGATE array_accum(anyelement); DROP AGGREGATE --upgrade to 14.x or higher, then re-create the aggregate using the updated type: mygreatdatabase=> CREATE AGGREGATE array_accum(anycompatible) (SFUNC = array_append, STYPE = anycompatiblearray, INITCOND = '{}'); My hope is that AWS adds this to the documentation on RDS Aurora PostgreSQL upgrade pre-checks, but this answer will be here until that is made clearer.
76382135
76382208
I am trying to delete rows based on one column's value, but the length of the range in the worksheet is dynamic and large. For example, if Col C has a value less than or equal to 0, that row gets deleted: A B C 1 SAM 100 1 SAM 0 1 BRI -100 1 HAWK 100 It should only give me: A B C 1 SAM 100 1 HAWK 100
How to delete rows based on a column's value?
As said in my comment, try this: Sub test() Dim LR As Long Dim i As Long LR = Range("A" & Rows.Count).End(xlUp).Row 'get last non blank row number For i = LR To 1 Step -1 'go backwards starting at LR until row 1 If Range("C" & i).Value <= 0 Then Range("C" & i).EntireRow.Delete Next i End Sub Before running the code: After the code has executed:
76383277
76383567
My buttons inside bootstrap columns not appearing when the screen size is small. I wanted the buttons to appear one below the other when screen size is small. What changes should I make to get my buttons one below each other on a small screen. full screen small screen adding the html code below: <body> <div class="top"> <div class="content"> <h1>Welcome To <br /><span class="fancy">Fantasy Talk</span></h1> <div class="row"> <div class="col-md-4"> <button onClick="window.location.href='dothraki.html';"> Dothraki </button> </div> <div class="col-md-4"> <button onClick="window.location.href='valyrian.html'; "> Valyrian </button> </div> <div class="col-md-4"> <button onClick="window.location.href='sindarin.html';"> Sindarin </button> </div> </div> </div> </div> </body> **Adding CSS code: ** body{ margin: 0; padding: 0; width: 100%; height: 100vh; /* cursor: url(https://cur.cursors-4u.net/games/gam-11/gam1090.png),auto; */ cursor: url(https://cur.cursors-4u.net/games/gam-13/gam1229.png),auto; } .top{ width: 100%; height: 100%; padding: 2rem; position:absolute; background-image: url(image/img1.jpg); background-position: center; background-size: cover; text-align: center; justify-content: center; animation: change 13s infinite ease-in-out; } button{ font-family: 'Almendra SC', serif; transition: 0.5s; padding: 15px 60px; text-decoration: none; font-size: 2vw; position: absolute; border-radius: 5px; top: 50%; transform: translate(-50%, -50%); border: 1px; transition: all 0.2s ease-in-out; color: rgba(255, 255, 255, 0.8); background: #146C94; } button:hover{ margin-top: -10px; color: rgba(255, 255, 255, 1); /* box-shadow: 0 5px 15px rgba(145, 92, 182, .4); */ box-shadow: 0 5px 15px #39B5E0; } @keyframes change{ 0%{ background-image: url(image/img6.jpg); } 20%{ background-image: url(image/img2.jpg); } 40%{ background-image: url(image/img3.jpg); } 60%{ background-image: url(image/img4.jpg); } 80%{ background-image: url(image/img5.jpg); } 100%{ background-image: url(image/img6.jpg); } } h1{ padding: 2rem; color:white; font-family: 'Spirax', cursive; font-size: 5vw; text-transform: uppercase; text-align: center; line-height: 1; } .fancy{ font-size: 8vw; }
My Bootstrap columns are not working on small screens
Instead of trying to position each button individually in the middle of the page, position the entire row of buttons. This will allow you to use Bootstrap columns better. You were also missing the extra-small (col-12) and small (col-sm-12) column breakpoints. Replace your HTML with this (Bootstrap 5): <div class="row gy-3 position-absolute top-50 start-50 translate-middle w-100"> <div class="col-12 col-sm-12 col-md-4 text-center"> <button onClick="window.location.href='dothraki.html';"> Dothraki</button> </div> <div class="col-12 col-sm-12 col-md-4 text-center"> <button onClick="window.location.href='valyrian.html';">Valyrian</button> </div> <div class="col-12 col-sm-12 col-md-4 text-center"> <button onClick="window.location.href='sindarin.html';">Sindarin</button> </div> </div> And then replace your button CSS with this (removing the button positioning): button { font-family: 'Almendra SC', serif; transition: 0.5s; padding: 15px 60px; text-decoration: none; font-size: 2vw; border-radius: 5px; border: 1px; transition: all 0.2s ease-in-out; color: rgba(255, 255, 255, 0.8); background: #146C94; } Also, I recommend you replace your buttons with anchor tags if they are only going to take the user to a page.
76383962
76385414
I have a list of activities that is generated dynamically with javascript in the following manner: const renderList = (activities) => { const display = document.getElementById('task-list-display'); activities.forEach((activity) => { console.log(activity); display.insertAdjacentHTML('beforeend', ` <li class="task-item draggable" draggable="true"> <div class="chk-descr"> <input data-a1="${activity.index}" type="checkbox" name="completed"/> <p data-b1="${activity.index}" class="description" contenteditable="true">${activity.description}</p> </div> </li> `); I want to have it be responsive in a way that the items can be rearranged using drag and drop. I am not able, however, to make this work. Previously I had designed the very same app but instead of having the items of the list be inserted using insertAdjacentHTML() I was creating each element using createElement() and then appending it to the corresponding HTML element using appendChild(). The drag and drop functionality on that app was fully functioning. My question is: is there some reason why drag and drop might not work with a dynamically generated list using insertAdjacentHTML?. Here is all the relevant code: let activities = []; const dragstart = (element) => { element.classList.add('skateover'); }; const dragover = (element, e) => { e.preventDefault(); element.classList.add('dragover'); }; const dragleave = (element) => { element.classList.remove('dragover'); }; const drop = (element) => { const skateover = document.querySelector('.skateover'); element.before(skateover); repopulateList(); element.classList.remove('dragover'); }; const dragend = (element) => { element.classList.remove('skateover'); }; const repopulateList = () => { const listItems = document.querySelectorAll('.draggable'); emptyList(); let i = 0; listItems.forEach((listItem) => { listItem.setAttribute('activity', i); i += 1; const description = listItem.getElementsByClassName('description')[0].textContent; const completed = listItem.getElementsByClassName('completed')[0].checked; const index = listItem.getAttribute('activity'); inputActivity(description, completed, index); }); }; const inputActivity = (description, completed, index) => { activities.push({ description, completed, index: parseInt(index, 10) }); }; And in the HTML file: <ul id="task-list-display"></ul>
Drag and drop not functioning on dynamically created list
That is not a problem. Here I add the eventlisteners to the unordered list (<ul>). So, the adding, cloning and removing of list items (<li>) is not an issue. There is no problem in using methods like insertAdjacentHTML(). In this example I just use cloneNode() for cloning the node that is moved and then insertBefore() to insert the cloned node before the list item that is hovered/dropped on. const aktivities = [{ index: 1, description: "Item 1" }, { index: 2, description: "Item 2" }, { index: 3, description: "Item 3" } ]; const display = document.getElementById('task-list-display'); const renderList = (activities) => { activities.forEach((activity) => { display.insertAdjacentHTML('beforeend', ` <li class="task-item draggable" draggable="true" data-id="${activity.index}"> <div class="chk-descr"> <input data-a1="${activity.index}" type="checkbox" name="completed"/> <p data-b1="${activity.index}" class="description" contenteditable="true">${activity.description}</p> </div> </li> `) }); }; display.addEventListener("dragstart", e => { e.dataTransfer.setData("text/plain", e.target.dataset.id); }); display.addEventListener("dragover", e => { e.preventDefault(); [...display.querySelectorAll('li')].forEach(li => li.classList.remove('over')); e.target.closest('li.task-item').classList.add('over'); }); display.addEventListener("drop", e => { e.preventDefault(); [...display.querySelectorAll('li')].forEach(li => li.classList.remove('over')); let original = document.querySelector(`li[data-id="${e.dataTransfer.getData("text/plain")}"]`); let clone = original.cloneNode(true); let target = e.target.closest('li.task-item'); display.insertBefore(clone, target); display.removeChild(original); }); renderList(aktivities); ul { margin: 0; padding: 0; list-style: none; } li div { display: flex; flex-direction: row; } .over { border-top: solid thin black; } <ul id="task-list-display"></ul>
76383234
76383601
I'm trying to write a generic env field getter function and currently have this: export interface Config { readonly PORT: number; readonly DATABASE_URL: string; // ... other fields } const config: Config = Object.freeze({ ENVIRONMENT, PROJECT_NAME, PORT: parseInt(getEnvVariable('PORT', '9000'), 10), DATABASE_URL: getEnvVariable('DATABASE_URL'), AWS_REGION, }); function getEnvVariable<T>(name: string, defaultValue?: T): T | string { const val = process.env[name]; if (val) { return val; } if (defaultValue) { return defaultValue; } throw new Error(`Missing environment variable: ${name}`); } and I'm currently getting the error: Type 'Readonly<{ ENVIRONMENT: string; PROJECT_NAME: string; PORT: number; DATABASE_URL: unknown; AWS_REGION: string; }>' is not assignable to type 'Config'. Types of property 'DATABASE_URL' are incompatible. Type 'unknown' is not assignable to type 'string'. For some reason if my defaultValue input is undefined, it will give this error, even though I do a truthy check. How would I fix this without using an '' as the defaultValue?
Typescript returns unknown for generic only on undefined input
If you don't pass in a defaultValue argument to getEnvVariable, then the compiler has no inference site for the generic type parameter T. So inference fails, and T falls back to its constraint, which is implicitly the unknown type. If you'd like T to fall back to something else, you can use a default type argument as shown here: function getEnvVariable<T = string>( // default type arg ----> ^^^^^^^^ name: string, defaultValue?: T ): T | string { /* impl */ } Then you'll get the desired behavior when defaultValue isn't supplied: const unsuppliedDefault = getEnvVariable("Y"); // const unsuppliedDefault: string without changing the behavior when it is: const suppliedDefault = getEnvVariable("X", Math.random()); // const suppliedDefault: string | number Playground link to code
76382016
76382233
Task: to create a tournament bracket according to the double elimination system. For the upper bracket, there were no problems, since the teams that won in the first round meet in the second, and so on. But for the lower bracket, in addition to the games among the losers in the first round, you need to add games among the losers in the second round of the upper bracket. For example, given an array of lower bracket games: const games = [ { id: 1, home_name: "Team 1", visitor_name: "Team 3", home_score: 1, visitor_score: 0, round: 2 }, { id: 2, home_name: "Team 6", visitor_name: "Team 7", home_score: 1, visitor_score: 0, round: 2 }, { id: 3, home_name: "Team 1", visitor_name: "Team 6", home_score: 0, visitor_score: 1, round: 3 }, { id: 4, home_name: "Team 4", visitor_name: "Team 5", home_score: 1, visitor_score: 0, round: 3 }, { id: 5, home_name: "Team 6", visitor_name: "Team 4", home_score: 1, visitor_score: 0, round: 4 }, ]; To display the structure of the lower bracket, you need to get the following object: { name: "Team 6", children: [ { name: "Team 4", children: [{ name: "Team 4" }, { name: "Team 5" }], }, { name: "Team 6", children: [ { name: "Team 1", children: [{ name: "Team 1" }, { name: "Team 3" }] }, { name: "Team 6", children: [{ name: "Team 6" }, { name: "Team 7" }] }, ], }, ], } In the parent object "Team 6" is the winner of the lower bracket, this can be understood from the last (4) round played between Team 4 and Team 6, in turn Team 4 is determined by the game between Team 4 and Team 5 in round 3. What the lower bracket will look like for these games: How to get such an object from the games array? The number of games may vary depending on the number of teams.
Recursive object construction
You could use a plain object to collect subtrees keyed by the winning team's name, starting out with an empty object. Then iterate the games in order of round and look up the two team's subtrees from that object (with a default {name} object). Then construct the children property from that and wrap it into a new root node. Register that node in the collection of subtrees. Finally retain the last object that was created which will have the whole tree: function getTree(games) { // Ensure the data is sorted by round -- if this is already ensured by caller, then you can drop this statement: games = [...games].sort((a, b) => a.round - b.round); const nodes = {}; // This will collect the subtrees, keyed by the winning team's name. let winner; for (const {home_name, home_score, visitor_name} of games) { const name = home_score ? home_name : visitor_name; winner = nodes[name] = { name, children: [ nodes[home_name] ?? { name: home_name }, nodes[visitor_name] ?? { name: visitor_name } ] }; } return winner; } const games = [ { id: 1, home_name: "Team 1", visitor_name: "Team 3", home_score: 1, visitor_score: 0, round: 2 }, { id: 2, home_name: "Team 6", visitor_name: "Team 7", home_score: 1, visitor_score: 0, round: 2 }, { id: 3, home_name: "Team 1", visitor_name: "Team 6", home_score: 0, visitor_score: 1, round: 3 }, { id: 4, home_name: "Team 4", visitor_name: "Team 5", home_score: 1, visitor_score: 0, round: 3 }, { id: 5, home_name: "Team 6", visitor_name: "Team 4", home_score: 1, visitor_score: 0, round: 4 }, ]; console.log(getTree(games));
76385383
76385418
I have this dataset ID Name 101 DR. ADAM SMITH 102 BEN DAVIS 103 MRS. ASHELY JOHNSON 104 DR. CATHY JONES 105 JOHN DOE SMITH Desired Output ID Name 101 ADAM SMITH 102 BEN DAVIS 103 ASHELY JOHNSON 104 CATHY JONES 105 JOHN DOE SMITH I need to get rid of the prefix. I tried df['Name'] = df['Name'].replace(to_replace = 'DR. ', value = ''). I repeated the same code for all prefixes, but when I do it nothing happens. Any reason for this? Thank you in advance.
Removing Prefix from column of names in python
Your original attempt does nothing because Series.replace() with a plain string only replaces values that match in full, not substrings inside each name. Use a regular expression to strip the first word if it ends with a period: df['Name'] = df['Name'].str.replace(r'^[A-Z]+\.\s+', '', regex=True)
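A quick, self-contained check of that pattern against the sample rows from the question:
import pandas as pd

df = pd.DataFrame({'ID': [101, 102, 103], 'Name': ['DR. ADAM SMITH', 'BEN DAVIS', 'MRS. ASHELY JOHNSON']})
df['Name'] = df['Name'].str.replace(r'^[A-Z]+\.\s+', '', regex=True)  # drop a leading all-caps word ending in "."
print(df['Name'].tolist())  # ['ADAM SMITH', 'BEN DAVIS', 'ASHELY JOHNSON']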
76383593
76383613
fn main() { let foo = 5; std::thread::spawn(|| { // closure may outlive the current function, but it borrows `foo`, which is owned by the current function println!("{}", foo); }) .join() .unwrap(); } Moving the value is not an option since I have to make multiple threads. The situation in my code is a bit more complicated, but I still need threads, and I ended up moving an Arc into it instead of just a reference. Here is a link to the line in the project, but you don't have to read it: https://github.com/Antosser/web-crawler/blob/5d23ffa7ed64c772080c7be08a26bda575028c7c/src/main.rs#L291
Closure might outlive current function even though it is joined
The compiler does not know it is joined. It does not apply any special analysis to see if threads are joined. However, if you join your threads, you can use scoped threads to access variables: fn main() { let foo = 5; std::thread::scope(|s| { s.spawn(|| { println!("{}", foo); }); // Thread implicitly joined here. }); }
76383571
76383624
I wanted to perform a mathematical function on each unique item in a data frame dynamically. Normally, to perform a mathematical function we use a mutate statement to create a column, and do the math manually by writing mutate statement after mutate statement, which is feasible for a few columns. But what if I have 100 columns and I have to perform 2-5 mathematical functions? For example: one would be a 20% increase on the initial number; the other one would be to divide the initial number by 2, on each column, keeping the original column as is. Is this possible in R without writing a mutate statement for each specific item? The data frame I am working with is: structure(list(`Row Labels` = c("2023-03-01", "2023-04-01", "2023-05-01", "2023-06-01", "2023-07-01", "2023-08-01", "2023-09-01", "2023-10-01" ), X6 = c(14, 16, 14, 11, 9, 9, 11, 11), X7 = c(50, 50, 50, 50, 50, 50, 50, 50), X8 = c(75, 75, 75, 75, 75, 75, 75, 75), X9 = c(100, 100, 100, 100, 100, 100, 100, 100), X11 = c(25, 25, 50, 75, 125, 200, 325, 525), X12 = c(50, 50, 100, 150, 250, 400, 650, 1050 )), class = c("tbl_df", "tbl", "data.frame"), row.names = c(NA, -8L)) For individual cases this code would suffice: library(readxl) library(dplyr) Book1 <- read_excel("C:/X/X/X- X/X/Book1.xlsx",sheet = "Sheet6") dput(Book1) Book1 <- Book1 %>% mutate(`X6 20%` = X6*1.20) %>% mutate(`X6 by 2`= X6/2) I was thinking of running this through a loop, but then selecting the columns to multiply becomes a problem, as we have to specify the column name in the mutate statement, which I believe would not be possible here, right? Can anyone let me know if this can be achieved with a simple approach? The expected output is given below:
Perform a specific Mathematical Function on each column dynamically in R
We could use across() update: shorter: library(dplyr) df %>% mutate(across(2:7, list("20" = ~. * 1.20, "By_2" = ~. / 2), .names = "{col}_{fn}")) first answer: library(dplyr) df %>% mutate(across(2:7, ~. * 1.20, .names = "{.col}_20%"), across(2:7, ~. /2, .names = "{.col}_By 2")) `Row Labels` X6 X7 X8 X9 X11 X12 `X6_20%` `X7_20%` `X8_20%` `X9_20%` `X11_20%` `X12_20%` `X6_By 2` `X7_By 2` `X8_By 2` `X9_By 2` `X11_By 2` `X12_By 2` <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> 1 2023-03-01 14 50 75 100 25 50 16.8 60 90 120 30 60 7 25 37.5 50 12.5 25 2 2023-04-01 16 50 75 100 25 50 19.2 60 90 120 30 60 8 25 37.5 50 12.5 25 3 2023-05-01 14 50 75 100 50 100 16.8 60 90 120 60 120 7 25 37.5 50 25 50 4 2023-06-01 11 50 75 100 75 150 13.2 60 90 120 90 180 5.5 25 37.5 50 37.5 75 5 2023-07-01 9 50 75 100 125 250 10.8 60 90 120 150 300 4.5 25 37.5 50 62.5 125 6 2023-08-01 9 50 75 100 200 400 10.8 60 90 120 240 480 4.5 25 37.5 50 100 200 7 2023-09-01 11 50 75 100 325 650 13.2 60 90 120 390 780 5.5 25 37.5 50 162. 325 8 2023-10-01 11 50 75 100 525 1050 13.2 60 90 120 630 1260 5.5 25 37.5 50 262. 525
76382036
76382266
I have a random seed set at the start of my run for reproducibility. But there are a few sub-functions (e.g. rando) that also use random numbers. If I used a different random number seed just for those, it affects the random seed outside of the function. Is it possible to set the random seed and use it only locally inside the function and the random state outside the function does not get affected? I believe I can always get the random state, save it and restore it. Would there be an easier option? I showed an example below. import numpy as np def rando(): np.random.seed(420) np.random.randint(1, 100) np.random.randint(1, 100) return None np.random.seed(69) for n in range(3): np.random.randint(1,100) # outputs : 55,76,74 for n in range(3): np.random.randint(1,100) # outputs : 91,56,21 Is it possible to make the function below also output the same thing? np.random.seed(69) for n in range(3): np.random.randint(1,100) # outputs : 55,76,74 rando() for n in range(3): np.random.randint(1,100) # would like it to output : 91,56,21
Python Local random seed
That's why there are numpy random generators and that is why they recommend using that. Just define one generator for each instance, e.g.: def rando(rng): print('function') print(rng.integers(1, 100)) print(rng.integers(1, 100)) print('end of function') return None rng1 = np.random.default_rng(69) rng2 = np.random.default_rng(420) for n in range(3): print(rng1.integers(1, 100)) # outputs : 6,58,67 rando(rng2) # outputs 62, 77 for n in range(3): print(rng1.integers(1, 100)) # would like it to output : 53,78,86 yielding: 6 58 67 function 62 77 end of function 53 78 86 and when you comment out the function call, you get: 6 58 67 53 78 86
76385267
76385442
I'm making a site using Remix where I'd like to persist session values across pages. I think the issue lies in the getSession request, as the values do not persist across requests to the same page. I have implemented a session cookie in sessions.ts: const { getSession, commitSession, destroySession } = createCookieSessionStorage<SessionData, SessionFlashData>( { //cookie options to create a cookie cookie: { name: "__session", maxAge: 1200, path: "/", sameSite: "none", secure: true, secrets: ["surprise"] }, } ); On one page I set a value and log it out and receive the expected value export const loader = async ({ request }: LoaderArgs) => { const session = await getSession( request.headers.get("Cookie") ); session.set("token", "abc123") var data = { "count": 2 } console.log(session.get("token")) return json(data, { headers: { "Set-Cookie": await commitSession(session), }, }); }; however when i try to access the value in a different page, the value is undefined export const loader = async ({ request }: LoaderArgs) => { const session = await getSession( request.headers.get("Cookie") ); var data = { "abc": 442 } console.log(session.get("token")) return json(data, { headers: { "Set-Cookie": await commitSession(session), }, }); return null }; I'm very new to remix and react so appreciate any help!
Remix session values don't persist across pages
The issue was to do with the sameSite and secure options. As I am working locally (plain HTTP, no HTTPS), secure must be set to false, which means sameSite must be either "lax" or "strict" rather than "none".
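For reference, a minimal sketch of the adjusted cookie options for local development, reusing the names from the question (flip secure back to true and revisit sameSite for production over HTTPS):
const { getSession, commitSession, destroySession } =
  createCookieSessionStorage<SessionData, SessionFlashData>({
    cookie: {
      name: "__session",
      maxAge: 1200,
      path: "/",
      sameSite: "lax",   // "none" is only accepted by browsers when the cookie is also Secure
      secure: false,     // local development only -- there is no HTTPS on localhost
      secrets: ["surprise"],
    },
  });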
76381732
76382299
I have a problem where I have to find the optimal cost of 3 given motors. Motor 1 has a range of 100 - 300 Motor 2 has a range of 400 - 1000 Motor 3 has a range of 50 - 250 They have a target value of 600 Motor 1 price is 5000 Motor 2 price is 5500 Motor 3 price is 5250 The equation looks like this: Cost = Motor1 * 5000 + Motor2 * 5500 + Motor3 * 5250. And a very important part: NOT every motor needs to run. I have Python code that can calculate it, but I can't tell it that not every motor needs to be included. Here is the code: from pulp import LpProblem, LpVariable, LpMinimize def find_lowest_cost(): # Define the problem problem = LpProblem("Motor Optimization", LpMinimize) # Define the decision variables x = LpVariable("Motor1", lowBound=100, cat='Integer') # Power of motor 1 y = LpVariable("Motor2", lowBound=0, cat='Integer') # Power of motor 2 z = LpVariable("Motor3", lowBound=50, cat='Integer') # Power of motor 3 # Define the objective function (cost) problem += x * 5000 + y * 5500 + z * 5250 # Define the constraints problem += x >= 100 # Motor 1 lower bound problem += x <= 300 # Motor 1 upper bound problem += y >= 350 # Motor 2 lower bound problem += y <= 1000 # Motor 2 upper bound problem += z >= 50 # Motor 3 lower bound problem += z <= 250 # Motor 3 upper bound problem += x + y + z == 500 # Total power constraint # Solve the problem problem.solve() # Retrieve the optimal solution lowest_cost = problem.objective.value() best_combination = (x.value(), y.value(), z.value()) return lowest_cost, best_combination cost, combination = find_lowest_cost() print("Lowest cost:", cost) print("Motor combination:", combination) I tried to add 'or' to the 'Define the constraints' part, but it did not help: problem += x >= 100 or x ==0 # Motor 1 lower bound problem += x <= 300 # Motor 1 upper bound problem += y >= 350 or y == 0 # Motor 2 lower bound problem += y <= 1000 # Motor 2 upper bound problem += z >= 50 or z == 0 # Motor 3 lower bound problem += z <= 250 # Motor 3 upper bound problem += x + y + z == 500 # Total power constraint So my question is, how do I implement that 'or' in my code? Thank you in advance.
How to put 'or' into contraints in Pulp in python
I make some assumptions: Continuous power of motors, not integral Observe the minima in your variable bounds, not the redundant and inconsistent constraints added later Use 500 as a target, not 600 You need binary selection variables, like this: from pulp import LpProblem, LpVariable, LpMinimize, LpContinuous, lpDot, LpBinary, lpSum powers = ( LpVariable('Motor1', cat=LpContinuous, upBound=300), LpVariable('Motor2', cat=LpContinuous, upBound=1000), LpVariable('Motor3', cat=LpContinuous, upBound=250), ) used = LpVariable.matrix(name='MotorUsed', cat=LpBinary, indices=range(len(powers))) problem = LpProblem(name='Motor_Optimization', sense=LpMinimize) problem.objective = lpDot(powers, (5000, 5500, 5250)) problem.addConstraint(name='target', constraint=lpSum(powers) == 500) for power, power_min, use in zip( powers, (100, 0, 50), used, ): problem.addConstraint(power >= power_min*used) problem.addConstraint(power <= 1000*used) problem.solve() combination = [p.value() for p in powers] print('Lowest cost:', problem.objective.value()) print('Motor combination:', combination) Result - Optimal solution found Objective value: 2550000.00000000 Enumerated nodes: 0 Total iterations: 0 Time (CPU seconds): 0.01 Time (Wallclock seconds): 0.01 Option for printingOptions changed from normal to all Total time (CPU seconds): 0.01 (Wallclock seconds): 0.01 Lowest cost: 2550000.0 Motor combination: [300.0, 0.0, 200.0]
76383521
76383631
I am trying to calculate future dates by adding a column with a number of days, df['num_days'], to another column, df["sampling_date"], but I'm getting Overflow in int64 addition. Source code: df['sampling_date']=pd.to_datetime(df['sampling_date'], errors='coerce') df['future_date'] = df['sampling_date'] + pd.to_timedelta(df['num_days'], unit='D') df['future_date'] = pd.to_datetime(df['future_date']).dt.strftime('%Y-%m-%d') df['future_date'] = df['future_date'].astype(np.str) df['future_date'] = np.where(df['num_days']<=0,0, df['future_date']) For column df['num_days'], the values are as follows: [0, 866, 729, 48357555, 567, 478]. I am trying to run this on a unix server. Please help me resolve it.
How to fix - Overflow in int64 addition
The issue is this value: 48357555 You can create a simple function as shown below to return NaT if error is thrown: import numpy as np import pandas as pd # Here is an example df df = pd.DataFrame({ 'sampling_date': ['2022-01-01', '2022-02-01', '2022-03-01', '2022-04-01', '2022-05-01', '2022-06-01'], 'num_days': [0, 866, 729, 48357555, 567, 478] }) df['sampling_date'] = pd.to_datetime(df['sampling_date'], errors='coerce') def calculate_future_date(row): try: return row['sampling_date'] + pd.to_timedelta(row['num_days'], unit='D') except: return pd.NaT # Apply the function to each row df['future_date'] = df.apply(calculate_future_date, axis=1) df['future_date'] = np.where(df['num_days'] <= 0, df['sampling_date'], df['future_date']) df['future_date'] = df['future_date'].dt.strftime('%Y-%m-%d').replace(pd.NaT, '0').astype(str) print(df) sampling_date num_days future_date 0 2022-01-01 0 2022-01-01 1 2022-02-01 866 2024-06-16 2 2022-03-01 729 2024-02-28 3 2022-04-01 48357555 0 4 2022-05-01 567 2023-11-19 5 2022-06-01 478 2023-09-22
76383630
76383660
Which option is better to remove log files on aws s3 for rails7? s3 automation vs cron job I wrote some rake tasks. task :delete_stale_logs do s3 = Aws::S3::Resource.new( region: ENV['AWS_REGION'], access_key_id: ENV['AWS_ACCESS_KEY_ID'], secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'] ) bucket = s3.bucket(ENV['AWS_BUCKET']) bucket.objects.each do |object| if object.key.include?('.log') && object.last_modified < Time.now - 30.days object.delete puts "Deleted #{object.key}" end end end end
Remove log files on aws s3 (rails7)
A cron job is better in case you have conditional logic around the logs (like your 30-day / .log check); otherwise use S3 automation (lifecycle rules). If you go for cron jobs, you also have to handle monitoring to check whether the jobs fail or not. Check here for more info.
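If you schedule the rake task from the question with cron, a sketch using the whenever gem could look like this (the task name is taken from the question; adjust it if the task lives inside a namespace):
# config/schedule.rb
every 1.day, at: '3:00 am' do
  rake 'delete_stale_logs'
end
Running bundle exec whenever --update-crontab then writes the corresponding crontab entry; failed runs still need to be monitored separately, as noted above.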
76385395
76385443
I am creating a simple search bar element in React JSX. I'm trying to render a list of elements that include whatever is in a search query. I basically take an array of all the elements and then I use .filter() function to find everything that includes the query. After that I use .map() function to loop through the results and render elements for each object. I needed to create two different functions for two different datasets as one is an array deeper. <ul> { this.state.searchQuery && this.props.searchDB.projects.filter((project)=> { if (this.state.searchQuery === '' || this.state.searchQuery === null) { return project; } else if (project.projectName.toLowerCase().includes(this.state.searchQuery.toLowerCase())) { return project; } else { return null } }).map((project, index) => { //THIS WORKS AS EXPECTED return( <li key={'project_' + index}> {index + '_' + project.projectName} </li> ) }) } { this.state.searchQuery && this.props.searchDB.mentions.forEach((mentionYear) => { mentionYear.filter((mention) => { if (this.state.searchQuery === '' || this.state.searchQuery === null) { return mention } else if (mention.mentionTitle.toLowerCase().includes(this.state.searchQuery.toLowerCase())) { return mention } else { return null } }).map((mention, mentionIndex) => { console.log(mention.mentionTitle) //THIS LOGS DATA AS IT SHOULD BUT DOESN'T RENDER ELEMENTS return( <li key={'project_' + mentionIndex}> {mentionIndex + '_' + mention.mentionTitle} </li> ) }) } ) } </ul> The first function works fine and returns a element as it should. For some reason the second one does not and it doesn't return any element at all, even though it is basically the same code. Strange is that the data is there, I can log it from the map function and I can see that it is filtered properly. Can someone explain to me what's wrong? I tried quite a lot of possibile mistakes I could have made but I didn't find anything.
What could be causing my second React JSX function to fail to return elements, despite properly filtered data?
The second version never does anything with the result of .map(). It's invoked inside a forEach() callback, but its result is discarded. Contrast that to the first version where the result of .map() is part of the JSX and rendered. Don't use forEach() for this. If the intent is that the .forEach() iteration should produce a result just like in the first version, then what you want isn't forEach(). What you want is .map(). For example: { this.state.searchQuery && this.props.searchDB.mentions.map((mentionYear) => { return mentionYear.filter((mention) => { // etc. }).map((mention, mentionIndex) => { // etc. }); }) } Basically, repeat the same pattern/structure you already know works when iterating over a collection in JSX to produce a rendered result.
76383595
76383682
I wanted to print out a key at a specific position (like 1) in a dictionary, but the code didn't seem to work at all. Hobbies={ football(american):1 baseball:2 basketball:3 playing_cards:4 swimming:5 soccer:7 } I used this line: print(Hobbies[1]) But it got an error. How should I fix it?
Is there a way to get the key at a specified position?
First off, you probably shouldn't do this, because this is not how dictionaries were intended to be accessed. But if you really need to do this, probably the most straightforward way is to get the list of keys from the dictionary, and then access the dictionary using the first key. Something like: list_of_keys = [key for key in Hobbies.keys()] key_of_interest = list_of_keys[0] value_of_interest = Hobbies[key_of_interest] Or as a one-liner: value_of_interest = Hobbies[[key for key in Hobbies.keys()][0]] This may also work, but I'm not sure if the order of values is guaranteed the same way the order of keys is. It probably is, but I can't say for sure: value_of_interest = [value for value in Hobbies.values()][0]
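Since Python 3.7, dicts preserve insertion order (for keys and values alike), so a shorter equivalent sketch for the first position is:
first_key = next(iter(Hobbies))        # the key at position 0
value_of_interest = Hobbies[first_key]
For an arbitrary position n you would still need something like list(Hobbies)[n], with the same caveat that positional access is not what dicts are designed for.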
76382170
76382315
I Got Element implicitly has an 'any' type because index expression is not of type 'number'.ts(7015) error I have Improted this file import { useAmazon } from "@context/amazon"; I have Used hear const { data, current_product } = useAmazon(); and Got error const main_image = data[current_product].main_image; // error in current_product @context/amazon file import { JSObject } from "@classes/JSObject"; import React, { useState, useEffect, useContext, createContext } from "react"; import axios from "axios"; import { Product } from "types"; type AmazonContextType = { data: Product[] | null; current_product: string; setProduct: (product: string) => void; offers: JSObject | null; setOffers: (offer: JSObject | null) => void; }; type Props = { children: React.ReactNode; }; const amazonDefault: AmazonContextType = { data: null, current_product: '', setProduct: () => undefined, offers: null, setOffers: () => undefined, }; const AmazonContext = createContext<AmazonContextType>(amazonDefault); export function useAmazon() { return useContext(AmazonContext); } export function AmazonProvider({ children }: Props) { const [data, setData] = useState(amazonDefault.data); const [current_product, setProduct] = useState(amazonDefault.current_product); const [offers, setOffers] = useState(amazonDefault.offers); useEffect(() => { const fetchData = async () => { const url = "https://getdata.com/abc"; const result = await axios(url); setData(result.data.payload); }; fetchData(); }, []); return ( <AmazonContext.Provider value={{ data, current_product, setProduct, offers, setOffers }} > {children} </AmazonContext.Provider> ); } How i can Solve this Error const main_image = typeof current_product === 'string' ? data[current_product as keyof typeof data].main_image : null; I have also try chatGPT and google bard keyword typescript next.js reactjs javascript json
How can I resolve TS7015 in my TypeScript/Next.js/React project when using an index expression that is not of type number?
You can't index an array of products with a string using the square bracket syntax, which is why TypeScript rejects data[current_product]. Assuming that current_product holds the product's name (it might be an id — I can't see the Product type, so adjust the property accordingly), you need to look the product up instead: const main_image = data !== null ? data.find(product => product.name === current_product)?.main_image ?? null : null;
76384888
76385463
I'm trying to create a button that changes background color (to green) when it is clicked, and when it is clicked again returns to the original background color (orange). var btn1 = document.getElementById("btn-1") if (btn1.style.backgroundColor = "orange") { btn1.addEventListener("click", function () { btn1.style.backgroundColor = "green" }) } else {btn1.addEventListener("click", function () { btn1.style.backgroundColor = "orange" }) } Could you help me? Thx!
Change Button's color twice when it's clicked
let button = document.getElementById("button"); button.style.backgroundColor = "orange"; button.addEventListener("click", function () { if(button.style.backgroundColor == "orange"){ button.style.backgroundColor = "green"; } else button.style.backgroundColor = "orange"; }) <button id="button">test</button> As I understand you: set the button's starting color to orange, then add an event listener to the button with this logic: if the color of the button is orange, change it to green; otherwise change it back to orange.
76383370
76383691
I've been learning Vue 3 for the past month or so and have gotten quite far but I can't fix this no matter what I've tried. I know I'm losing reactivity but I can't figure out how and it's driving me nuts. I am using the Composition API and script setup with a simple Pinia store. I created a github repo for it here: https://github.com/thammer67/vue3-reactivity-problem I have a view (ProjectsView.vue) of project elements that loops through a pinia store array of projects using v-for and passing the array object as a prop. ProjectsView.vue uses a hidden form component (ProjectForm.vue) that I use for adding new projects. Each project in the loop is another component (ProjectItem.vue) with a click handler to a route that loads ProjectDetail.vue. ProjectDetail.vue has a click handler that also uses ProjectForm.vue for editing the item. Everything works great. I can add new projects, edit projects but when I edit a project the pinia store updates (I can see this in the Vue Dev tools) but the UI doesn't update untill I go back to the project list. I need to update the value in ProjectDetail.vue after saving. Here are the pertinent files. ProjectDetail.vue: <script setup> import { useProjectStore } from '../stores/ProjectStore' import { useRoute } from 'vue-router' import { ref } from 'vue' import ProjectForm from '@/components/Form/ProjectForm.vue' const projectStore = useProjectStore() const route = useRoute() const id = route.params.id const project = projectStore.getProjectById(id) const showEditProject = ref(false) const editing = ref(false) const editProject = (id)=> { editing.value = id showEditProject.value = true } </script> <template> <div class="main"> <div v-if="project" :project="project"> <h2>Project Details</h2> <div> <div class="project-name">{{ project.project }}</div> </div> <div style="margin-top: 1em"> <button type="button" @click="editProject(project.id)">Edit</button> </div> <ProjectForm @hideForm="showEditProject=false" :project="project" :editing="editing" :showAddEntry="showEditProject" /> </div> </div> </template> ProjectForm.vue: <script setup> import { ref, toRef, reactive } from "vue" import { useProjectStore } from '@/stores/ProjectStore.js' import Input from './Input.vue' const projectStore = useProjectStore() const showAddType = ref(false) //Capture 'showAddEntry' prop from parent component const props = defineProps(['showAddEntry', 'editing', 'project']) //Copy prop values for the form const projName = toRef(props.project.project) const projId = toRef(props.project.id) //new/edited values are stored on this reactive object const formState = reactive({ invalid: false, errMsg: "" }) const saveProject = () => { formState.invalid = false if(projId.value) { console.log(`Update existing project ${projId.value}`) projectStore.updateProject({ id: projId.value, project: projName.value }) .then(()=> { console.log("save was successful!") showAddType.value = false formState.invalid = false formState.errMsg = "" emit('hideForm') }) .catch(err=>console.log("Error: ", err)) } else { console.log(`Create new project`) //New Project projectStore.createProject({ project: projName.value, }) .then(()=> { showAddType.value = false formState.invalid = false formState.errMsg = "" emit('hideForm') }) } } const hideForm = ()=> { formState.invalid = false showAddType.value=false emit('hideForm') } //Define emit event up to the parent that hides the form const emit = defineEmits(['hideForm']) </script> <template> <div class="addform" :class="{ show: props.showAddEntry }"> <h1 v-if="editing" 
class="title">Edit Project</h1> <h1 v-else class="title">Add New Project</h1> <div class="input-wrap" :class="{ 'input-err' : formState.invalid }"> <Input @input="projName = $event.target.value" type="text" placeholder="Enter project name" :value="projName" /> <div class="entry-submit"> <button v-if="editing" @click="saveProject">Save</button> <button v-else @click="saveProject">Create Project</button> <button @click="hideForm">Cancel</button> </div> </div> <p v-show="formState.invalid" class="err-msg">{{ formState.errMsg }}</p> </div> </template>
Vue 3 Component losing reactivity
project in ProjectDetails.vue is not aware of changes being made to it in the store. It will if you wrap it with computed() import { computed } from 'vue' const project = computed(() => projectStore.getProjectById(id))
76381667
76382386
I have a problem making Elasticsearch regex work. I have a document that looks like this: {"content": "keySyAtUXpd8JxrpUH2Sd"} I have tried the following regex key[0-9A-Za-z_]{18} which perfectly matches the string on regexer.com, but when I send the query to Elasticsearch it doesn't show any hits. Here's the request that I'm using: curl -XGET 'https://localhost:9200/_search?pretty' -H 'Content-Type: application/json' -H 'Authorization: Basic redacted' -k -d '{ "query": { "regexp": { "content": "key[0-9A-Za-z_]{18}" } } }' I have also tried the regex as .*key[0-9A-Za-z_]{18}.* and tried to escape - as \\-, but that doesn't seem to work either.
Elastic Search Regex are not working as expected
You need to run the regexp query against the content.keyword field curl -XGET 'https://localhost:9200/_search?pretty' -H 'Content-Type: application/json' -H 'Authorization: Basic redacted' -k -d '{ "query": { "regexp": { "content.keyword": "key[0-9A-Za-z_]{18}" } } }' PS: easier to test and provide feedback with real content and real queries ;-)
76380607
76382427
I have this situation: I'm building a .net Maui smartphone sports app that grabs a list of latitude and longitude (new Location class) of a running activity and draws a line (polyline) in the map to display the route. I can grab the list of exercises from the database and I can draw a polyline in the map, the problem is that I can't do both together because I don't know how to databind the Map functionalitys in my ViewModel class. Here is my xaml code for the ExercisePage.xaml: <?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" x:Class="DoradSmartphone.Views.ExercisePage" xmlns:model="clr-namespace:DoradSmartphone.Models" xmlns:viewmodel="clr-namespace:DoradSmartphone.ViewModels" xmlns:maps="clr-namespace:Microsoft.Maui.Controls.Maps;assembly=Microsoft.Maui.Controls.Maps" xmlns:sensors="clr-namespace:Microsoft.Maui.Devices.Sensors;assembly=Microsoft.Maui.Essentials" x:DataType ="viewmodel:ExerciseViewModel" Title="{Binding Title}"> <Grid Padding="5" Margin="5" RowSpacing="5" ColumnSpacing="3"> <Grid.RowDefinitions> <RowDefinition Height="2*"/> <RowDefinition Height="150"/> </Grid.RowDefinitions> <maps:Map Grid.Row="0" x:Name="routeMap" VerticalOptions="CenterAndExpand" Grid.ColumnSpan="3" HeightRequest="400" IsZoomEnabled="False" IsEnabled="False"> <x:Arguments> <MapSpan> <x:Arguments> <sensors:Location> <x:Arguments> <x:Double>38.744418137669875</x:Double> <x:Double>-9.128544160596851</x:Double> </x:Arguments> </sensors:Location> <x:Double>0.7</x:Double> <x:Double>0.7</x:Double> </x:Arguments> </MapSpan> </x:Arguments> </maps:Map> <CarouselView ItemsSource="{Binding Exercises}" Grid.Row="1" PeekAreaInsets="100"> <CarouselView.ItemTemplate> <DataTemplate x:DataType="model:Exercise"> <Frame HeightRequest="90" Margin="5"> <Frame.GestureRecognizers> <TapGestureRecognizer Command="{Binding Source={RelativeSource AncestorType={x:Type viewmodel:ExerciseViewModel}}, Path=ExerciseDetailsCommand} " CommandParameter="{Binding .}"></TapGestureRecognizer> </Frame.GestureRecognizers> <HorizontalStackLayout Padding="10" Spacing="5" > <Label Text="{Binding Id}"></Label> <Label Text="{Binding Date}"></Label> </HorizontalStackLayout> </Frame> </DataTemplate> </CarouselView.ItemTemplate> </CarouselView> </Grid> </ContentPage> As you can see I have my map name declared as routeMap and the first location just to start in somewhere. I also has my model and viewmodel declared for DataBinding of the exercise list in the CarouselView. The tap feature works fine and take me to a new view called ExerciseDetailsPage. 
This is the code behind ExercisePage.xaml.cs using DoradSmartphone.Models; using DoradSmartphone.ViewModels; using Microsoft.Maui.Controls.Maps; using Microsoft.Maui.Maps; namespace DoradSmartphone.Views; public partial class ExercisePage : ContentPage { public ExercisePage(ExerciseViewModel exerciseViewModel) { InitializeComponent(); BindingContext = exerciseViewModel; } private void OnTapGestureRouteUpdate(object sender, EventArgs e) { var route = new Polyline { StrokeColor = Colors.Red, StrokeWidth = 12, Geopath = { new Location(38.70061856336034 , -8.957381918676203 ), new Location(38.70671683905933 , -8.945225024701308 ), new Location(38.701985630081595, -8.944503277546072 ), new Location(38.701872978433386, -8.940750192338834 ), new Location(38.71054663609023 , -8.939162348597312 ), new Location(38.717755109243214, -8.942193686649311 ), new Location(38.7435419727561 , -8.928480490699792 ), new Location(38.78327379379296 , -8.880556478454272 ), new Location(38.925473761602376, -8.881999972299806 ), new Location(38.93692729913667 , -8.869585920414709 ), new Location(38.93493556584553 , -8.86536198145887 ) } }; routeMap.MoveToRegion( MapSpan.FromCenterAndRadius( new Location(38.93479161472441, -8.865352563545757), Distance.FromMiles(1))); // Add the polyline to the map routeMap.MapElements.Add(route); } } If I change the actual tap functionality to this tap event, I can drawn any line and other stuffs with in the Map because I can read the map name defined in the xaml code. But in this codebehind class I can't reach my ViewModel, Services or Model class. This is my ExerciseViewModel.cs class: using CommunityToolkit.Mvvm.ComponentModel; using CommunityToolkit.Mvvm.Input; using DoradSmartphone; using DoradSmartphone.Models; using DoradSmartphone.Services; using DoradSmartphone.Views; using Microsoft.Maui.Controls.Maps; using Microsoft.Maui.Maps; using System.Collections.ObjectModel; namespace DoradSmartphone.ViewModels { public partial class ExerciseViewModel : BaseViewModel { private readonly ExerciseService exerciseService; public ObservableCollection<Exercise> Exercises { get; private set; } = new(); public ExerciseViewModel(ExerciseService exerciseService) { Title = "Training Routes"; this.exerciseService = exerciseService; _ = GetExerciseList(); } [ObservableProperty] bool isRefreshing; async Task GetExerciseList() { if (IsLoading) return; try { IsLoading = true; if (Exercises.Any()) Exercises.Clear(); var exercices = exerciseService.GetExercises(); foreach (var exercise in exercices) Exercises.Add(exercise); } catch(Exception ex) { Console.WriteLine(ex.ToString()); await Shell.Current.DisplayAlert("Error", "Failed to retrieve the exercice list", "Ok"); } finally { IsLoading = false; isRefreshing= false; } } [RelayCommand] async Task ExerciseDetails(Exercise exercise) { if(exercise == null) return; var routes = GetLocations(exercise.Id); DrawRoutes(routes); } public List<Location> GetLocations(int exerciseId) { if (exerciseId == 1) { return new List<Location> { new Location(35.6823324582143, 139.7620853729577), new Location(35.679263477092704, 139.75773939496295), new Location(35.68748054650018, 139.761486207315), new Location(35.690745005825136, 139.7560362984393), new Location(35.68966608916097, 139.75147199952355), new Location(35.68427128680411, 139.7442168083328) }; } else if (exerciseId == 2) { return new List<Location> { new Location(35.6823324582143, 139.7620853729577), new Location(35.679263477092704, 139.75773939496295), new Location(35.68748054650018, 139.761486207315), 
new Location(35.690745005825136, 139.7560362984393), new Location(35.68966608916097, 139.75147199952355), new Location(35.68427128680411, 139.7442168083328) }; } else { return new List<Location> { new Location(35.6823324582143, 139.7620853729577), new Location(35.679263477092704, 139.75773939496295), new Location(35.68748054650018, 139.761486207315), new Location(35.690745005825136, 139.7560362984393), new Location(35.68966608916097, 139.75147199952355), new Location(35.68427128680411, 139.7442168083328) }; } } private void DrawRoutes(List<Location> routes) { var polylines = new Polyline { StrokeColor = Colors.Red, StrokeWidth = 12, }; foreach(var route in routes) { polylines.Geopath.Add(route); } routeMap.MoveToRegion( MapSpan.FromCenterAndRadius( routes.FirstOrDefault(), Distance.FromMiles(1))); // Add the polyline to the map routeMap.MapElements.Add(polylines); } } } This class inherits the BaseViewModel that inherits ObservableObject and has some common properties for all others classes. In the ExerciseViewModel I have my RelayCommand related to the tap feature that grabs the exercise object and add the route, but I cant access the routeMap object. I've tried also to declare a Map class in my viewmodel class, but I get the error all the time that I can't create a instance of a static class. This is my MauiProgram.cs just in case there's something wrong: using DoradSmartphone.Data; using DoradSmartphone.Services; using DoradSmartphone.ViewModels; using DoradSmartphone.Views; namespace DoradSmartphone; public static class MauiProgram { public static MauiApp CreateMauiApp() { var builder = MauiApp.CreateBuilder(); builder .UseMauiApp<App>() .UseMauiMaps() .ConfigureFonts(fonts => { fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular"); fonts.AddFont("OpenSans-Semibold.ttf", "OpenSansSemibold"); }); builder.Services.AddSingleton<DatabaseConn>(); builder.Services.AddScoped<IRepository, DatabaseConn>(); builder.Services.AddSingleton<MainPage>(); builder.Services.AddSingleton<UserPage>(); builder.Services.AddSingleton<LoginPage>(); builder.Services.AddSingleton<LoadingPage>(); builder.Services.AddSingleton<ExercisePage>(); builder.Services.AddSingleton<DashboardPage>(); builder.Services.AddSingleton<ExerciseDetailsPage>(); builder.Services.AddSingleton<UserService>(); builder.Services.AddSingleton<LoginService>(); builder.Services.AddSingleton<ExerciseService>(); builder.Services.AddSingleton<DashboardService>(); builder.Services.AddSingleton<UserViewModel>(); builder.Services.AddSingleton<LoginViewModel>(); builder.Services.AddSingleton<LoadingViewModel>(); builder.Services.AddSingleton<ExerciseViewModel>(); builder.Services.AddSingleton<DashboardViewModel>(); builder.Services.AddTransient<ExerciseDetailsViewModel>(); return builder.Build(); } } Thank you in advance!
.Net Maui Google Maps Polyline drawning route feature
unfortunately, MapElements is not a bindable property. However, you can work around that in a couple of ways for example, create a public method in your VM that returns the route data public Polyline GetRouteData() { var polylines = new Polyline { StrokeColor = Colors.Red, StrokeWidth = 12, }; foreach(var route in routes) { polylines.Geopath.Add(route); } return polylines; } then in your code behind, first create a class reference to the VM ExerciseViewModel ViewModel; public ExercisePage(ExerciseViewModel exerciseViewModel) { InitializeComponent(); BindingContext = ViewModel = exerciseViewModel; } then your code behind can get the data from the VM that it needs to update the map routeMap.MapElements.Add(ViewModel.GetRouteData());
76385151
76385493
In column X are those variables that will have values in each column j, in this case only U1, X4 and U2 have values, the rest of the variables belonging to the list ['B', 'X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2'] will all have their values 0 #example matrix new_matrix = [[ 'C', 'X', 'B', 'X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2'], [ 0.0, 'U1', 8, 2.0, 1.0, -1.0, 0, 0, 1.0, 0], ['+M', 'X4', 2, 1.0, 1.0, 0, 1.0, 0, 0, 0], ['+M', 'U2', 8, 1.0, 2.0, 0, 0, -1.0, 0, 1.0]] variables = ['X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2', 'B'] #select the first row (only variables) variables_j_col_values = [[variables.pop(variables.index('B'))] + variables, []] # --> ['B', 'X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2'] The problem with this is that I need create the following matrix of values of the variables (without using libraries) where I would have the following: variables_j_col_values = [['B', 'X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2'], [ 0 , 0, 0, 0, 2, 0, 8, 8], #column new_matrix[][2] [ 0 , 0, 0, 0, 1.0, 0, 2.0, 1.0], #column new_matrix[][3] [ 0 , 0, 0, 0, 1.0, 0, 1.0, 2.0], #column new_matrix[][4] [ 0 , 0, 0, 0, 0, 0, -1.0, 0], #column new_matrix[][5] [ 0 , 0, 0, 0, 1.0, 0, 0, 0], #column new_matrix[][6] [ 0 , 0, 0, 0, 0, 0, 0, -1.0], #column new_matrix[][7] [ 0 , 0, 0, 0, 0, 0, 1.0, 0], #column new_matrix[][8] [ 0 , 0, 0, 0, 0, 0, 0, 1.0], ] #column new_matrix[][9] After create the variables_j_col_values, go replacing the values of the rows (except for row 0 of the variables_j_col_values array because it is a header) in the string inside funcion_obj_z The logic would be to use a loop that goes through the rows, and does a .replace(new_matrix[][n], this_element) funcion_obj_z = 'Z = 3 * X1 + 2 * X2 + 0 * X3 + 0 * X4 + 0 * X5 + M * U1 + M * U2' In this way, using said string as an expression, it would obtain these prints in the console if it printed the value of j_func in each j iteration. These would be the desired correct output: #for loop, print the j string replacement the values in the string j_func = 'Z = 3 * 0 + 2 * 0 + 0 * 0 + 0 * 2 + 0 * 0 + M * 8 + M * 8' #iteration 1 j_func = 'Z = 3 * 0 + 2 * 0 + 0 * 0 + 0 * 1.0 + 0 * 0 + M * 2.0 + M * 1.0' #iteration 2 j_func = 'Z = 3 * 0 + 2 * 0 + 0 * 0 + 0 * 1.0 + 0 * 0 + M * 1.0 + M * 2.0' #iteration 3 j_func = 'Z = 3 * 0 + 2 * 0 + 0 * 0 + 0 * 0 + 0 * 0 + M * -1.0 + M * 0' #iteration 4 j_func = 'Z = 3 * 0 + 2 * 0 + 0 * 0 + 0 * 1.0 + 0 * 0 + M * 0 + M * 0' #iteration 5 j_func = 'Z = 3 * 0 + 2 * 0 + 0 * 0 + 0 * 0 + 0 * 0 + M * 0 + M * -1.0' #iteration 6 j_func = 'Z = 3 * 0 + 2 * 0 + 0 * 0 + 0 * 0 + 0 * 0 + M * 1.0 + M * 0' #iteration 7 j_func = 'Z = 3 * 0 + 2 * 0 + 0 * 0 + 0 * 0 + 0 * 0 + M * 0 + M * 1.0' #iteration 8
Replacing values in a string expression based on a matrix and iterating over columns
Albeit an ugly solution, this should give you the transformation you need: new_matrix = [[ 'C', 'X', 'B', 'X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2'], [ 0.0, 'U1', 8, 2.0, 1.0, -1.0, 0, 0, 1.0, 0], ['+M', 'X4', 2, 1.0, 1.0, 0, 1.0, 0, 0, 0], ['+M', 'U2', 8, 1.0, 2.0, 0, 0, -1.0, 0, 1.0]] variables = ['X1', 'X2', 'X3', 'X4', 'X5', 'U1', 'U2', 'B'] # create empty matrix variables_j_col_values = [[0 for _ in range(len(variables))] for _ in range(len(new_matrix[0])-1)] # replace first row with sorted variables based on new_matrix headers variables_j_col_values[0] = sorted(variables, key=lambda x: new_matrix[0].index(x)) # loop over all value rows for row in new_matrix[1:]: # get correct column in variables_j_col_values based on the row's variable name col = variables_j_col_values[0].index(row[1]) # zip the values and rows and update accordingly for val, target in zip(row[2:], variables_j_col_values[1:]): target[col] = val
76382456
76383719
I wanted to write a query that would allow me to calculate deviations by the number of created orders. Task: the query should look back 7 days and based on this data build a minimum allowable threshold (MAT). If the number of orders for a minimum period of time (5 minutes) is less than MAT, then an alert will be generated. Features: The number of orders directly affects the time of day and seasonality. Having searched the Internet, I found information about so-called Poisson distribution, and tried to apply it to the problem, but it didn't work. In prometheus there are such functions as day_of_week(), avg_over_time() and stddev_over_time. From what I was able to do: The difference between the number of orders in the last 5 min. sum(delta(my_search_counter{service_name="car.book.v1"}[5m]) Five-minute average time variation over the last 30 minutes with a resolution of 5 minutes avg_over_time(sum(delta(my_search_counter{service_name="car.book.v1"}[5m]))[1w:5m]) Mean square deviation: stddev_over_time(sum(delta(my_search_counter{service_name="car.book.v1"}[5m]))[1w:5m]) This is where I'm stuck and can't figure out how to build a proper query. Maybe there is another way, simpler, but I haven't found it. I tried to combine these queries with each other using addition, subtraction and division.
How can i create a deviations query?
I'm not sure what statistics this is, or how adequate it is as a threshold, but here is the query you described. sum(increase(my_search_counter{service_name="car.book.v1"}[5m])) < sum(increase(my_search_counter{service_name="car.book.v1"}[5m] offset 1w)) - stddev_over_time(sum(increase(my_search_counter{service_name="car.book.v1"}[5m] offset 1w))[1d:5m]) It returns a value if the number of orders over the last 5 minutes is less than the number of orders over the same 5 minutes 1 week ago, minus the standard deviation of the order count over the 24 hours preceding that moment 1 week ago. You might need to add and tune a multiplier on the stddev part to get a reasonable rate of alerts.
76383416
76383740
In a Java 17 project, I have a collection of objects with a String field propGroupName representing the group this object belongs to and a Boolean field propValActive representing whether this object is flagged as active internally. I want to aggregate these objects by the string field into a Map<String, Boolean> with the key being the String field and the Boolean being false if all the booleans in the group are false and true if 1 or more of the booleans in the group are true. I have a working implementation with a simple for loop, but I want to know if there is a way to do this grouping through the Java Stream API, preferably in a way that short circuits? The goal is that I want to know of every group whether there are any objects in that group flagged as active. I currently have this implementation which doesn't use the Streams API and doesn't short circuit: public Map<String, Boolean> determineActiveGroups( HashMap<String, PropertyValueDefinitionGroupView> standardPvdgMap) { Map<String, Boolean> activeGroupsMap = new HashMap<String, Boolean>(); for (PropertyValueDefinitionGroupView pvdgView : standardPvdgMap.values()) { if(pvdgView.getPropGroupOid() == null) { continue; } activeGroupsMap.putIfAbsent(pvdgView.getPropGroupName(), false); if(pvdgView.getPropValActive()) { activeGroupsMap.put(pvdgView.getPropGroupName(), true); } } return activeGroupsMap; } I have a different bit of code somewhere else that does something similar, but it retains the lists, and I managed to adapt something similar for what I need but I don't know what predicate I can use to finish it with. I assume it's going to use anyMatch, but I have no idea how to integrate it: Map<String, Boolean> activeGroups = standardPvdgMap.values().stream() .collect(Collectors.groupingBy(PropertyValueDefinitionGroupView::getPropGroupName, ???????));
Group list of objects into Map with value being true if any object in group has field set as true using Java stream API
groupingBy is such a powerful collector : public Map<String, Boolean> determineActiveGroups(Map<String, PropertyValueDefinitionGroupView> standardPvdgMap) { return standardPvdgMap.values() .stream() .filter(pvdgView -> pvdgView.getPropGroupOid() != null) .collect(Collectors.groupingBy( PropertyValueDefinitionGroupView::getPropGroupName, Collectors.mapping( PropertyValueDefinitionGroupView::getPropValActive, Collectors.reducing(false, (a, b) -> a || b)) )); } The trick is knowing that you can apply further collectors on the downstream. In this case I map to the flag, and then reduce the flags using the logical or.
76381701
76382431
I need to have a MS Word Macro check upon file exit or file close, that certain specified text fields (legacy form fields, not content control) are empty. I have used some code that is a pretty intrusive warning box. But its also contingent on the user selecting that field then the macro pops up a warning box either upon entry or exit, as specified in the form field properties menu. I have several fields,"Text1", "text2", then text7 thru 11. Trouble is, the user MUST select a field to get this code to work, on top of that, the warning box basically sends them into a death loop before they can even close the file. I also have to make a new module for each of field with the code below. Perhaps the best solution here is a macro that runs on close and/or exit of the file, which says "Hey you forgot to fill out these fields, they are 'mandatory' so go back and do that please, thanks!" What do you all think? Sub MustFillIn3() If ActiveDocument.FormFields("Text2").Result = "" Then Do sInFld = InputBox("Request date required, please fill in below.") Loop While sInFld = "" ActiveDocument.FormFields("Text2").Result = sInFld End If End Sub
MS Word VBA to check for empty text form fields upon file close/exit
Yes, just write the check code in the event handler procedure Document_Close in ThisDocument object, like this Sub Document_Close() Dim ff As FormField, sInFld As String, msgShown As Boolean, d As Document, i As Byte 'Dim ffNameDict As New Scripting.Dictionary, ffNameSpecCln As New VBA.Collection Dim ffNameDict As Object, ffNameSpecCln As New VBA.Collection Dim arr(7) As String, j As Byte arr(0) = "location": arr(1) = "request_date": arr(2) = "site" arr(3) = "UPC": arr(4) = "Current_LOA": arr(5) = "Req_LOA" arr(6) = "You Lost this One!!" For i = 1 To 11 Select Case i Case 1, 2, 7, 8, 9, 10, 11 '"Text1", "text2", then text7 thru 11. 'to a specific name list? 'ffNameSpecCln.Add "Specific Name HERE " & i, "Text" & i ffNameSpecCln.Add arr(j), "Text" & i j = j + 1 End Select Next i Set ffNameDict = CreateObject("Scripting.Dictionary") Set d = ActiveDocument For i = 1 To 11 Select Case i Case 1, 2, 7, 8, 9, 10, 11 '"Text1", "text2", then text7 thru 11. 'ffNameDict("Text" & i) = "Text" & i ffNameDict("Text" & i) = ffNameSpecCln.Item("Text" & i) End Select Next i For Each ff In d.FormFields If ff.Result = "" And ffNameDict.Exists(ff.Name) Then If Not msgShown Then MsgBox "Hey you forgot to fill out these fields, they are 'mandatory' so go back and do that please, thanks!", vbExclamation msgShown = True End If Do ' sInFld = InputBox("Request date required, please fill in below." + vbCr + vbCr + _ "@" + ff.Name + " is the current text fields to fill in !") sInFld = InputBox("Request date required, please fill in below." + vbCr + vbCr + _ "@" + ffNameDict(ff.Name) + " is the current text fields to fill in !") Loop While sInFld = "" ff.Result = sInFld End If Next ff d.Save End Sub note: The Private modifier in this image should be removed in order to be called in the appWord_DocumentBeforeSave event handler (code above already set) This check sub is triggered when the current document is closed and is not related to whether ff has focus or not (ie. the user Doesn't MUST select a field ). Option Explicit Public WithEvents appWord As Word.Application Private Sub appWord_DocumentBeforeSave(ByVal Doc As Document, SaveAsUI As Boolean, Cancel As Boolean) ThisDocument.Document_Close End Sub You have to run this sub to Register Event_Handler to Word Application. Option Explicit 'https://learn.microsoft.com/en-us/office/vba/word/concepts/objects-properties-methods/using-events-with-the-application-object-word Public X As New app Public Sub Register_Event_Handler() Set X.appWord = Word.Application End Sub "物件類別模組" = class modules "模組" = modules "表單" = user form "Microsof Word 物件" = Microsof Word object As for the details, you should adjust them yourself. Try to understand the code I have given you to simulate it. Come back to StackOverflow and ask a new question when you encounter difficulties and problems in the implementation. I've used the text field to test: Is this yours? 
Before closing the document check if it has been modified Option Explicit Public WithEvents appWord As Word.Application Private Sub appWord_DocumentBeforeClose(ByVal Doc As Document, Cancel As Boolean) If Not Doc.Saved Then If MsgBox("Do you want to save?", vbOKCancel + vbQuestion) = vbOK Then Doc.Save Else Doc.Close wdDoNotSaveChanges End If End If End Sub Private Sub appWord_DocumentBeforeSave(ByVal Doc As Document, SaveAsUI As Boolean, Cancel As Boolean) MS_Word_VBA_to_check_for_empty_text_form_fields_upon_file_close_exit End Sub Comment out the event handler Document_Close code and Registering an event handler when a document is opened: Option Explicit rem now can be Private, because there is no other place to call this procedure Private Sub Document_Close() 'MS_Word_VBA_to_check_for_empty_text_form_fields_upon_file_close_exit End Sub Private Sub Document_Open() Register_Event_Handler ' See previous code End Sub Extract the code to become a separate checker procedure or method: Sub MS_Word_VBA_to_check_for_empty_text_form_fields_upon_file_close_exit() Dim ff As FormField, sInFld As String, msgShown As Boolean, d As Document, i As Byte 'Dim ffNameDict As New Scripting.Dictionary, ffNameSpecCln As New VBA.Collection Dim ffNameDict As Object, ffNameSpecCln As New VBA.Collection Dim arr(7) As String, j As Byte arr(0) = "location": arr(1) = "request_date": arr(2) = "site" arr(3) = "UPC": arr(4) = "Current_LOA": arr(5) = "Req_LOA" arr(6) = "You Lost this One!!" For i = 1 To 11 Select Case i Case 1, 2, 7, 8, 9, 10, 11 '"Text1", "text2", then text7 thru 11. 'to a specific name list? 'ffNameSpecCln.Add "Specific Name HERE " & i, "Text" & i ffNameSpecCln.Add arr(j), "Text" & i j = j + 1 End Select Next i Set ffNameDict = CreateObject("Scripting.Dictionary") Set d = ActiveDocument For i = 1 To 11 Select Case i Case 1, 2, 7, 8, 9, 10, 11 '"Text1", "text2", then text7 thru 11. 'ffNameDict("Text" & i) = "Text" & i ffNameDict("Text" & i) = ffNameSpecCln.Item("Text" & i) End Select Next i For Each ff In d.FormFields If ff.Result = "" And ffNameDict.Exists(ff.Name) Then If Not msgShown Then MsgBox "Hey you forgot to fill out these fields, they are 'mandatory' so go back and do that please, thanks!", vbExclamation msgShown = True End If Do ' sInFld = InputBox("Request date required, please fill in below." + vbCr + vbCr + _ "@" + ff.Name + " is the current text fields to fill in !") sInFld = InputBox("Request date required, please fill in below." + vbCr + vbCr + _ "@" + ffNameDict(ff.Name) + " is the current text fields to fill in !") Loop While sInFld = "" ff.Result = sInFld End If Next ff d.Save End Sub
76383035
76383754
I am facing an issue with Typescript generics. I have a function which must be called with a type and some attributes depending on this type. Typescript does not manage to infer the type of attributes when inside an if guarding the type to TYPES.ME: enum TYPES { ME = 'me', YOU = 'you' } type Attributes<T extends TYPES> = T extends TYPES.ME ? {keys: true} : {hat: true} const func = <T extends TYPES>(type: T, attr: Attributes<T>) => { if(type === TYPES.ME) { attr.keys // error Property 'keys' does not exist on type '{ keys: true; } | { hat: true; }' } } Playground link I would like to have your opinions on this matter and see if there is a nice workaround. Cheers!
Type inference on function parameters with nested generics?
Currently, TypeScript is unable to re-constrain generic type parameters as a result of control flow analysis. Inside the body of func(type, attr), you check that type === Types.ME. This can narrow the type of type from T to something like T & Types.ME. But it cannot do anything to T itself. The type parameter T stubbornly stays the same; it is not constrained to Types.ME. And thus the compiler cannot conclude that attr is of type Attributes<Types.ME>. And it is technically correct for the compiler to refuse to change T. That's because, while individual values like type can only be one thing at a time, a type argument like T can be a union. Indeed, you can call func() with a T equal to the full Types.ME | Types.YOU union, like so: func( Math.random() < 0.999 ? Types.ME : Types.YOU, { hat: true } ) // compiles without error If you inspect that, you'll see that T is inferred as Types (the full union of Types.ME | Types.YOU, and therefore attr is allowed to be {hat: true} even in the 99.9% likely event that type is Types.ME. There is a longstanding open feature request at microsoft/TypeScript#27808 which asks for a way to say "T will be exactly one of Types.ME or Types.YOU; it cannot be a union". And then, maybe inside the function body, checking type === Types.ME would allow T itself to be constrained to Types.ME, and things would work as expected. And presumably the call with Math.random() < 0.999 would be rejected. But for now it's not part of the language. You might consider taking the approach where instead of having func be generic, you make it similar to an overloaded function, where it has one call signature per member of Types. You can write that as a function with a rest parameter whose type is a discriminated union of tuple types, and the compiler will treat it as such inside the function body. Perhaps like this: type FuncArg = [type: Types.ME, attr: { keys: true }] | [type: Types.YOU, attr: { hat: true }]; const func: (...args: FuncArg) => void = (type, attr) => { if (type === Types.ME) { console.log(attr.keys) // this works } } And now you can't make the invalid call: func( Math.random() < 0.999 ? Types.ME : Types.YOU, { hat: true } ) // error, type 'Types.ME' is not assignable to type 'Types.YOU' Everything works because now type and attr are bound together as desired in FuncArg; it's like imagining T were constrained to be just one member of Types at a time, and walking through the possibilities. Note that FuncArg could, if necessary, be computed from the Attributes type given in the question, or from another mapping interface, but that is out of scope for the question as asked. Playground link to code
76383892
76385502
I'm developing a car rental automation system in SQL, and I'm seeing unwanted data in my table that I've received and constantly updated. The query I need to write is as follows: 'Write a query that retrieves information about the last rented car for customers who have rented cars with the feature of a sunroof at least once.' I've been struggling with it for hours and couldn't solve it. Can you help me? select * from car a where exists( select * from car a2 inner join customer m on m.customer_id=a2.customer_id inner join rent k on k.customer_id=m.customer_id inner join car_rent ak on ak.rent_id=k.rent_id inner join package_package_option pk on pk.package_id=a2.package_id where pk.option_id=2 and a.rent_sell=1 group by ak.date having ak.date = max(ak.date)
SQL query for last rented car with sunroof feature in car rental system
The solution has two steps: Identify all customers who have ever rented a car with a sunroof, and For each such customer, look up the latest rental for each such customer. The first step is pretty straight forward - Filter the rentals for sunroof and select distinct customer IDs. The second step can be done a couple of ways. One is to feed the customer IDs into a CROSS APPLY (SELECT TOP 1 ... ORDER BY RentalDate DESC) construct to select the latest rental for each selected customer one at a time. Another is to lookup all rentals for the selected customers, assign sequence numbers using the ROW_NUMBER() window function OVER(... ORDER BY RentalDate DESC), and then filtering for row-number = 1. Something like (pseudocode): SELECT * FROM ( SELECT DISTINCT customer_id FROM rentals WHERE has-a-sunroof AND is-a-rental ) C CROSS APPLY ( SELECT TOP 1 rental-info FROM rentals R WHERE R.customer_id = C.customer_id AND is-a-rental ORDER BY R.Rentaldate DESC ) R or SELECT * FROM ( SELECT rental-info, ROW_NUMBER() OVER(PARTITION BY R.customer_id ORDER BY R.Rentaldate DESC) RowNum FROM rentals R WHERE R.customer_id IN ( SELECT DISTINCT customer_id FROM rentals WHERE has-a-sunroof AND is-a-rental ) AND is-a-rental ) A WHERE A.RowNum = 1 You might try both to compare performance with large data sets. I recommend also ensuring that you have an index on the rental that includes both customer_id and rental date for best performance.
76382019
76382608
I am trying to read a value from a sensor, BMP280 over SPI on a Raspberry Pi Pico. But I am getting an unexpected value. I created a new repo based on the rp2040-project-template and modified it to add SPI functionality. I added these imports: use embedded_hal::prelude::_embedded_hal_spi_FullDuplex; use rp_pico::hal::spi; use rp_pico::hal::gpio; use fugit::RateExtU32; Then I setup SPI in the bottom of main function: let _spi_sclk = pins.gpio2.into_mode::<gpio::FunctionSpi>(); let _spi_mosi = pins.gpio3.into_mode::<gpio::FunctionSpi>(); let _spi_miso = pins.gpio4.into_mode::<gpio::FunctionSpi>(); let mut spi_cs = pins.gpio5.into_push_pull_output_in_state(PinState::Low); // initial pull down, for SPI let spi = spi::Spi::<_, _, 8>::new(pac.SPI0); let mut spi = spi.init( &mut pac.RESETS, clocks.peripheral_clock.freq(), 10.MHz(), // bmp280 has 10MHz as maximum &embedded_hal::spi::MODE_0, ); spi_cs.set_high().unwrap(); // pull up, set as inactive after init delay.delay_ms(200); // some delay for testing Then I try to read the ID registry spi_cs.set_low().unwrap(); let res_w = spi.send(0xd0 as u8); // 0xd0 is address for ID, with msb 1 let res_r = spi.read(); spi_cs.set_high().unwrap(); // check results match res_w { Ok(_) => info!("write worked"), Err(_) => info!("failed to write") } match res_r { Ok(v) => info!("read value from SPI: {}", v), Err(_) => info!("failed to read SPI") } With this code, the SPI read fails. Why is that? Perhaps it is necessary to set a mode on the sensor, before reading the ID. I can add this code above the read, to set forced mode. spi_cs.set_low().unwrap(); spi.send(0xf4-128 as u8).expect("failed to send first byte"); // registry 0xf4 with msb 0 spi.send(0x1 as u8).expect("failed to send second byte"); spi_cs.set_high().unwrap(); Now the read of ID registry works, but I get value 255 and not the expected 0x58. What am I doing wrong? I have also tried with transfer using this code: let mut data: [u8; 2] = [0xd0, 0x0]; let transfer_success = spi.transfer(&mut data); match transfer_success { Ok(v) => info!("read data {}", v), Err(_) => info!("failed to read") } But I read the values as [255, 255] with this code, not the expected 0x58.
Read value from SPI on Raspberry Pi Pico using Rust
read() is probably not the function you want to use here; it doesn't actually perform any bus action but only gives you the byte that was read during the last send(). The function you actually want to use is transfer(). On a full-duplex SPI bus, a "read" action is always also a "write" action, and transfer performs both. Be aware that if you only want to read, you need to write the same amount of zeros, because only the bus master can provide the clock to do anything. So if you want to write 0xd0, followed by reading a single byte, you need to transfer() the values [0xd0, 0x00]. The same array that you use to put your sent data into transfer() will then contain the received data; most likely [0x00, <data>] (or [0xff, <data>], not sure. Probably 0xff, as you already mentioned that you read a 255). The implementation of transfer shows how read() is actually supposed to be used: fn transfer<'w>(&mut self, words: &'w mut [W]) -> Result<&'w [W], S::Error> { for word in words.iter_mut() { block!(self.send(word.clone()))?; *word = block!(self.read())?; } Ok(words) } Note the block!() here - in embedded, asynchronous calls usually return an error indicating that the operation would block, until it is finished. The block!() macro converts an asynchronous call to a blocking call, which is most likely where your error comes from. Either way, I would recommend deriving your code from the official example; those are usually pretty good at demonstrating the intended way an object should be used. A minimal transfer() sketch for reading the ID register is shown below.
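As a rough sketch (it assumes the spi, spi_cs and defmt-style info! setup from your question), reading the BMP280 ID register with transfer() could look like this:

spi_cs.set_low().unwrap();
// first byte selects register 0xD0 (MSB is already 1, so it is a read),
// the second byte only clocks the answer out of the sensor
let mut buf: [u8; 2] = [0xd0, 0x00];
let result = spi.transfer(&mut buf);
spi_cs.set_high().unwrap();

match result {
    Ok(data) => info!("chip id: {}", data[1]), // expected 0x58 for a BMP280
    Err(_) => info!("transfer failed"),
}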
76383668
76383803
I have a simple HTML snippet like this: <p align= "left"> <FONT size=3><STRONG> Asset: &nbsp; </STRONG></FONT><STRONG><FONT color="blue" size=2> something something here </FONT></STRONG><br> </p> The font color does not seem to work, but the font size does change. Also, I understand that the explicit font element is deprecated and we should be using CSS styling instead; if you could suggest that option, I'm okay with using it as well.
Font Color does not update
Using HTML elements and classes is the better option, although I do admit some might seem overly complicated. For my answer, first I give the paragraph a class. Inside of it, I'm aligning the text left and bolding EVERYTHING in it. Next, spans are by default inline (meaning they show inline with the text, but you don't have control over the spacing); you could also use a div. So I'm setting every span in the paragraph tag to inline-block so I can control the spacing around it while keeping the text inline. Next, each part of the content gets wrapped in its own span. Since I already set all content to bold, I'm only giving the title span a margin to the right to space it out and changing its font size. For the rest, I wrapped that content in another span, changed the color to blue (the hex value is #RRGGBB, i.e. red/green/blue), and changed the font size as well. .updated{ text-align:left; font-weight:bold; } .updated span{ display:inline-block; } .updated .title{ margin-right:5px; font-size:18px; } .updated .content{ color:#0000FF; font-size:13px; } Original: <p align= "left"> <FONT size=3><STRONG> Asset: &nbsp; </STRONG></FONT><STRONG><FONT color="blue" size=2> something something here </FONT></STRONG><br> </p> Updated: <p class="updated"> <span class="title">Asset:</span> <span class="content">something something here</span> </p>
76384530
76385508
Uncaught TypeError: Cannot read properties of undefined (reading 'params') Unable to navigate to the id, can anyone help? I am working with Django as the backend and React as the frontend. class ArticleDetail extends React.Component{ state={ article:{} } componentDidMount(){ const id = this.props.match.params.id; axios.get(`http://127.0.0.1:8000/api/${id}`) .then(res =>{ this.setState({ article:res.data }); console.log(res.data) }) } render(){ return( <Card title={this.state.article.title} > <p>{this.state.article.content }</p> </Card> ) } } My data from the server shows fine in the list view, but when I try to navigate to a particular item's id it throws the error above.
TypeError: Cannot read properties of undefined (reading 'params') Django + React
This is a frontend problem: React Router v6 no longer injects the match prop into class components, so this.props.match is undefined. 1-) Add the following line of code at the top of your file: import { useParams } from 'react-router-dom'; 2-) Then add this function above your class (copy it exactly): export function withRouter(Children){ return(props)=>{ const match = {params: useParams()}; return <Children {...props} match = {match}/> } } 3-) Next, change your class definition to this: class ArticleDetail extends Component 4-) Add the following line of code after your class: export default withRouter(ArticleDetail); Ref: https://stackoverflow.com/a/75304487/11897778 If it doesn't work, please provide more details about the error: is the API request actually being made, or does it fail before the request is sent?
76381276
76382625
When I join two tables in a search handler and the same column exists in both tables, I cannot access the value from the left table. For example, there are two tables, users and volunteers, and they both have an id column. When I write a search handler like this $builder->join('users', 'volunteers.user_id', "=", "users.id") ->join('policies','volunteers.policy_id',"=","policies.id") ->where(function($q) use ($whereConditions){ $q->where('users.first_name','like','%'.$whereConditions['OR'][0]['value'].'%'); $q->orWhere('users.last_name','like','%'.$whereConditions['OR'][0]['value'].'%'); $q->orWhere('policies.name','like','%'.$whereConditions['OR'][0]['value'].'%'); $q->orWhere('volunteers.experiences','like','%'.$whereConditions['OR'][0]['value'].'%'); $q->orWhere('volunteers.medical_facility','like','%'.$whereConditions['OR'][0]['value'].'%'); }); and then query, it returns the user id as the volunteer id. I want the volunteer id but I always get the user id. I hope the question is clear.
Getting user ID instead of volunteer ID while joining tables in Laravel GraphQL search handler
I reproduced your problem and ran into the same behaviour: with the default select *, the joined users.id column collides with volunteers.id, so the user id is what ends up in the result. I solved it by explicitly listing the columns with $builder->select('your columns'), for example as sketched below.
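A minimal sketch (every column other than the ones already in your query is an assumption) is to select the volunteer columns explicitly and alias the conflicting id columns:

$builder->select(
        'volunteers.*',
        'volunteers.id as volunteer_id',   // keep the volunteer id under its own name
        'users.id as user_id',
        'users.first_name',
        'users.last_name',
        'policies.name as policy_name'
    )
    ->join('users', 'volunteers.user_id', '=', 'users.id')
    ->join('policies', 'volunteers.policy_id', '=', 'policies.id');
// ...the where() closure from your question stays unchanged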
76383738
76383823
Novice trying to simplify my jQuery, to avoid repetition I am a novice with Javascript and jQuery, but have written some code to display a tooltip on a form depending on the answer selected to a dropdown. Right now I am repeating the steps twice: Once to check the dropdown on page load, in case it has reloaded due to a submission error - in this case, if an answer has been selected to that question, it will persist, and I need the tooltip to remain. Once to check whenever the dropdown value changes. The value of the dropdown is a number, so I've used that in the div classes to show the appropriate div. This is the code which is working fine: $(document).ready(function(){ var service = ""; var otherservice = ""; // Check for select value on page load, in case it has refreshed due to a form error service = '.v' + $('select#989022_58716pi_989022_58716 option:selected').val(); otherservice = '#form-tooltips div:not('+service+')'; $('#form-tooltips div'+service).show(); $(otherservice).hide(); // Check again for select value, every time that selection changes $('select#989022_58716pi_989022_58716').on('change', function(){ service = '.v' + $('select#989022_58716pi_989022_58716 option:selected').val(); otherservice = '#form-tooltips div:not('+service+')'; $('#form-tooltips div'+service).show(); $(otherservice).hide(); }); }); //The tooltips for display $("<div id='form-tooltips'><div class='v1381962'>Tooltip 1</div><div class='v1381965'>Tooltip 2</div></div>").insertAfter(".add-tooltip-after"); What I would like to do is create a function - checkTooltip - so that I do not have to repeat those tooltip instructions the second time. I have tried the following: $(document).ready(function(){ var service = ''; var otherservice = ''; function checkTooltip({ service = '.v' + $('select#989022_58716pi_989022_58716 option:selected').val(); otherservice = '#form-tooltips div:not("+service+")'; $('#form-tooltips div'+service).show(); $(otherservice).hide(); }); checkTooltip(); $('select#989022_58716pi_989022_58716').on('change', checkTooltip()); }); $("<div id='form-tooltips'><div class='v1381962'>Tooltip 1</div><div class='v1381965'>Tooltip 2</div></div>").insertAfter(".add-tooltip-after"); However this is not working. In the Chrome console, it says Uncaught SyntaxError: Unexpected token ';' on the 5th line. I have tried removing that semicolon but then it gives me Unexpected identifier 'otherservice' instead. Am I completely misunderstanding how this works or making some kind of syntax error? Many thanks in advance to anyone who can help!
How can I simplify my jQuery code to avoid repeating instructions?
The syntax error is indeed in the function definition. It should look like this (no guarantees on the rest of the functionality, but note that otherservice also needs real string concatenation rather than "+service+" inside the quotes): $(document).ready(function(){ let service = ''; let otherservice = ''; const checkTooltip = () => { service = '.v' + $('select#989022_58716pi_989022_58716 option:selected').val(); otherservice = '#form-tooltips div:not(' + service + ')'; $('#form-tooltips div'+service).show(); $(otherservice).hide(); } checkTooltip(); $('select#989022_58716pi_989022_58716').on('change', checkTooltip); }); Note the change in the on('change') handler at the end too: the handler is passed as checkTooltip, not called as checkTooltip(). Also, it's out of scope for your question, but 989022_58716pi_989022_58716 could probably be rewritten with a more human-readable id or class.
76383689
76383855
I have a git repository created in Azure Devops. I need to restrict access to it in such a way that the repo is accessible only for TeamA. When I set Deny for all other groups and Allow only for TeamA, the permission Deny takes preference when the user belongs to both Contributors and TeamA. Would you please help me to achieve this? I cannot grant access to all contributors except for those in TeamA. I tried security settings in repository.
Azure Devops git repository permissions: how to prioritize 'Allow' over 'Deny'?
Use the Not Set permission as an implicit Deny. As you've discovered, explicit Deny takes precedence over explicit Allow.
76382065
76382676
MySQL recently reported the following error to me: [HY000][1366] Incorrect string value: '\xF0\x9D\x98\xBD\xF0\x9D...' for column 'name' After investigating, I found that the value with the weird characters comes from a filename, which apparently contains bold characters: 4 𝘽𝘼𝙉𝘿𝙀 𝘼𝙉𝙉𝙊𝙉𝘾𝙀 - TV.mp4 Instead of changing the encoding of my database to accept such characters, I'd rather sanitize the value before inserting it, in PHP. But I have no idea which operation I should run to end up with the following sanitized value: 4 BANDE ANNONCE - TV.mp4 Any help would be appreciated.
getting rid of bold characters in a filename
You can use the PHP iconv function to convert the string from one character encoding to another. In this case, you can try converting the string from UTF-8 to ASCII//TRANSLIT, which will attempt to transliterate any non-ASCII characters into their closest ASCII equivalents. Here's an example: function sanitize_string($input_string) { $sanitized_string = iconv("UTF-8", "ASCII//TRANSLIT", $input_string); return $sanitized_string; } $filename = "4 𝘽𝘼𝙉𝘿𝙀 𝘼𝙉𝙉𝙊𝙉𝘾𝙀 - TV.mp4"; $sanitized_filename = sanitize_string($filename); echo $sanitized_filename; This should output 4 BANDE ANNONCE - TV.mp4, which is the sanitized value you're looking for.
76385320
76385519
even by searching on the internet I did not find, or in all that I did not understand. My problem: I would like the "inputVal" variable found in the "InputField.js" component to be found where there is "!!HERE!!" in the "App.js" component(on the fetch) please help me thank you for reading my message! export default function InputField() { function handleSubmit(e) { // Prevent the browser from reloading the page e.preventDefault(); // Read the form data const form = e.target; const inputVal = form.myInput.value; console.log(inputVal); } return ( <form method="post" onSubmit={handleSubmit}> <input name="myInput" id="adress-field" placeholder="Enter adress" autoComplete="on" /> <button type="submit" id="adress-button">Send</button> </form> ); } import './App.css'; import AccountNumber from "./components/AccountNumber"; import InputField from "./components/InputField"; import { useEffect, useState } from "react" function App() { //token fetch const [tockens, setTockens] = useState([]) const [loading, setLoading] = useState(false) useEffect(() => { setLoading(true) fetch("https://api.multiversx.com/accounts/!!HERE!!/tokens") .then(response => response.json()) .then(json => setTockens(json)) .finally(() => { setLoading(false) }) console.log(tockens); }, []) function round(nr, ten) { // arondi un chiffre. return Math.round(nr * ten) / ten; } function numberWithSpaces(nr) { // formate un chiffre(x xxx xxx). return nr.toString().replace(/\B(?=(\d{3})+(?!\d))/g, " "); } return ( <content className="content"> <div className="up-side"> <div className="account-number-box"> <p id="p-account-number">Total number of accounts</p> <p id="account-number"><AccountNumber/></p> </div> <div className="adress-search"> {InputField()} </div> <p>{window.inputVal}</p> </div> <div className="down-side"> <table className="Token-section-output"> {loading ? ( <div>Loading...</div> ) : ( <> <h1>Tockens</h1> <table className='Token-section-output' border={0}> <tr className='token-row-type'> <th className='token-column'>Name</th> <th className='center-column'>Price</th> <th>Hold</th> </tr> <tr className="space20"/> {tockens.map(tocken => ( <tr className='token-row' key={tocken.id}> <td className='token-column'> <img className="img-Tockens" src = {tocken?.assets?.pngUrl ?? "img/Question.png"} /> <p>{tocken.name}</p> </td> <td className='center-column'> <p>${round(tocken.price, 10000000)}</p> </td> <td> <p>{round(tocken.balance / Math.pow(10, tocken.decimals), 10000000)}</p> <p className='token-hold'>${round(tocken.valueUsd, 10000000)}</p> </td> </tr> ))} </table> </> )} </table> </div> </content> ); } export default App; I not very good in react and i mak search on internet
How to get a variable from a component and use it in App.js
You want to extend your InputField component to accept a callback function, that can be passed by your app: export default function InputField({onSubmit}) { function handleSubmit(e) { // Prevent the browser from reloading the page e.preventDefault(); // Read the form data const form = e.target; const inputVal = form.myInput.value; console.log(inputVal); onSubmit(inputVal) } ... } And in your App you need to pass that callback to your component: <div className="adress-search"> <InputField onSubmit={handleSearchSubmit} /> </div> Note: Components are not consumed by calling them like functions. In your App logic, you'll need another state to hold your search value: ... const [searchValue, setSearchValue] = useState(null); const handleSearchSubmit = (val) => { setSearchValue(val); } useEffect(() => { setLoading(true) fetch(`https://api.multiversx.com/accounts/${searchValue}/tokens`) .then(response => response.json()) .then(json => setTockens(json)) .finally(() => { setLoading(false) }); console.log(tockens); }, [searchValue]) ...
76381829
76382696
Issue Summary Hi, I have a TypeScript project where I am trying to instantiate a class which was the default export of a different package. I am writing my project in ESM syntax, whereas the package it's dependent upon has CJS output. The issue I am running into is that at runtime, when the flow reaches the point of class instantiation I am getting the following error - new TestClass({ arg1: "Hello, World!" }); ^ TypeError: TestClass is not a constructor Code //My package.json { "name": "myproject", "version": "1.0.0", "main": "dist/index.js", "scripts": { "build": "tsc", "start": "node dist/index.js" }, "type": "module", "dependencies": { "testpackage": "^1.0.0", "typescript": "^5.0.4" }, "devDependencies": { "@types/node": "^20.2.5" } } //My index.ts import TestClass from "testpackage"; new TestClass({ arg1: "Hello, World!" }); //My tsconfig.json { "include": ["src"], "compilerOptions": { "outDir": "dist", "lib": ["es2023"], "target": "es2022", "moduleResolution": "node" } } //Dependency's package.json { "name": "testpackage", "version": "1.0.0", "description": "TestPackage", "main": "./dist/testFile.js", "exports": "./dist/testFile.js", "scripts": { "build": "tsc" }, "files": ["dist"], "devDependencies": { "@types/node": "^20.2.5", "typescript": "^5.0.4" } } //Dependency's testFile.ts export default class TestClass { constructor({ arg1 }: { arg1: string }) { console.log(arg1); } } //Dependency's tsconfig.json { "include": ["src"], "compilerOptions": { "declaration": true, "lib": ["es2023"], "target": "es6", "module": "CommonJS", "outDir": "dist" } } //Dependency's testFile.js output "use strict"; Object.defineProperty(exports, "__esModule", { value: true }); class TestClass { constructor({ arg1 }) { console.log(arg1); } } exports.default = TestClass; Things work fine if I remove "type": "module" from my package.json. They also work fine if the class is a named export instead of a default export in the dependency's code. Is this a known incompatibility when trying to import CJS into ESM or am I doing something incorrectly here? Note - If I set "moduleResolution": "nodenext" in my tsconfig.json then the error is generated at compile time itself - src/index.ts:3:5 - error TS2351: This expression is not constructable. Type 'typeof import("<project_dir>/node_modules/testpackage/dist/testFile")' has no construct signatures. 3 new TestClass({ arg1: "Hello, World!" }); ~~~~~~~~~ Found 1 error in src/index.ts:3
Default class instantiation results in TypeError (ESM/CJS interop)
There are known compatibility issues between CommonJS (CJS) and ECMAScript modules (ESM). When a CJS module is imported from ESM, its default export is exposed on a default property instead of being returned directly. Named exports, on the other hand, are unaffected and can be imported directly. Specifying "type": "module" in package.json makes Node.js treat your .js files as ESM, so you must import the module using the ESM import statement. However, if the module you are trying to import is in CJS format, you will run into these compatibility issues. There are several options to fix this. Access the class through the default property as described above: import Test from 'package-name'; const TestClass = Test.default; To avoid problems caused by mixing the two module formats, convert all code to use either ESM or CJS. Or load the CJS module using the Node.js createRequire function: import { createRequire } from 'module'; const require = createRequire(import.meta.url); const TestClass = require('package-name').default; // the transpiled CJS output puts the class on exports.default
76381891
76382769
I am getting one string like this in innerHtml <div class="wrapper" [innerHTML]="data.testString"> data.testString contains data like below, data.testString="<p>some info, email <a href="mailto:[email protected]">[email protected]</a> info </p>"; I want to add aria-label for the anchor tag. <a aria-label="[email protected]" href="mailto:[email protected]">[email protected]</a> so I have added below code in .ts file ngAfterViewInit(): void { var myData = this.data.testString; var element = myData!.match!(/href="([^"]*)/)![1]; var ariaLabel = "EmailId " + element; this.data.testString= this.data.testString!.replace('<a', '<a aria-label = "' + ariaLabel + '" '); } } But I am getting below error global-error-handler.ts:26 TypeError: Cannot assign to read only property 'testString' of object '[object Object]' How to resolve this?
Extract href from string and again bind the updated data to the element in Angular
I'm not sure I understand everything, but the points below might help. String character escaping: var testString="<p>some info, email <a href="mailto:[email protected]">[email protected]</a> info </p>"; There is a quote issue: a double-quoted string cannot contain double quotes unless they are escaped with the \ character (cf. https://www.w3schools.com/js/js_strings.asp -> Escape Character section). You can fix this issue by replacing some double quotes with single quotes: "<p>some info, email <a href='mailto:[email protected]'>[email protected]</a> info </p>" Regular expression: var href = datar!.match!(/href="([^"]*)/)![1]; Not sure this is the pattern you need. Try instead: var href = datar!.match!(/href=\'mailto:([a-z@.]+)'/)![1]; DOM element manipulation: for adding the aria-label attribute to the anchor tag, you would be better off using Angular's nativeElement.setAttribute(key, value) or native JavaScript querySelector (cf. https://indepth.dev/posts/1336/how-to-do-dom-manipulation-properly-in-angular and https://angular.io/guide/property-binding); a short sketch follows below. Finally, I would highly recommend using RegExr to test your regular expressions https://regexr.com/7ep0u as well as jsfiddle.net to test your JavaScript in a safe environment: https://jsfiddle.net/nkarfgx3/
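As a rough sketch (the #wrapper template reference variable and the "EmailId ..." label format are assumptions), setting the attribute on the rendered anchor could look like this:

// import { ViewChild, ElementRef } from '@angular/core';
// template: <div class="wrapper" #wrapper [innerHTML]="data.testString">
@ViewChild('wrapper') wrapper!: ElementRef<HTMLElement>;

ngAfterViewInit(): void {
  const anchor = this.wrapper.nativeElement.querySelector('a[href^="mailto:"]');
  if (anchor) {
    const email = anchor.getAttribute('href')!.replace('mailto:', '');
    anchor.setAttribute('aria-label', 'EmailId ' + email);
  }
}

This also avoids the "read only property" error, since nothing is reassigned on data.testString.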
76383765
76383881
I am implementing another instance of the Checkout Session in my app. In my donations controller, the following create action works fine: def create @donation = Donation.create(create_params) if @donation.save if Rails.env.development? success_url = "http://localhost:3000/donations_success?session_id={CHECKOUT_SESSION_ID}" cancel_url = "http://localhost:3000/" elsif Rails.env.production? success_url = "https://www.dbsan.org/donations_success?session_id={CHECKOUT_SESSION_ID}" cancel_url = "https://www.dbsan.org/" end data = { line_items: [{ price_data: { currency: 'usd', product_data: { name: @donation.program }, unit_amount: @donation.amount.to_i }, quantity: 1, }], mode: 'payment', customer_email: @donation.email, success_url: success_url, cancel_url: cancel_url } session = Stripe::Checkout::Session.create(data) redirect_to session.url, allow_other_host: true end end I copied the relevant Stripe part into my participant registration controller: def create @registrant = @challenge.challenge_participants.build(register_params) @registrant.user_id = current_user.id unless @registrant.donations.empty? @registrant.donations.first.user_id = current_user.id @registrant.donations.first.email = current_user.email end if @registrant.save @challenge = @registrant.challenge ChallengeMailer.with(registrant: @registrant).registered.deliver_now if @registrant.price.price == 0 redirect_to challenge_participant_path(@challenge, @registrant) else if Rails.env.development? success_url = "http://localhost:3000/donations_success?session_id={CHECKOUT_SESSION_ID}" cancel_url = "http://localhost:3000/" elsif Rails.env.production? success_url = "https://www.dbsan.org/donations_success?session_id={CHECKOUT_SESSION_ID}" cancel_url = "https://www.dbsan.org/" end data = { line_items: [{ price_data: { currency: 'usd', product_data: { name: "Registration" }, unit_amount: 100 }, quantity: 1, }], mode: 'payment', success_url: success_url, cancel_url: cancel_url } session = Stripe::Checkout::Session.create(data) redirect_to session.url, allow_other_host: true end end The Donations one will redirect to Stripe without issue; however, the registration one, if a pricing selected is greater than 0, it will then attempt to initiate a Stripe Checkout. In my browser console I get a Preflight response was not successful error code 403 with some TypeError that it is not giving me details of. on both of the views, the Stripe API Javascript is included just above the submit button: = javascript_include_tag "https://js.stripe.com/v3" Since I copied the code over from the donations controller, I'm not seeing what my error is. I haven't updated the success_url yet as I'm trying to first get redirected to Stripe. The name and unit_amount are right now hard coded in case my variables aren't working.
One instance of Stripe Checkout works, the other gives a Preflight response code of 403
The code you shared is a simple server-side HTTP redirect in Ruby and shouldn't cause a CORS error in the browser unless your client-side code is making an AJAX request instead of a page/form submit. Alternatively, it's possible your form submission is misconfigured and Rails turns it into a Turbo request. Adding data-turbo=false to your form might solve that problem, for example:
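A hedged sketch, assuming the registration view uses form_with (the model names are guesses based on your controller):

<%# app/views/challenge_participants/new.html.erb %>
<%= form_with model: [@challenge, @registrant], data: { turbo: false } do |f| %>
  <%# ... existing fields ... %>
  <%= javascript_include_tag "https://js.stripe.com/v3" %>
  <%= f.submit "Register" %>
<% end %>

With data: { turbo: false }, the submit is a full page navigation, so the server-side redirect_to session.url is followed by the browser instead of being fetched.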
76385399
76385527
Intersection of date ranges across rows in oracle. I have a table which contains following records Item_no item_type active_from active_to rule_id 10001 SAR 2020-01-01 2023-01-01 rule1 10001 SAR. 2024-01-01 9999-12-31 rule1 10001 SAR 2020-05-01 2021-06-01 rule2 10001 SAR 2021-01-01 2021-02-01 rule2 We need to find common dates between rule ids Output will be Item_no item_type active_from active_to 10001 SAR 2020-05-01 2021-06-01 I tried with connect by level to generate dates and then take intersection, but it is running for long time due to 9999-12-31
intersection across date ranges from multiple rows in oracle
From Oracle 12, you can UNPIVOT the dates and then use analytic functions and MATCH_RECOGNIZE to process the result set row-by-row to find the consecutive rows where both rules are active: SELECT * FROM ( SELECT item_no, item_type, rule_id, dt, SUM(CASE rule_id WHEN 'rule1' THEN active END) OVER ( PARTITION BY item_no, item_type ORDER BY dt, ACTIVE DESC ) AS rule1, SUM(CASE rule_id WHEN 'rule2' THEN active END) OVER ( PARTITION BY item_no, item_type ORDER BY dt, ACTIVE DESC ) AS rule2 FROM table_name UNPIVOT ( dt FOR active IN ( active_from AS 1, active_to AS -1 ) ) ) MATCH_RECOGNIZE( PARTITION BY item_no, item_type ORDER BY dt, rule1 DESC, rule2 DESC MEASURES FIRST(dt) AS active_from, NEXT(dt) AS active_to PATTERN ( active_rules+ ) DEFINE active_rules AS rule1 > 0 AND rule2 > 0 ) Which, for the sample data: CREATE TABLE table_name (Item_no, item_type, active_from, active_to, rule_id) AS SELECT 10001, 'SAR', DATE '2020-01-01', DATE '2023-01-01', 'rule1' FROM DUAL UNION ALL SELECT 10001, 'SAR', DATE '2024-01-01', DATE '9999-12-31', 'rule1' FROM DUAL UNION ALL SELECT 10001, 'SAR', DATE '2020-05-01', DATE '2021-06-01', 'rule2' FROM DUAL UNION ALL SELECT 10001, 'SAR', DATE '2021-01-01', DATE '2021-02-01', 'rule2' FROM DUAL; Outputs: ITEM_NO ITEM_TYPE ACTIVE_FROM ACTIVE_TO 10001 SAR 2020-05-01 00:00:00 2021-06-01 00:00:00 and for: CREATE TABLE table_name (Item_no, item_type, active_from, active_to, rule_id) AS SELECT 10001, 'SPR', DATE '2023-01-01', DATE '2023-01-31', 'rule1' FROM DUAL UNION ALL SELECT 10001, 'SPR', DATE '2023-01-31', DATE '2023-02-27', 'rule2' FROM DUAL; The output is: ITEM_NO ITEM_TYPE ACTIVE_FROM ACTIVE_TO 10001 SPR 2023-01-31 00:00:00 2023-01-31 00:00:00 fiddle
76384723
76385530
My problem is to build an array of images from an input field and display that array of images as a slider in JavaScript. Can anyone solve it? Please share the JavaScript code. document.querySelector("#a").addEventListener("change", function(){ const reader = new FileReader(); reader.addEventListener("load", ()=>{ localStorage.setItem("recent-image", reader.result) }); reader.readAsDataURL(this.files[0]); }); document.addEventListener("DOMContentLoaded()", ()=> { const imageurl = localStorage.getItem("recent-image"); if(imageurl){ document.querySelector("#b").setAttribute("src", imageurl); } }); Can you do this with an array? This code only takes one image, but I want to store multiple images in local storage as an array.
How to build an array of images from an input field and display the array as a slider
The key is to keep the whole array of images in localStorage (serialized as JSON, since localStorage only stores strings) and then loop over it when drawing the slider. Here is an example of how to solve it: var images = JSON.parse(localStorage.getItem('images') || '[]'); function saveImages() { localStorage.setItem('images', JSON.stringify(images)); } function drawImages() { var slider = document.getElementById('slider'); slider.innerHTML = ''; for (var i = 0; i < images.length; i++) { const img = images[i]; const html_img = document.createElement('img'); html_img.src = img; html_img.alt = 'Alt img'; html_img.width = 200; html_img.height = 150; slider.appendChild(html_img); } } document.querySelector("#a").addEventListener("change", function(){ const reader = new FileReader(); reader.addEventListener("load", ()=>{ images.push(reader.result); saveImages(); drawImages(); }); reader.readAsDataURL(this.files[0]); }); document.addEventListener("DOMContentLoaded", drawImages); Note the fixes compared to your snippet: the stored value is parsed with JSON.parse, saveImages() is called after each new image is added, and the event name is "DOMContentLoaded" (without parentheses).
76380801
76382830
[Edit: Updated provided code and compiler error to be easely reproduced] I'm trying to pass an async function item as parameter to an other function in Rust but it won't compile, providing a cryptic error. Here is the code I'm trying to compile. Structure definition (implemented in one crate) pub struct FirstTestComponent { table: Vec<String>, counter: usize, } impl FirstTestComponent { fn render(&mut self) { // Some other irrelevant code record_callback( FirstTestComponent::add_component, ); } } pub struct Services {} impl FirstTestComponent { async fn add_component(&mut self, _: &mut Services) { let counter = self.counter; self.table.push(counter.to_string()); self.counter += 1; } } pub fn record_callback<F, Fut>(callback: F) where F: 'static + Copy + FnOnce(&mut FirstTestComponent, &mut Services) -> Fut, Fut: core::future::Future<Output = ()> + 'static { } fn main() { } When compiling this code, I get the following error: error[E0308]: mismatched types --> src/main.rs:9:9 | 9 | / record_callback( 10 | | FirstTestComponent::add_component, 11 | | ); | |_________^ one type is more general than the other | = note: expected trait `for<'r, 's> <for<'r, 's> fn(&'r mut FirstTestComponent, &'s mut Services) -> impl for<'r, 's> std::future::Future<Output = ()> {FirstTestComponent::add_component} as std::ops::FnOnce<(&'r mut FirstTestComponent, &'s mut Services)>>` found trait `for<'r, 's> <for<'r, 's> fn(&'r mut FirstTestComponent, &'s mut Services) -> impl for<'r, 's> std::future::Future<Output = ()> {FirstTestComponent::add_component} as std::ops::FnOnce<(&'r mut FirstTestComponent, &'s mut Services)>>` note: the lifetime requirement is introduced here --> src/main.rs:30:79 | 30 | F: 'static + Copy + FnOnce(&mut FirstTestComponent, &mut Services) -> Fut, | ^^^ literally saying that something is different than itself... I guess there is some error linked to the lifetime hidden in the error message but I can't figure it out. What could cause this error ? What is wrong with what I've implemented ?
Problem passing an async function item as parameter in Rust
I found the answer here: How to bind lifetimes of Futures to fn arguments in Rust The problem as I understand it is this: From https://rust-lang.github.io/async-book/03_async_await/01_chapter.html: Unlike traditional functions, async fns which take references or other non-'static arguments return a Future which is bounded by the lifetime of the arguments So I had to force the Fut lifetime to be shorter than the input parameters lifetime. The syntax won't let me simply force this by using Higher Ranked Lifetime bounds (as explained in the answer linked above) so I had to use the pattern proposed there: generate a meta trait linking lifetimes as this: trait XFn<'a, T, S> { type Output: Future<Output = ()> + 'a; fn call(&self, this: T, services: S) -> Self::Output; } impl<'a, T: 'a, S: 'a, F, Fut> XFn<'a, T, S> for F where F: 'static + Copy + FnOnce(T, S) -> Fut, Fut: Future<Output = ()> + 'a, { type Output = Fut; fn call(&self, this: T, services: S) -> Fut { self(this, services) } } Then I can use this trait to constraints lifetime on arguments and Future like this: fn record_callback<F>(callback: F) where for<'a> F: XFn<'a, &'a mut FirstTestComponent, &'a mut Services> + 'static + Copy, { }
76383199
76383930
My routes are working and accessing components and loader functions. I'm trying now to pass variable filmsPerPage (which is defined once) to both Home component and the loader function in App.js: const App = () => { const filmsPerPage = 12 const router = createBrowserRouter([ { path: '/', children: [ { index: true, element: <Home {...{filmsPerPage}} />, loader: () => { loaderHome(filmsPerPage) } }, ................. Home.js: const Home = (props) => { const { loaderData } = useLoaderData() // get loader data --> null console.log(props.filmsPerPage) // printing out correctly: 12 ....... } export default Home; export function loaderHome(filmsPerPage) { console.log(filmsPerPage) --> printing out 12 return defer({ loaderData: loadPosts(null, 1, 12, null) }) } The prop filmsPerPage is passing correctly to Home.js component and while it's passing to the loader function, the useLoaderData() in Home.js is returning null which means that although the code in the loader function is working properly, it's not returning a loader object to the component. If I do this in App.js, the useLoaderData() function (in Home.js) will get the data but now the loader function doesn't have the prop: children: [ { index: true, element: <Home {...{filmsPerPage}} />, loader: loaderHome }, How could I pass the filmsPerPage prop to the loader function which will then return loader data to Home.js?
Passing a prop/variable to react router 6 loader function
The loader function isn't returning anything. Perhaps reformatting it into a more readable layout will make this more apparent: { index: true, element: <Home {...{filmsPerPage}} />, loader: () => { loaderHome(filmsPerPage); // <-- not returned!! }, } The loader should still return the result of calling loaderHome. Examples: { index: true, element: <Home {...{filmsPerPage}} />, loader: () => { return loaderHome(filmsPerPage); // <-- explicit return in function block }, } { index: true, element: <Home {...{filmsPerPage}} />, loader: () => loaderHome(filmsPerPage), // <-- implicit arrow function return } You could even rewrite loaderHome to curry, i.e. close over the filmsPerPage argument in function scope. export function loaderHome(filmsPerPage) { console.log(filmsPerPage); // prints 12 // Return loader function return (loaderArgs) => { return defer({ loaderData: loadPosts(null, 1, filmsPerPage, null) }); }; } { index: true, element: <Home {...{filmsPerPage}} />, loader: loaderHome(filmsPerPage), }
76385338
76385536
I have a small Spring Integration application, I'm storing messages and messaging groups in the database. Currently, I have a case when some messages/groups are waiting to be sent after group timeout, but the application restarted. And when the application started I still have messages in DB and they won't be sent. I need some configuration to send expired message group from DB or resume timer. I tried to use reaper, but it does not work as expected. My code is: @Configuration public class ConsumingChannelConfig { @Bean public DirectChannel consumingChannel() { return new DirectChannel(); } @Bean public KafkaMessageDrivenChannelAdapter<String, String> kafkaMessageDrivenChannelAdapter() { KafkaMessageDrivenChannelAdapter<String, String> kafkaMessageDrivenChannelAdapter = new KafkaMessageDrivenChannelAdapter<>(kafkaListenerContainer()); kafkaMessageDrivenChannelAdapter.setOutputChannel(consumingChannel()); MessagingMessageConverter messageConverter = new MessagingMessageConverter(); messageConverter.setGenerateMessageId(true); kafkaMessageDrivenChannelAdapter.setRecordMessageConverter(messageConverter); return kafkaMessageDrivenChannelAdapter; } @Bean public DataSource getDataSource() { return ...; } @Bean public JdbcMessageStore jdbcMessageStore() { return new JdbcMessageStore(getDataSource()); } @ServiceActivator(inputChannel = "consumingChannel") @Bean public MessageHandler aggregator() { long timeout = 10000L; AggregatingMessageHandler aggregator = new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(), jdbcMessageStore()); aggregator.setOutputChannel((message, l) -> { System.out.println("MESSAGE: " + message); return true; }); aggregator.setGroupTimeoutExpression(new ValueExpression<>(timeout)); // aggregator.setTaskScheduler(this.taskScheduler); aggregator.setCorrelationStrategy(new MyCorrelationStrategy()); aggregator.setSendPartialResultOnExpiry(true); aggregator.setExpireGroupsUponCompletion(true); aggregator.setExpireGroupsUponTimeout(true); aggregator.setDiscardChannel((message, timeout1) -> { System.out.println("DISCARD: " + message + ", timeout: " + timeout1); return true; }); aggregator.setReleaseStrategy(new ReleaseStrategy() { @Override public boolean canRelease(MessageGroup group) { return System.currentTimeMillis() - group.getTimestamp() >= timeout; } }); return aggregator; } @Bean public MessageGroupStoreReaper reaper() { MessageGroupStoreReaper reaper = new MessageGroupStoreReaper(jdbcMessageStore()); reaper.setPhase(1); reaper.setTimeout(2000L); reaper.setAutoStartup(true); // reaper.setExpireOnDestroy(true); return reaper; } @Bean public ConcurrentMessageListenerContainer<String, String> kafkaListenerContainer() { ContainerProperties containerProps = new ContainerProperties("spring-integration-topic"); return new ConcurrentMessageListenerContainer<>( consumerFactory(), containerProps); } @Bean public ConsumerFactory<String, String> consumerFactory() { return new DefaultKafkaConsumerFactory<>(consumerConfigs()); } @Bean public Map<String, Object> consumerConfigs() { Map<String, Object> properties = new HashMap<>(); properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092"); properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); properties.put(ConsumerConfig.GROUP_ID_CONFIG, "spring-integration"); // automatically reset the offset to the earliest offset properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, 
"earliest"); // DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper(); return properties; } } UPD: My Solution @EnableScheduling @SpringBootApplication public class SpringIntegrationExampleApplication { public static void main(String[] args) { SpringApplication.run(SpringIntegrationExampleApplication.class, args); } @Autowired private MessageGroupStoreReaper reaper; @Scheduled(initialDelay = 2000, fixedDelay = Long.MAX_VALUE) public void start() { reaper.run(); } }
How to send Messages after Spring Integration application restarted?
The MessageGroupStoreReaper doesn't work by itself, it has to be called from a @Scheduled method: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#reaper However there is a nice option for you from an aggregator perspective: /** * Perform a {@link MessageGroupStore#expireMessageGroups(long)} with the provided {@link #expireTimeout}. * Can be called externally at any time. * Internally it is called from the scheduled task with the configured {@link #expireDuration}. * @since 5.4 */ public void purgeOrphanedGroups() { You just need to set that expireTimeout > 0: /** * Configure a timeout in milliseconds for purging old orphaned groups from the store. * Used on startup and when an {@link #expireDuration} is provided, the task for running * {@link #purgeOrphanedGroups()} is scheduled with that period. * The {@link #forceReleaseProcessor} is used to process those expired groups according * the "force complete" options. A group can be orphaned if a persistent message group * store is used and no new messages arrive for that group after a restart. * @param expireTimeout the number of milliseconds to determine old orphaned groups in the store to purge. * @since 5.4 * @see #purgeOrphanedGroups() */ public void setExpireTimeout(long expireTimeout) { See also docs on the matter: https://docs.spring.io/spring-integration/docs/current/reference/html/message-routing.html#aggregator-xml Starting with version 5.4, the aggregator (and resequencer) can be configured to expire orphaned groups (groups in a persistent message store that might not otherwise be released).
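As a hedged sketch against the aggregator bean from your question (the 10-second timeout and 30-second period are arbitrary values, and the setters are the Spring Integration 5.4+ API), the orphaned-group handling could be enabled like this instead of the external reaper/@Scheduled combination:

// inside the existing aggregator() @Bean method, next to the other setters
aggregator.setExpireTimeout(10_000L);          // on startup, groups idle for more than 10s count as orphaned and are force-released
aggregator.setExpireDurationMillis(30_000L);   // optionally also re-run purgeOrphanedGroups() every 30s while running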
76381843
76382973
I'm importing a .csv file which I'm then modifying using calculated properties in a PSCustomObject. I'm stuck on one calculation where I'm attempting to lookup a value from a datarow object using one of the .csv values. We receive data with the supplier Part No and I need to lookup our corresponding Part No. Would you be able to suggest how best to do this please? The csv content looks like this: Vendor Code,Part No,Part Description,Bonded,Quantity,PO No,Vendor Ref TEZ,ABC1234,Dark Blue,No,50,4378923,ORD089234 TEZ,BBC1256,Orange,No,20,4378923,ORD089234 TEZ,ACD1349,Green,No,10,4378923,ORD089234 The SQL query $SKUs returns this as datarows: ITEMNO VALUE TYP-5063 ABC1234 TYP-5037 BBC1256 TYP-8069 ACD1349 So I'm looking to use the 'Part No' field from the .csv file to run a lookup against $SKUs.VALUE and return the matching $SKUs.ITEMNO. The output .csv will then include a column called 'OUR_SKU' containing the $SKUs.ITEMNO value. Here is my code so far: $Files = Get-ChildItem -Path "D:\Imports\Test\INVENTORY_HUB_RECEIPTS" $ProcessingPath = "D:\Imports\Test\INVENTORY_HUB_RECEIPTS\Processing\" $UKEntity = "TESTTRG" $HUB_ID = "TEST" $SKUs = Invoke-Sqlcmd -ServerInstance "localhost" -Database "XXXX" -Query "SELECT RTRIM(ITEMNO) AS ITEMNO, RTRIM(VALUE) AS VALUE FROM [XXXX].[dbo].[ICITEMO] WHERE OPTFIELD = 'CUSTITMNO' AND VALUE <>''" foreach ($file in $Files) { $Content = (Import-Csv -path ($ProcessingPath + $file.Name)) | Select-Object @{n='HUB_ID'; e={ $HUB_ID }}, @{e={$_.'Part No'}; l='PART_NO'}, @{e={$_.Quantity}; l='QTY_RECEIVED'}, DATE, @{n='ENTITY'; e={ $UKEntity }}, @{e={$_.'Vendor Ref'.Substring($_.'Vendor Ref'.Length -8)}; l='ORDER_ID'}, @{n='OUR_SKU'; e={ $SKUs | Where-Object {$($_.VALUE) -eq '123ABC'} | Select-Object -ExpandProperty ITEMNO}}, @{n='OUR_SKU_X'; e={ $SKUs | Where-Object {$($_.VALUE) -eq $_.'PART_NO'} | Select-Object -ExpandProperty ITEMNO}} if ($Content.Count -eq 0) {Remove-Item ($ProcessingPath + $file.Name)} else {$Content | Export-Csv -Path ($ProcessingPath + $file.Name) -Not -Force} } I've tried two examples for the new property 'OUR_SKU' this works but is obviously a static value. The property 'OUR_SKU_X' is my attempt to use the supplied $.'PART_NO' and this currently returns a blank field. The variable $SKUs does contain data and so does $.'PART_NO'. I'm thinking it's either a simple syntax error or it's not possible to use $_.'PART_NO' in the script block? Thanks Colin
Powershell - PSCustomObject with Calculated Property
Per comments, inside the where-object scriptblock on the line: $SKUs | Where-Object {$($_.VALUE) -eq $_.'PART_NO'} the automatic variable $_ relates to the individual items piped in from $SKUs, which hides the outer $_ from the $Content = (Import-Csv ...) | Select-Object ... If you want to be able to access the outer $_ inside the where-object you'll need to capture it into a temporary variable like this: e={ $tmp = $_; $SKUs | Where-Object { $_.VALUE -eq $tmp.PARTNO } | Select-Object -ExpandProperty ITEMNO}} Here's a cut-down example: $parts = @" Vendor Code,Part No,Part Description,Bonded,Quantity,PO No,Vendor Ref TEZ,ABC1234,Dark Blue,No,50,4378923,ORD089234 TEZ,BBC1256,Orange,No,20,4378923,ORD089234 TEZ,ACD1349,Green,No,10,4378923,ORD089234 "@ | ConvertFrom-Csv $skus = @" ITEMNO,VALUE TYP-5063,ABC1234 TYP-5037,BBC1256 TYP-8069,ACD1349 "@ | ConvertFrom-Csv $results = $parts | select-object @( @{l="PART_NO"; e={ $_."Part No" } }, @{l="DESC"; e={ $_."Part Description" } }, @{n='OUR_SKU'; e={ $part = $_; $skus | where-object { $_.VALUE -eq $part."Part No" } | Select-Object -ExpandProperty ITEMNO} } ) Note the $part = $_; and $_.VALUE -eq $part."Part No" inside the definition of the third calculated property. The output from the above is: $results PART_NO DESC OUR_SKU ------- ---- ------- ABC1234 Dark Blue TYP-5063 BBC1256 Orange TYP-5037 ACD1349 Green TYP-8069
76383838
76383944
The hover effect is not working in my code. Can someone help?When I run this code there is navbar present but is not clickable whereas the empty space on it's left side is clickable nor are my css hover effect working on it. * { padding: 0; margin: 0; box-sizing: border-box; scroll-behavior: smooth; font-family: 'Poppins', sans-serif; list-style: none; text-decoration: none; } :root { /* global variables */ --main-color: #ff702a; --text-color: #fff; --background-color: #1e1c2a; --big-font: 5rem; --h2-font: 2.25rem; --p-font: 0.9rem; } *::selection { background: var(--main-color); color: #fff; } body { color: var(--text-color); background: var(--background-color); } header { position: fixed; top: 0; left: 0; width: 100%; z-index: 1000; /*z-index defines stack order of element*/ display: flex; align-items: center; /*controls space around cross axis*/ justify-content: space-between; /*controls space around main axis*/ padding: 30px 170px; background: var(--background-color); } .logo { color: var(--main-color); font-weight: 600; font-size: 2.4rem; } .navbar { display: flex; } .navbar li a { color: var(--text-color); font-size: 1.1rem; padding: 10px 20px; font-weight: 500; } .navbar li a:hover { color: var(--main-color); transition: .4s; } <!DOCTYPE html> <html> <head> <meta charset='utf-8'> <meta http-equiv='X-UA-Compatible' content='IE=edge'> <title>Website for Foodies!</title> <meta name='viewport' content='width=device-width, initial-scale=1'> <link rel='stylesheet' type='text/css' media='screen' href='main.css'> <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/boxicons@latest/css/boxicons.min.css"> <link rel="preconnect" href="https://fonts.googleapis.com"> <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin> <link href="https://fonts.googleapis.com/css2?family=Nunito+Sans:wght@300;800&family=Poppins:wght@100;200;300;400;500;600;700;800;900&display=swap" rel="stylesheet"> </head> <body> <header> <a href="#" class="logo">Foods</a> <div class="bx bx-menu" id="menu-icon"></div> <!--class=" bx bx-menu" is responsible for icon from boxicon--> <ul class="navbar"> <li><a href="#Home"></a>Home</li> <li><a href="#About"></a>About</li> <li><a href="#Menu"></a>Menu</li> <li><a href="#Service"></a>Service</li> <li><a href="#Contact"></a>Contact</li> </ul> </header> </body> </html> I was trying to make a responsive website and I was expecting the text in my navbar to change color when i hover on it
Hover is not working on navbar items, and the cursor changes to a pointer when hovering beside the text rather than on the text
According to your CSS: .navbar li a:hover { color: var(--main-color); transition: .4s; } you are applying the hover effect to the anchor tags, i.e. .navbar > li > a. Now look at your HTML code: <ul class="navbar"> <li><a href="#Home"></a>Home</li> <li><a href="#About"></a>About</li> <li><a href="#Menu"></a>Menu</li> <li><a href="#Service"></a>Service</li> <li><a href="#Contact"></a>Contact</li> </ul> In <a href="#Home"></a> there is nothing inside the anchor tag; the text Home sits outside it, which is why only a tiny empty area is clickable and the hover never triggers on the text. Put all of the menu texts (Home, About, Menu, Service, Contact) inside the anchor tags, like this: <a href="#Home">Home</a> The fully corrected list is shown below.
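For completeness, the corrected navbar markup (only the position of the text changes):

<ul class="navbar">
  <li><a href="#Home">Home</a></li>
  <li><a href="#About">About</a></li>
  <li><a href="#Menu">Menu</a></li>
  <li><a href="#Service">Service</a></li>
  <li><a href="#Contact">Contact</a></li>
</ul>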
76383381
76383955
I have a dataframe with different combinations of factors. Each factor is presented in its own column (see below): F1 F2 F3 F4 1 1 1 1 1 1 I want to add a new column at the end, like below: F1 F2 F3 F4 trt 1 1 F1_F2 1 1 F1_F3 1 1 F1_F4 How do I create this column with conditional merging in R? Any advice would be appreciated!
Adding new column for different rows based on the values present in the same row for a different column
You can stack the data frame into long form, tag each value with its row number, drop the NA entries, and then aggregate per row, pasting the remaining column names together with "_": aggregate(ind~row, na.omit(cbind(row = c(row(df)), stack(df))), paste, collapse = "_") row ind 1 1 F1_F2 2 2 F1_F3 3 3 F1_F4 Data used: df <- structure(list(F1 = c(1L, 1L, 1L), F2 = c(1L, NA, NA), F3 = c(NA, 1L, NA), F4 = c(NA, NA, 1L)), class = "data.frame", row.names = c(NA, -3L))
76385226
76385541
I have this very basic terraform file: main.tf terraform { required_providers { aws = { source = "hashicorp/aws" version = "~> 4.16" } } required_version = ">= 1.2.0" } provider "aws" { profile = "default" } resource "aws_s3_bucket" "test-bucket-terraform-regergjegreg" { bucket = "test-bucket-terraform-regergjegreg" tags = { Name = "My bucket" Environment = "Dev" } } when doing terraform validate, I have no error: Success! The configuration is valid. Now when I do terraform plan, I got this error: Planning failed. Terraform encountered an error while generating this plan. ╷ │ Error: configuring Terraform AWS Provider: credential type source_profile requires role_arn, profile default │ │ with provider["registry.terraform.io/hashicorp/aws"], │ on main.tf line 12, in provider "aws": │ 12: provider "aws" { │ ╵ Here is my terraform version: Terraform v1.4.6 on darwin_amd64 + provider registry.terraform.io/hashicorp/aws v3.76.1 the default profile exists in my /.aws/credentials and /.aws/config Not sure what could be wrong really. Any help is appreciated. thanks
Why does my basic Terraform config fail on terraform plan?
OK, I resolved the issue. My problem was not in the credentials file but in the config file. For some reason I had this: [default] output = json region = eu-west-1 source_profile = default Removing the source_profile line makes it work; the corrected profile is shown below.
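For reference, the working [default] profile in ~/.aws/config after the fix is simply:

[default]
output = json
region = eu-west-1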
76381780
76383057
I want to visualize a pre-formatted text (with YAML format and indent). It seems that the <|{text}|> markdown pattern and the state representation removes intents from the text, i.e. all becomes a long mashed text. Here is an example output. version: '3.1' stories: - story: 06a6e2c5e8bd4058b304b4f23d57aa80 steps: - intent: bot_capabilities user: What can you do Correct is this: version: '3.1' stories: - story: 06a6e2c5e8bd4058b304b4f23d57aa80 steps: - intent: bot_capabilities user: What can you do Is there a way to keep preformatted text especially with indents? I could not yet find a fitting property for the "text" control. Raw does not seem to solve the issue. If I print the string before assigning it to a state variable, the output format is correct. Therefore, I assume the stripping of empty space happens automatically afterwards.
How to visualize formatted text without removing empty space?
The most straightforward way is to use an input visual element with multiline property turned on. main.py: from taipy.gui import Gui #with open("file.yaml", "r") as f: # yaml_text = f.read() yaml_text = """ version: '3.1' stories: - story: loferum ipsi steps: - intent: bot_capabilities user: What can you do """ page = """ <|{yaml_text}|input|multiline|label=Input|> <|{yaml_text}|input|multiline|not active|label=Inactive input|> <|{yaml_text}|input|multiline|not active|label=Inactive white input|id=yaml|> """ Gui(page).run() Here is the result: The first input is editable The second is not editable and grey The third is not editable and white For the last element, I added a bit of CSS to make the inactive input white: main.css: #yaml.Mui-disabled{ color: white !important; -webkit-text-fill-color: white !important; } An issue has been created on GitHub to improve text visual elements directly.
76383189
76383961
I am trying to search a given directory for a specific file, and if that file does not exist I would want the code to say "File does not exist". Currently with os.walk I can get this to work, however this will hit on every single file that isn't the specified file and print "File dos not exist". I know that this is how os.walk functions, but I was not sure if there is a way to make it only print out once if it is found or not found. Folder structure: root folder| |Project Folder |file.xml |other files/subfolders How I would want the code to work is to go inside of "Project Folder", do a recursive search for "file.xml", and once it is found print out once "Found", otherwise prints out once "Not found". The code is: def check_file(x): #x = root folder dir for d in next(os.walk(x))[1]: #if I understand correctly, [1] will be Project Folder for root, directories, files in os.walk(x): for name in files: if "file.xml" not in name: print("found") else: print("File Missing") If I change the code to for name in files: if "file.xml" in name: print("found") else: pass The code technically works as intended, but it doesn't really do much to help point out if it isn't there, so this isn't a good solution. It would be easier if I was able to give the code a specific path to look in, however as the user is able to place the 'root folder' anywhere on their machine as well as the 'project folder' would have different names depending on the project, I don't think I would be able to give the code a specific location. Is there a way to get this to work with os.walk, or would another method work best?
How can I limit os.walk results for a single file?
The glob module is very convenient for this kind of wildcard-based recursive search. Particularly, the ** wildcard matches a directory tree of arbitrary depth, so you can find a file anywhere in the descendants of your root directory. For example: import glob def check_file(x): # where x is the root directory for the search files = glob.glob('**/file.xml', root_dir=x, recursive=True) if files: print(f"Found {len(files)} matching files") else: print("Did not find a matching file")
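If you prefer pathlib over glob, an equivalent sketch (same behaviour, recursive search for a single file) would be:

from pathlib import Path

def check_file(x):  # x is the root directory for the search
    match = next(Path(x).rglob("file.xml"), None)
    if match:
        print(f"Found: {match}")
    else:
        print("File does not exist")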
76385139
76385542
How can I implement a custom method that checks whether a user exists, based on a parameter given in the URL? I want to do this with a GET request.
ABAP OData service: implementing a custom GET or POST request
Assumption: you are talking about the ABAP implementation of OData v2 with the SEGW approach. You are looking for a so-called Function Import. This allows you to define custom functions next to the predefined CRUDQ (Create, Read, Update, Delete, Query) functions. Such a custom function can have custom input parameters and return values as simple types or Complex Types (= structures/entities). As also stated in the SAP Help, this should only be used if the requirement does not fit into the CRUDQ methods for your entities. Find an example implementation in this SAP Blog; a rough sketch of the runtime method is also given below. For other approaches, e.g. the RESTful ABAP Programming Model (RAP), the equivalent would be Actions and Validations, but that cannot be answered in general without details of your scenario.
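A very rough sketch of the runtime side (the function import name CheckUserExists and the parameter name Username are assumptions; the redefined method lives in the SEGW-generated *_DPC_EXT class):

METHOD /iwbep/if_mgw_appl_srv_runtime~execute_action.
  " Called for every function import of the service
  IF iv_action_name = 'CheckUserExists'.
    " Read the parameter passed in the URL, e.g. .../CheckUserExists?Username='SMITH'
    READ TABLE it_parameter INTO DATA(ls_param) WITH KEY name = 'Username'.
    IF sy-subrc = 0.
      " ... run the existence check with ls_param-value,
      " then fill the result structure and hand it back via copy_data_to_ref( ).
    ENDIF.
  ENDIF.
ENDMETHOD.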
76380562
76383104
Consider i have a file called developers.txt What i want to do is (and i could do it in git neatly) have 3-4 commits, each adding 1 line per commit commit 2ad54bfe954006bafcb209f06ac0c12091d297c8 (HEAD -> main) Author: Anuraag <[email protected]> Date: Thu Jun 1 15:04:57 2023 +0530 author #3 added developers.txt commit 26cabf7fa07154b19deff41c5cbb07bf0782bde7 Author: Anuraag <[email protected]> Date: Thu Jun 1 15:04:43 2023 +0530 author #2 added developers.txt commit d9e57385b505e2e02a8dc6064466c3974f22e66d Author: Anuraag <[email protected]> Date: Thu Jun 1 15:04:23 2023 +0530 author #1 added developers.txt commit 03e7f4abddfa26b4b0518994c81e0a724ba1a778 Author: Anuraag <[email protected]> Date: Thu Jun 1 15:03:56 2023 +0530 adds heading developers.txt % cat developers.txt Authors: 1. Arthur 2. Canon 3. Doyle What i want to know is, is there such an incremental local workspace development method when using perforce? I want to have SCL#3 to be on top of SCL#2, SCL#1 etc.. but each SCL will be for the same file, in this case developers.txt
Have incremental local Perforce SCLs similar to git for a single file
Yes, this is just basic versioning and should behave similarly across any version control system. Every version builds on the one before it. C:\Perforce\test>echo Authors:>developers.txt C:\Perforce\test>p4 add developers.txt //stream/main/developers.txt#1 - opened for add C:\Perforce\test>p4 submit -d "adds heading" Submitting change 460. Locking 1 files ... add //stream/main/developers.txt#1 Change 460 submitted. C:\Perforce\test>p4 edit developers.txt //stream/main/developers.txt#1 - opened for edit C:\Perforce\test>echo 1. Arthur>>developers.txt C:\Perforce\test>p4 submit -d "author #1 added" Submitting change 461. Locking 1 files ... edit //stream/main/developers.txt#2 Change 461 submitted. C:\Perforce\test>p4 edit developers.txt //stream/main/developers.txt#2 - opened for edit C:\Perforce\test>echo 2. Conan>>developers.txt C:\Perforce\test>p4 submit -d "author #2 added" Submitting change 462. Locking 1 files ... edit //stream/main/developers.txt#3 Change 462 submitted. C:\Perforce\test>p4 edit developers.txt //stream/main/developers.txt#3 - opened for edit C:\Perforce\test>echo 3. Doyle>>developers.txt C:\Perforce\test>p4 submit -d "author #3 added" Submitting change 463. Locking 1 files ... edit //stream/main/developers.txt#4 Change 463 submitted. Now we have our developers.txt with 4 versions. We can see that the head revision contains all 4 changes: C:\Perforce\test>cat developers.txt Authors: 1. Arthur 2. Conan 3. Doyle We can see its history as a list of the changes made to it: C:\Perforce\test>p4 filelog developers.txt //stream/main/developers.txt ... #4 change 463 edit on 2023/06/01 by Samwise@Samwise-dvcs-1509687817 (text) 'author #3 added' ... #3 change 462 edit on 2023/06/01 by Samwise@Samwise-dvcs-1509687817 (text) 'author #2 added' ... #2 change 461 edit on 2023/06/01 by Samwise@Samwise-dvcs-1509687817 (text) 'author #1 added' ... #1 change 460 add on 2023/06/01 by Samwise@Samwise-dvcs-1509687817 (text) 'adds heading' And we can annotate the file to see the content of the file in context of the history, i.e. which revision/changelist added each line of content: C:\Perforce\test>p4 annotate developers.txt //stream/main/developers.txt#4 - edit change 463 (text) 1: Authors: 2: 1. Arthur 3: 2. Conan 4: 3. Doyle C:\Perforce\test>p4 annotate -c developers.txt //stream/main/shelves/developers.txt#4 - edit change 463 (text) 460: Authors: 461: 1. Arthur 462: 2. Conan 463: 3. Doyle
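If you also want a git log-style, changelist-oriented view (rather than the per-revision filelog output above), p4 changes on the same file path lists the changelists that touched it, newest first; add -l for full, untruncated descriptions (the path below is just the example depot path used above):

C:\Perforce\test>p4 changes -l //stream/main/developers.txt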
76383470
76383965
My project is located here: D:/WorkSpace/PuzzleApp but when I am calling new File(".").getAbsolutePath(); I get: D:\!Documents\Desktop\. Why? And how to fix this? I'm using Eclipse.
Why does new File(".").getAbsolutePath() return a completely different path from my project's location?
When you open a File with the relative path "." that path is resolved against the current process's working directory. Usually, the working directory is inherited from the parent process (e.g. if you run your app from a terminal, the terminal's current working directory becomes your process's working directory). Using a relative path inside your application is considered a bad idea because you cannot control the process's working directory. But a hardcoded absolute path also comes with problems and is not good practice. There are several ways to solve the issue: Use a path relative to an absolute path that is NOT hardcoded. This absolute path should be externalized to environment variables, properties files, etc. // Getting the directory path from an external environment variable String myAbsolutePath = System.getenv("LOCAL_STORAGE_DIR"); Use a path relative to your application's classpath root. This lets you READ files even when they are located inside .jar or .war archives. // This path looks absolute, but it is resolved against the classpath root InputStream stream = MyClass.class.getResourceAsStream("/dir/another/file.txt"); You should also understand that your application won't run inside your IDE project folder. It will be deployed to a production server/user's desktop/Android device, and usually it is packed in a jar/war/ear/... archive. That means you won't have access to your src folder or anything like it.
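As a minimal, self-contained sketch of the first option (the variable name LOCAL_STORAGE_DIR comes from the snippet above; the file name data.txt is just a placeholder):

import java.nio.file.Path;
import java.nio.file.Paths;

public class PathDemo {
    public static void main(String[] args) {
        // the base directory is configured outside the code (environment variable)
        String baseDir = System.getenv("LOCAL_STORAGE_DIR");
        // resolve the file against that configured directory instead of "."
        Path file = Paths.get(baseDir, "data.txt");
        System.out.println(file.toAbsolutePath());
    }
}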
76381260
76383270
In our project, after upgrade the SpringBoot from 3.0.4 to 3.0.9, several of our tests started to fail on Caused by: org.springframework.aop.framework.AopConfigException: Unexpected AOP exception at app//org.springframework.aop.framework.CglibAopProxy.buildProxy(CglibAopProxy.java:222) at app//org.springframework.aop.framework.CglibAopProxy.getProxy(CglibAopProxy.java:158) at app//org.springframework.aop.framework.ProxyFactory.getProxy(ProxyFactory.java:110) at app//org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.buildProxy(AbstractAutoProxyCreator.java:517) at app//org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.createProxy(AbstractAutoProxyCreator.java:464) at app//org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.wrapIfNecessary(AbstractAutoProxyCreator.java:369) at app//org.springframework.aop.framework.autoproxy.AbstractAutoProxyCreator.postProcessAfterInitialization(AbstractAutoProxyCreator.java:318) at app//org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:434) at app//org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1773) at app//org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:598) ... 80 more Caused by: java.lang.ClassCastException: class our.project.RepositoryConfiguration$LiquibaseConfiguration$$SpringCGLIB$$0 cannot be cast to class org.springframework.cglib.proxy.Factory (our.project.RepositoryConfiguration$LiquibaseConfiguration$$SpringCGLIB$$0 and org.springframework.cglib.proxy.Factory are in unnamed module of loader 'app') at org.springframework.aop.framework.ObjenesisCglibAopProxy.createProxyClassAndInstance(ObjenesisCglibAopProxy.java:91) at org.springframework.aop.framework.CglibAopProxy.buildProxy(CglibAopProxy.java:213) ... 89 more It's not bounded just to the our.project.RepositoryConfiguration$LiquibaseConfiguration. When I disable this configuration, then similar exception occurs on next configuration. Another weird thing is that the test passes if only the single test class is called or if this class is executed first. Otherwise the ClassCastException ocurs. We use TestNG for testing. I tried to upgrade Spring to version 3.0.6 from 3.0.4 and Spring Boot to version 3.0.6 from 3.0.4 and I expect that all our current tests will pass. But for 34 tests of our ~ 2000 there is an exception java.lang.ClassCastException: class our.project.RepositoryConfiguration$LiquibaseConfiguration$$SpringCGLIB$$0 cannot be cast to class org.springframework.cglib.proxy.Factory (our.project.RepositoryConfiguration$LiquibaseConfiguration$$SpringCGLIB$$0 and org.springframework.cglib.proxy.Factory are in unnamed module of loader 'app')
ClassCastException for configuration CGLIB proxy and org.springframework.cglib.proxy.Factory after upgrading Spring to 6.0.9 and Spring Boot to 3.0.6
Finally, after almost a week of investigation, I managed to isolate the culprit: Solr 8.2.1 causes this issue. With Solr 8.2.0 it works; with Solr 8.2.1 the ClassCastException for Spring proxies occurs. Hard to believe that this is related.
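For anyone hitting the same symptom, the practical workaround until the root cause is understood is to pin the Solr client dependency back to the working version. Assuming the artifact involved is the standard org.apache.solr:solr-solrj pulled in (directly or transitively) by the build - check mvn dependency:tree to confirm, since the exact artifact here is an assumption - a Maven pin could look like:

<dependency>
    <!-- assumed artifact and coordinates; adjust to what your build actually resolves -->
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-solrj</artifactId>
    <version>8.2.0</version>
</dependency>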
76380844
76383334
My problem is the following: I can make a figure in which the data is weighted relative to the entire population, but not relative to its own subpopulation. To illustrate with an example: Suppose I have a dataset DS, with two columns: X and type. X is a continuous value ranging from -5 to 5, and type is either A, B or C. How would I create a frequency plot of X in which each tuple is weighted by the total of its type, not the total of all tuples in the dataset? This is my closest attempt, yet it normalizes to the total population: figure1 <- ggplot(data = DS, aes(x = X))+ geom_freqpoly(aes(colour = type, y= after_stat(count / sum(count)))) + ... It's not surprising that this normalizes to the entire dataset, but I wouldn't know how to get it such that it only normalizes to a subset. Using dput(), I generate the following example dataframe: DS <- structure(list(X = c(0, -0.01, 0.042944432215413, 0.0431301011419889, 0.042944432215413, 0.0424042102083902, 0.2100000012 , 0.13513333335333), TimePoint = c("early", "early", "late", "mid", "mid", "early", "late", "early")), row.names = c(NA,8L), class = "data.frame") In which 'X' is the continuous value and 'TimePoint' is the factor which can be either 'early', 'mid' or 'late'.
How can I create a frequency plot/histogram in R using ggplot2 while normalizing to the total of a factor?
One option would be to use e.g. ave() to compute the count per group or Timepoint: library(ggplot2) ggplot(data = DS, aes(x = X)) + geom_freqpoly( aes( colour = TimePoint, y = after_stat(count / ave(count, group, FUN = sum)) ) ) #> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
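If the after_stat()/ave() expression feels opaque, an equivalent approach is to pre-compute a per-group weight so that each TimePoint sums to 1 (a sketch using dplyr; stat_bin sums the weight aesthetic, so the counts become within-group proportions):

library(dplyr)
library(ggplot2)

DS %>%
  group_by(TimePoint) %>%
  mutate(w = 1 / n()) %>%   # each observation contributes 1/(group size)
  ungroup() %>%
  ggplot(aes(x = X, colour = TimePoint, weight = w)) +
  geom_freqpoly(bins = 30)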
76385118
76385549
So I was trying to write a quick batch file, but it only did half of what I wanted, so I switched to PowerShell because I remembered getting that to work a while back, and it did work. My issue is essentially that I have a handful of users I want to give access to this script (it just closes a program and reopens it). They're familiar with batch files (which I couldn't get to reopen the program) but would not be used to using a PowerShell script and having to right-click to run it with PowerShell, which I expect will cause issues and lead many of the users to not use the script in the first place. Is there either something I did wrong in the batch file that keeps it from reopening the program, or is there a way to change the left-click behaviour for the PowerShell script (currently left-click opens the script in Notepad; ideally it would just run with PowerShell on left-click)? The code is the same for both PowerShell and CMD, copied and pasted directly: taskkill /IM ADM.TrayApp.exe /F Start-Process "C:\Program Files (x86)\athenahealth, Inc\aNetDeviceManager\3.1.4.0\TrayApp\CoreModule\ADM.TrayApp.EXE"
PowerShell/CMD issue: script to close and reopen a program
You cannot use PowerShell's Start-Process directly in a batch file - you'd have to call it via powershell.exe (the Windows PowerShell CLI) or pwsh (the PowerShell (Core) CLI). However, as Stephan points out, cmd.exe's internal start command provides similar functionality, so it should be sufficient in your case (as Stephan notes, a window title enclosed in "..." is needed as the first argument if the executable to launch is enclosed in "..." too; "" will do): start "" "C:\Program Files (x86)\athenahealth, Inc\aNetDeviceManager\3.1.4.0\TrayApp\CoreModule\ADM.TrayApp.EXE" It is possible to make PowerShell scripts execute by default when (double-)left-clicked from File Explorer or the desktop, but this requires nontrivial setup on each machine: see this answer. The linked answer also describes an alternative technique of providing simple companion batch files whose sole purpose is to execute an associated PowerShell script.
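Putting it together, a complete batch file for your users could look like this sketch (same process name and path as in the question; the timeout line is optional and just gives the old process a moment to exit before relaunching):

@echo off
taskkill /IM ADM.TrayApp.exe /F
rem optional pause so the process is fully gone before relaunching
timeout /t 2 /nobreak >nul
start "" "C:\Program Files (x86)\athenahealth, Inc\aNetDeviceManager\3.1.4.0\TrayApp\CoreModule\ADM.TrayApp.EXE"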
76380912
76383339
As a newbie I prefer to use absolute XPath to find the WebElements where the text is located. I tried: List<WebElement> elements = web.findElements(By.xpath("/html[1]/body[1]/div[2]/div[2]/dl[1]/dd[2]/div[2]/div/div[1]/ol[5]/li[1]/div[2]/div/p")); But I failed to capture the text under tags whose paths differ slightly. Target xpaths: /html[1]/body[1]/div[2]/div[2]/dl[1]/dd[2]/div[2]/div[6]/div[1]/ol[5]/li[1]/div[2]/div[4]/div[1]/div[1]/p[1] /html[1]/body[1]/div[2]/div[2]/dl[1]/dd[2]/div[2]/div[6]/div[1]/ol[5]/li[1]/div[2]/div[8]/p /html[1]/body[1]/div[2]/div[2]/dl[1]/dd[2]/div[2]/div[2]/div[1]/ol[5]/li[1]/div[2]/div[3]/h3[1] /html[1]/body[1]/div[2]/div[2]/dl[1]/dd[2]/div[2]/div[2]/div[1]/ol[5]/li[1]/div[2]/div[2]/div[1]/p /html[1]/body[1]/div[2]/div[2]/dl[1]/dd[2]/div[2]/div[5]/div[1]/ol[5]/li[1]/div[2]/div[1]/div[1]/p[1]/strong[1] What is the correct formula or way to get all the text content at the above-mentioned xpaths?
What is the best way to find text content under relatively indistinguishable tags with Selenium WebDriver?
It is not very clear what you want. If you want all elements that contain direct text, you could use: /html/body[1]//*[text()[normalize-space()]] This will return all elements with direct text() nodes that, after stripping unnecessary whitespace, still have character data. Meaning of the XPath parts: // = any descendant; see this info on axes * = any element [some filter] = predicate to filter the directly preceding node [#number] = the position within its siblings body[1] may seem redundant, but it can help the XPath engine not to look any further for other body elements text() = node of type text normalize-space() = strips whitespace according to these rules
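Since the question is using Java, the XPath above could be consumed like this (a sketch; it assumes web is an already-initialised WebDriver, as in the question, and you may want to anchor the expression to your ol[5]/li[1] container instead of body):

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;

List<WebElement> elements = web.findElements(
        By.xpath("/html/body[1]//*[text()[normalize-space()]]"));
for (WebElement element : elements) {
    // getText() returns the element's visible text, including descendants
    System.out.println(element.getText());
}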
76384278
76385594
I'm moving from a SQL Server database to a DB2 database. I'm trying to compare the primary key of the SQL Server database with the DB2 database: if the primary key values are the same, do an update; if they are not the same, do an insert. My problem is that when I use a lookup on the primary key (the first four columns), it only returns those four columns; I need some way of getting the other two columns so I can run an OLE DB command and do the update. I want to compare the four primary key columns (FISCAL_YR, LOC_CODE, SYSTEM_ID, SYSTEM_CODE) and see if they exist in the destination table; if they do, I want to update two other columns (LOC_NAME, ALIAS_NAME), and if they don't match, I want to do an insert. This is what I have for the lookup currently, but it matches all the columns. I just want to match on the first four (FISCAL_YR, LOC_CODE, SYSTEM_ID, SYSTEM_CODE) but output 6 columns (FISCAL_YR, LOC_CODE, SYSTEM_ID, SYSTEM_CODE, LOC_NAME, ALIAS_NAME) to the OLE DB Command so I can do an update: SELECT FISCAL_YR, LOC_CODE, SYSTEM_ID, SYSTEM_CODE, LOC_NAME, ALIAS_NAME FROM LC1U1.Location_Supertbl1; This is the update command I want to execute: UPDATE LC1U1.Location_Supertbl1 SET ALIAS_NAME = UPPER('test') WHERE FISCAL_YR = ? AND LOC_CODE = ? AND SYSTEM_ID = ? AND SYSTEM_CODE = ? How do I do this with a Lookup transformation in SSIS? Thank you.
How to do a lookup on the four primary key columns yet output 6 columns to the OLE DB Command
I think the issue is that you're not asking for the columns from the Lookup component. In the UI, you drag lines between the left (Source) and right (Lookup) side; this defines the equality match for the lookup. What you want to do is also check the 2 additional columns in that dialog, something like the following image. This will add LOC_NAME and ALIAS_NAME as new columns in the data flow, downstream of the Lookup component.
76383537
76383985
I want to validate a document before it is inserted into the database. I know that I can set a static validator, but I would prefer to have a file with the validation schemas that I could modify at any time. //example schema const userCreateValidationSchema = { bsonType: 'object', required: ['username', 'password'], properties: { username: { bsonType: 'string', maxLength: 16, }, password: { bsonType: 'string', maxLength: 64, }, }, additionalProperties: false, }; //example document const document = { username: "user", password: "passwd", }; Then I would do something like validate(document, userCreateValidationSchema). Thanks for any thoughts. I have tried looking for the answer in the documentation but unfortunately didn't find a solution.
How can I validate a MongoDB document using a validation schema from a file?
To perform login validation using the npm package Joi :- npm install joi Import Joi and Define Validation Schema :- const Joi = require('joi'); const loginSchema = Joi.object({ email: Joi.string().email().required(), password: Joi.string().min(6).required(), }); In the above example, the validation schema requires the email field to be a valid email address and the password field to have a minimum length of 6 characters. Perform Validation: Now you can use the validate() method provided by Joi for the validation. function validateLogin(loginData) { const { error, value } = loginSchema.validate(loginData); return error ? error.details[0].message : null; } Usage Example: Here's an example of how you can use the validateLogin() function to validate the login data: const loginData = { email: '[email protected]', password: 'password123', }; const validationError = validateLogin(loginData); if (validationError) { console.log('Login validation failed:', validationError); } else { console.log('Login data is valid.'); }
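Applied to the document from the question (username/password), a schema kept in its own file - so it can be edited without touching the rest of the app - could look like the sketch below; the file names are just placeholders:

// validation/userCreate.js
const Joi = require('joi');

const userCreateSchema = Joi.object({
  username: Joi.string().max(16).required(),
  password: Joi.string().max(64).required(),
}).unknown(false); // like additionalProperties: false - reject unknown keys

module.exports = { userCreateSchema };

// somewhere before the MongoDB insert
const { userCreateSchema } = require('./validation/userCreate');

const { error } = userCreateSchema.validate(document);
if (error) {
  throw new Error(`Invalid user document: ${error.details[0].message}`);
}
// safe to insert `document` into the collection here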
76385098
76385628
I am trying to use an actionButton in a leaflet popup inside a Shiny module. When trying to use an action button in a leaflet popup in a Shiny module, the button does not work. See the example below: library(shiny) library(leaflet) library(DT) map_ui <- function(id) { ns <- NS(id) tagList( leafletOutput(ns("mymap")) ) } map_Server <- function(id) { moduleServer( id, function(input, output, session) { mapdata <- datasets::quakes mapdata$latitude <- as.numeric(mapdata$lat) mapdata$longitude <- as.numeric(mapdata$long) mapdata$id <- 1:nrow(mapdata) output$mymap <- renderLeaflet({ leaflet(options = leafletOptions(maxZoom = 18)) %>% addTiles() %>% addMarkers(lat = ~ latitude, lng = ~ longitude, data = mapdata, layerId = mapdata$id, popup= ~paste("<b>", mag, "</b></br>", actionLink(inputId = "modal", label = "Modal", onclick = 'Shiny.setInputValue(\"button_click\", this.id, {priority: \"event\"})'))) }) observeEvent(input$button_click, { showModal(modalDialog( title = "TEST MODAL" )) }) } ) } ui <- fluidPage( map_ui('ex1') ) server <- function(input, output){ map_Server('ex1') } shinyApp(ui, server) Is there any way to make that button work inside the module? I think it comes from the fact that the button's input ID is not wrapped in ns(), but I can't find a way to make it work. Thanks
R-Shiny: use an action button in a leaflet popup inside a Shiny module
Yes, you have to add the ns: function(input, output, session) { ns <- session$ns ...... output$mymap <- renderLeaflet({ leaflet(options = leafletOptions(maxZoom = 18)) %>% addTiles() %>% addMarkers( lat = ~ latitude, lng = ~ longitude, data = mapdata, layerId = mapdata$id, popup = ~paste( "<b>", mag, "</b></br>", actionLink( inputId = "modal", label = "Modal", onclick = sprintf( 'Shiny.setInputValue(\"%s\", this.id, {priority: \"event\"})', ns("button_click") ) ) ) ) }) ...... }
76384287
76385632
I am using ggplot2 to visualise map-related data. I have coloured regions according to a continuous value, and I would like to add a legend with colors and region names. My own data is a bit cumbersome to share, but I have recreated the scenario with public data (Mapping in ggplot2). The following code creates the included map: library(ggplot2) library(sf) # Import a geojson or shapefile map <- read_sf("https://raw.githubusercontent.com/R-CoderDotCom/data/main/shapefile_spain/spain.geojson") ggplot(map) + geom_sf(color = "white", aes(fill = unemp_rate)) + geom_sf_text(aes(label = name), size = 2) Instead of the continuous default legend, I would like to have a legend with names, numbers and colors. Basically, a legend that shows the name and unemp_rate columns of the data with colors matching the map (eg. unemp_rate). Somewhat like the legend of the second included picture (but the colors are not right). name unemp_rate "Andalucía" 18.68 "Aragón" 8.96 "Principado de Asturias" 11.36 "Islas Baleares" 9.29 "Islas Canarias" 17.76 "Cantabria" 8.17 "Castilla y León" 10.19 "Castilla-La Mancha" 14.11 "Cataluña" 9.29 "Comunidad Valenciana" 12.81 "Extremadura" 16.73 "Galicia" 11.20 "Comunidad de Madrid" 10.18 "Región de Murcia" 12.18 "Comunidad Foral de Navarra" 8.76 "País Vasco" 8.75 "La Rioja" 10.19 "Ceuta y Melilla" 23.71 My actual code looks like so: ggplot(map, aes(geometry = geometry, fill = Y1)) + theme_bw() + geom_sf(show.legend = FALSE) + scale_fill_gradient2(low = "brown", high = "green") + theme( axis.title = element_blank(), axis.text = element_blank(), axis.ticks = element_blank())
Adding a legend to a ggplot map
Perhaps an inset bar chart instead: library(ggplot2) library(sf) library(dplyr) library(patchwork) # Import a geojson or shapefile map_ <- read_sf("https://raw.githubusercontent.com/R-CoderDotCom/data/main/shapefile_spain/spain.geojson") %>% mutate(name = forcats::fct_reorder(name, desc(unemp_rate))) g1 <- map_ %>% ggplot() + geom_sf(color = "white", aes(fill = unemp_rate)) + geom_sf_text(aes(label = name), size = 2) + scale_fill_gradient2(low = "brown", high = "green", midpoint = 16) + theme_minimal() + theme(legend.position = "none", axis.text = element_blank(), axis.title = element_blank(), panel.grid = element_blank()) g2 <- map_ %>% ggplot(aes(x = unemp_rate, y = name, fill = unemp_rate)) + geom_col() + scale_fill_gradient2(low = "brown", high = "green", midpoint = 16) + geom_text(aes(label = name, x = .5, hjust = 0)) + geom_text(aes(label = unemp_rate), nudge_x = - .5, hjust = 1) + theme_void() + theme(legend.position = "none") g1 + inset_element(g2, 0, .2, .9, 1)