Columns: QuestionId (string, 8 chars), AnswerId (string, 8 chars), QuestionBody (string, 91 to 22.3k chars), QuestionTitle (string, 17 to 149 chars), AnswerBody (string, 48 to 20.9k chars)
76390268
76390820
Correct way to check redis key's type in lua script. I'm trying to wrap my head around redis lua scripting and I can't find the correct way to check what type the key has. Here is what I've tried: 127.0.0.1:6379> SET test_key test_value OK 127.0.0.1:6379> GET test_key "test_value" 127.0.0.1:6379> EVAL 'local type = redis.call("TYPE", KEYS[1]); return type' 1 test_key string So I see that the type = "string", but: 127.0.0.1:6379> EVAL 'local type = redis.call("TYPE", KEYS[1]); local res; if type == string then res = "ok" else res = "not ok" end return res' 1 test_key "not ok" 127.0.0.1:6379> EVAL 'local type = redis.call("TYPE", KEYS[1]); local res; if type == string then res = "ok" else res = "not ok" end return res' 1 "test_key" "not ok" 127.0.0.1:6379> EVAL 'local type = redis.call("TYPE", KEYS[1]); local res; if type == "string" then res = "ok" else res = "not ok" end return res' 1 "test_key" "not ok" 127.0.0.1:6379> EVAL 'local type = redis.call("TYPE", KEYS[1]); local res; if type == "string" then res = "ok" else res = "not ok" end return res' 1 test_key "not ok"
How can I correctly check the type of a Redis key in Lua scripting?
I've found the answer here: Using the TYPE command inside a Redis / Lua Script. The short answer is that in Lua scripts redis.call("TYPE", key) returns not a string but a Lua table with the key "ok", which holds the string value of the type. So to check the type of the key you should compare like this: if redis.call("TYPE", key)["ok"] == "string" for example: 127.0.0.1:6379> EVAL 'local type = redis.call("TYPE", KEYS[1])["ok"]; local res; if type == "string" then res = "ok" else res = "not ok" end return res' 1 test_key "ok"
76392221
76392309
I am currently using Redis as a vector database and was able to get a similarity search going with 3 dimensions (the dimensions being latitude, longitude, and timestamp). The similarity search is working but I would like to weigh certain dimensions differently when conducting the search. Namely, I would like the similarity search to prioritize the timestamp dimension when conducting the search. How would I go about this? Redis does not seem to have any built-in feature that does this. I turn each set of lat, long, and time coordinates into bytes that can be put into the vector database with the following code. Note that vector_dict stores all the sets of lat, long, and timestamp: p = client.pipeline(transaction=False) for index in data: # create hash key key = keys[index] # create hash values item_metadata = data[index] # copy all metadata item_key_vector = np.array(vector_dict[index]).astype(np.float32).tobytes() # convert vector to bytes p.hset(key, mapping=item_metadata) # add item to redis using hash key and metadata I then conduct the similarity search using the HNSW index here: def create_hnsw_index(redis_conn, vector_field_name, number_of_vectors, vector_dimensions=3, distance_metric='L2', M=100, EF=100): redis_conn.ft().create_index([ VectorField(vector_field_name, "HNSW", {"TYPE": "FLOAT32", "DIM": vector_dimensions, "DISTANCE_METRIC": distance_metric, "INITIAL_CAP": number_of_vectors, "M": M, "EF_CONSTRUCTION": EF}) ]) I talked with others and they said it is a math problem that deals with vector normalization. I'm unsure how to get started with this though in code and would like some guidance.
How can I prioritize dimensions in a Redis vector similarity search?
You can re-weight the vector to make certain dimensions longer than others. You're using an L2 distance metric. That uses the standard Pythagorean theorem to calculate distance: dist = sqrt((x1-x2)**2 + (y1-y2)**2 + (z1-z2)**2) Imagine you multiplied every Y value, in both your query and your database, by 10. That would also multiply the difference between Y values by a factor of 10. The new distance function would effectively be this: dist = sqrt((x1-x2)**2 + (10*(y1-y2))**2 + (z1-z2)**2) dist = sqrt((x1-x2)**2 + 100*(y1-y2)**2 + (z1-z2)**2) ...which makes the Y dimension matter 100 times more than the other dimensions. So if you want the dimension in position 2 to matter more, you could do this: item_key_vector = np.array(vector_dict[index]) item_key_vector[2] *= 10 item_key_vector_bytes = item_key_vector.astype(np.float32).tobytes() The specific amount to multiply by depends on how much you want the timestamp to matter. Remember that you need to multiply your query vector by the same amount.
76388796
76390832
I am trying to build a library using this C++ code: #include <pybind11/pybind11.h> namespace py = pybind11; PyObject* func() { return Py_BuildValue("iii", 1, 2, 3); } PYBIND11_MODULE(example, m) { m.def("func", &func); } But when I tried to run this Python 3 code: import example print(example.func()) It gives me following error: TypeError: Unregistered type : _object The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 1, in <module> TypeError: Unable to convert function return value to a Python type! The signature was () -> _object Why it is happening and how to fix it?
Why does pybind11 not recognize PyObject* as a Python object, and how can I fix it?
So, first off, what are you trying to do with your function? Making a raw python API call (Py_BuildValue, https://docs.python.org/3/c-api/arg.html#c.Py_BuildValue) is strongly discouraged, that is why you are using PyBind11, to handle that for you. What are you trying to do with this line? Py_BuildValue("iii", 1, 2, 3) It looks like you are trying to return a tuple of 3 ints. Perhaps something like this would work instead: py::tuple func() { return py::make_tuple(1, 2, 3); } With that said, I think the error is from Python not understanding what a PyObject* is. So you will need to expose the PyObject to Python using a py::class. I'm not sure if that makes sense though, I would need to test this. From the docs I linked, it looks like Py_BuildValue might return a tuple. In which case I can suggest wrapping the return value in a py::tuple.
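Putting that suggestion together, here is a minimal, untested sketch of the whole module (same module and function names as in the question):

#include <pybind11/pybind11.h>
namespace py = pybind11;

// Return a pybind11 type instead of a raw PyObject*; pybind11 then knows
// how to convert it to Python and manages the reference counting for you.
py::tuple func() {
    return py::make_tuple(1, 2, 3);
}

PYBIND11_MODULE(example, m) {
    m.def("func", &func);
}

With that, import example; print(example.func()) should print (1, 2, 3).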
76391978
76392317
I'm trying to shade a scatter plot based on time of day, but my data is in datetime format. Getting either time of day or hours past 00:00 would work. I tried to get just the time from datetime, but I got the following error. TypeError: float() argument must be a string or a number, not 'datetime.time' Ideally, I'll have a scatterplot shaded based on time of day. I initially tried this (also added .values to the end of the dt.time to see if it would help. It didn't). x = dfbelow[1:].wind_speed.values y = above_aligned["WS"].values plt.scatter(x, y, s=20, c=above_aligned["timestamp"].dt.time, cmap='YlOrBr') plt.xlabel("Below Canopy Wind Speed (m/s)") plt.ylabel("Above Canopy Wind Speed (m/s)") plt.title("Above vs. Below Canopy Wind Speeds (m/s)") plt.colorbar(label="Below Canopy Wind Direction") plt.show But it understandably can't shade based off of a datetime form.
How to get time from datetime into a string/number for a colormap
I figured it out and it was actually very simple. Instead of above_aligned["timestamp"].dt.time, I just used above_aligned["timestamp"].dt.hour.
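For reference, a small sketch of how that looks with the variables from the question. dt.hour yields an integer from 0 to 23 that the colormap can handle directly; something like dt.hour + dt.minute/60 would give sub-hour resolution if needed.

hours = above_aligned["timestamp"].dt.hour   # integer hour of day, 0-23
plt.scatter(x, y, s=20, c=hours, cmap='YlOrBr')
plt.xlabel("Below Canopy Wind Speed (m/s)")
plt.ylabel("Above Canopy Wind Speed (m/s)")
plt.colorbar(label="Hour of day")
plt.show()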
76390504
76390835
Is it possible to access a public domain e.g. foo.bar.com from an AWS ECS task running on a private subnet? It is convenient for me that the task is running on a private subnet since it can be easily accessible from other ECS tasks running on the same private subnet (same AZ, VPC and region). I am reading different contradicting opinions: Some say that this is not possible and I must configure a NAT Gateway or NAT instance Others say that this is possible as long as you specify the right outbound rules for your security groups. But outbound rules can only be configured using IP ranges and not specific domains. What is actually the case here?
AWS ECS Task on private subnet connectivity
You have to create a NAT Gateway and add a route in the route table for your subnet. Without a NAT Gateway (or NAT instance, but those are considered obsolete), you cannot connect to the public internet from your private subnets.
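A rough sketch of those steps with the AWS CLI; all IDs below are placeholders. The NAT Gateway itself must be created in a public subnet of the same VPC, and the default route is added to the private subnet's route table.

# allocate an Elastic IP for the NAT Gateway
aws ec2 allocate-address --domain vpc

# create the NAT Gateway in a PUBLIC subnet
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-EXAMPLE

# send outbound internet traffic from the private subnet through it
aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE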
76391183
76392333
In the following PowerShell code, functions hour00_numbers and hour01_numbers create two arrays, that contain numbers matching a specific regex. function hour00_numbers { foreach ($event in $filter00) { $hour00Numbers = @() $hour00Numbers += [regex]::Matches($event.Source.Summary -replace ' ', '9 *[45679]( *[0-9]){6}').Value $hour00Numbers } } function hour01_numbers { foreach ($event in $filter01) { $hour01Numbers = @() $hour01Numbers += [regex]::Matches($event.Source.Summary -replace ' ', '9 *[45679]( *[0-9]){6}').Value $hour01Numbers } } I then check if both arrays are not empty, and if they are not, I check to see if they contain matching numbers. function matching_numbers01 { if ($null -ne $(hour00_numbers) -and $null -ne $(hour01_numbers)) { $script:matchingNumbers01 = Compare-Object $(hour00_numbers) $(hour01_numbers) -IncludeEqual | Where-Object { $_.SideIndicator -eq '==' } | Select-Object -ExpandProperty InputObject } else { $matchingNumbers01 = $null } if ($null -ne $(hour00_numbers)) { Write-Host "hour00 numbers: $(hour00_numbers)" -ForegroundColor Green } else { Write-Host "hour00 is empty" -ForegroundColor Yellow } if ($null -ne $(hour01_numbers)) { Write-Host "hour01 numbers: $(hour01_numbers)" -ForegroundColor Green } else { Write-Host "hour01 is empty" -ForegroundColor Yellow } if ($null -ne $matchingNumbers01) { Write-Host "matching numbers are: $matchingNumbers01" -ForegroundColor Yellow } else { Write-Host "no matching numbers found" -ForegroundColor Green } } I'm doing something wrong here though, because I will frequently get a message saying Cannot bind argument to parameter 'ReferenceObject' because it is null If either of the two arrays is empty, it shouldn't compare anything. It should only compare the arrays if both are not empty. What I'm I doing wrong here?
compare two PowerShell arrays only if they both contain values
Aside from the problem you've reported, your code has a few other issues: Use of global variables to pass values into (and out of) functions - e.g. $filter00, $filter01 and $matchingNumbers01. It's hard to reason about these in isolation because they depend on program state that exists outside their scope. Duplication of code as a result - e.g. the functions hour00_numbers and hour01_numbers are identical except for which global variable they reference - this makes maintenance harder. Performance issues / race condition by repeatedly calling the hour00_numbers and hour01_numbers functions inside matching_numbers01 instead of caching the results into internal variables. So, here's a refactor that uses function parameters and output streams to address these problems. It also uses some of the feedback in the comments, and it might just accidentally fix your root problem as a side-effect as well... # define a function to filter the events, give it a meaningful name, and pass # the raw event list as a parameter so we only need one version of the function function Get-HourEvents { param( $Events ) foreach($event in $Events) { [regex]::Matches($event.Source.Summary -replace ' ', '9 *[45679]( *[0-9]){6}').Value } } # pass the filtered events in as parameters '$Left' and '$Right', # and return the results in the output stream so we don't need to # read global variables. this makes it easier to reason about, and # a lot easier to test with frameworks like pester function Get-MatchingEvents { param( $Left, $Right ) if ($null -eq $Left) { Write-Host "Left is empty" -ForegroundColor Yellow } else { Write-Host "Left numbers: $Left" -ForegroundColor Green } if ($null -eq $Right) { Write-Host "Right is empty" -ForegroundColor Yellow } else { Write-Host "Right numbers: $Right" -ForegroundColor Green } if (($null -eq $Left) -or ($null -eq $Right)) { $results = $null } else { $results = Compare-Object $Left $Right -IncludeEqual | Where-Object { $_.SideIndicator -eq '==' } | Select-Object -ExpandProperty InputObject } if ($null -eq $results) { Write-Host "no matching numbers found" -ForegroundColor Green } else { Write-Host "matching numbers are: $results" -ForegroundColor Yellow $results } } # use the functions like this: # get the filtered event lists $hour00Events = Get-HourEvents -Events $filter00 $hour01Events = Get-HourEvents -Events $filter01 # match the event lists and assign the result to a variable $matchingNumbers01 = Get-MatchingEvents -Left $hour00Events -Right $hour01Events I also took the liberty of inverting some of your if( ... -ne ... ) { ... } else { ... } as I find it easier to think about when the else will trigger if the if uses a "positive" expression, otherwise the else condition becomes a double-negative, but that's just personal style - ymmv. If you're still seeing the same issue with this version of your code feel free to post a comment below...
76388777
76390883
I am trying to retrieve the meta data for a given links (url). I have implemented the following steps: $url = "url is here"; $html = file_get_contents($url); $crawler = new Crawler($html); // Symfony library $description = $crawler->filterXPath("//meta[@name='description']")->extract(['content']); Doing so, I manage to retrieve the meta data for some urls but not for all. Some urls, the file_get_contents($url) function returns special characters like (x1F‹\x08\x00\x00\x00\x00\x00\x04\x03ì½}{ãÆ‘/ú÷øSÀœ'\x1E)! ‘z§¬qlÇI..........) that is why I could not retrieve the meta data. Notice that, I am using the same website for $url values but passing different slugs (different blog urls like https://www.example.com/blog-1). Attempts: I used these functions mb_convert_encoding and mb_detect_encoding I made sure all urls I have passed are accessible through the browser. Any thought, why I am getting special characters when I am calling file_get_contents function, and some time getting correct html format?
PHP function file_get_contents($url) returns special characters
I have solved the issue by adding the following parameters to file_get_contents functions: private const EMBED_URL_APPEND = '?tab=3&object=%s&type=subgroup'; private const EMBED_URL_ENCODE= 'CM_949A11_1534_1603_DAG_DST_50_ÖVRIGT_1_1'; $urlEncoded= sprintf($url.self::EMBED_URL_APPEND, rawurlencode(self::EMBED_URL_ENCODE)); $html = file_get_contents($urlEncoded);
76390329
76390859
I have data plotted as points and would like to add density plots to the graph. The marginal plot solutions from ggExtra or other packages are not giving the freedom that I'd like and so want to generate the density plot at the same time as the ggplot. df = data.frame(x = rnorm(50, mean = 10), y = runif(50, min = 10, max = 20), id = rep(LETTERS[1:5], each = 10)) ggppp = ggplot(data = df, mapping = aes(x, y, color = id)) + geom_point() + theme_bw() ggppp + geom_density(mapping = aes(y = y, col = id), inherit.aes = FALSE, bounds = c(-Inf, Inf)) + geom_density(mapping = aes(x = x, col = id), inherit.aes = FALSE, ) Is there a way to move the density plots to other values of x or y position (like moving the density lines to the tip of the arrow in the image below)?
Change x or y position of density plot
you can shift the position with position_nudge: ## using your example objects: ggppp + geom_density(mapping = aes(y = y , col = id), position = position_nudge(x = 12), inherit.aes = FALSE ) + geom_density(mapping = aes(x = x, col = id), position = position_nudge(y = 20), inherit.aes = FALSE )
76391826
76392344
I'm trying to create a Snackbar in my Android Java application. It has an action, displayed as Cancel, that should stop (or return) the parent method. I tried this: snackbar.setAction("Cancel", v -> { return; }); But Android Studio told me that 'return' is unnecessary as the last statement in a 'void' method showing me that this was returning from the lambda expression, not it's parent method. I also tried super.return;, but that caused a whole lot of errors.
Lambda expression returns parent method
NB: This answer applies generally to UI frameworks/java, not android in particular. What you want to do here makes fundamentally no sense. The setAction method is telling the snackbar object: Hey, whenever the "Cancel" event occurs, run this code. Don't run it now, run it later. Maybe a year later, when the user bothers to click that cancel button, when this method is long gone - this method where I am informing you what to do when the user clicks that Cancel button, NOT actually doing the act that must be done when the user clicks that, maybe they never click it, after all! Hence, 'return from this method' is fundamentally nonsensical. How? That method's probably already done ages ago. After telling the object referred to by the snackbar variable what to do when the user presses cancel (which is a near instant operation, of course, and requires no user interaction or anything else that would take more than nanoseconds), this method will keep going. It sounds like you are a bit confused about how to set up these actions. Taking a bit of a guess on how this works, there are in broad strokes two obvious things you might do: Have a dialog with OK and Cancel buttons Nothing should happen until a user clicks one of the two buttons. Once they do, it happens and they can no longer stop that short of hard-killing your app or force-shutting down the phone. In this case, you should have one call to .setAction("Cancel", ...) and one call to .setAction("OK", ....) and that's that. The cancel button just needs to dismiss the dialog and do nothing else. Have a dialog with perhaps a progress bar and a cancel button As in, the act occurs right now but will take a while, and you want to offer the user a button to tell your application to abort what it is doing. The dialog is explaining (via a progress bar, spinner, or simply with text) that the act is occurring now and whatever that act may be (say, send an email), once it is done, this dialog dismisses itself (and it is at that point no longer possible to cancel it; possibly it can be undone, but that'd be a separate UI action). In this case: You can't just 'kill' a thread mid-stride in java. After all, just hard-killing one process of an app can (and often will) leave stuff in undefined state. Create some concurrency-capable mechanism (in basis, reading the same field from 2 different threads is just broken, because CPUs are mostly self-contained units and forcing them to communicate every change across all cores means there is pretty much no point to multiple cores in the first place, hence, software does not guarantee that writes to a field are seen by other threads unless you explicitly spell it out). Then use that to communicate to the running code that it should abort. that code should then dismiss the dialog. The general process for the 'cancel' action code is: Disable the button, both actually (it should no longer invoke the cancel handler), and visually (so the user knows their click is now being handled). Set the concurrency capable flag. Might take the form of interrupting a thread, or setting some AtomicBoolean. that's it. Do nothing else. Leave the dialog up as normal. The code that does the slow thing (say, sending mail) should: Set up a system that listens to that flag to abort as soon as it can. How to do this is tricky and depends on what, precisely, you are doing. Once it sees that flag being up / catches InterruptedException, aborts the act, undoes whatever half-work it has done if it can, and it dismisses the dialog entirely. 
This then lets the user know the act of aborting it has succeeded.
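For the second pattern, here is a minimal sketch of the cancel wiring; the worker-side method names (doOneChunkOfWork, undoPartialWork, dismissDialogOnUiThread) and totalChunks are hypothetical placeholders for whatever your slow task actually does.

import java.util.concurrent.atomic.AtomicBoolean;

final AtomicBoolean cancelled = new AtomicBoolean(false);

snackbar.setAction("Cancel", v -> {
    v.setEnabled(false);      // stop further clicks and show the click was handled
    cancelled.set(true);      // signal the worker to abort; do nothing else here
});

// on the worker thread doing the slow task:
new Thread(() -> {
    for (int i = 0; i < totalChunks && !cancelled.get(); i++) {
        doOneChunkOfWork(i);              // hypothetical unit of work
    }
    if (cancelled.get()) {
        undoPartialWork();                // hypothetical cleanup of the half-done work
    }
    dismissDialogOnUiThread();            // hypothetical; must hop back to the UI thread
}).start();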
76380894
76391234
I'm working on a GWT application using Domino/UI, Nalukit and the Javascript plugin FullCalendar v6. I made a custom popup to modify and delete an event but when I validate the form, my calendar refreshes and all the event in my week view disappear. Demo of the app running I used the native function gotoDate to change to view of the calendar to the event's modified date. Here's a sample from my controller's render and refreshCalendar methods : @Override public void render() { // Styling related lines omitted FcOptionOverrides mainOptions = new FcOptionOverrides(); mainOptions.locale = "fr"; mainOptions.initialView = "timeGridWeek"; mainOptions.views = new FcViewOptions(); mainOptions.views.timeGridWeek = new FcView(); mainOptions.views.timeGridWeek.weekends = true; mainOptions.views.timeGridWeek.slotMinTime = "07:00:00"; mainOptions.views.timeGridWeek.slotMaxTime = "22:00:00"; mainOptions.eventSources = new FcEventSources(); mainOptions.eventSources.events = this.getController()::onEventNeedMain; mainOptions.datesSet = this::onDatesSet; mainOptions.eventDidMount = this::onEventDidMount; mainOptions.dateClick = this::onBigCalendarDateClick; mainCalendar = new FcCalendar(mainAgendaContainer, mainOptions); FcOptionOverrides smallOptions = new FcOptionOverrides(); smallOptions.locale = "fr"; smallOptions.initialView = "dayGridMonth"; smallOptions.height = "330px"; smallOptions.aspectRatio = 1.0f; smallOptions.eventSources = new FcEventSources(); smallOptions.eventSources.events = this.getController()::onEventNeedSmall; smallOptions.dateClick = this::onSmallCalendarDateClick; smallCalendar = new FcCalendar(smallAgendaContainer, smallOptions); // radio button for displaying week-ends RayflexRadio displayWeekendRadio = new RayflexRadio("weekends", "Week-ends", "Afficher", "Masquer"); displayWeekendRadio.style("position:absolute; top:18px; right:230px;"); displayWeekendRadio.addChangeHandler(event -> { if (displayWeekendRadio.getValue()) { DominoElement.of(mainAgendaContainer).removeCss(RxResource.INSTANCE.gss().calendar_main_no_weekends()); DominoElement.of(mainAgendaContainer).css(RxResource.INSTANCE.gss().calendar_main()); } else { DominoElement.of(mainAgendaContainer).removeCss(RxResource.INSTANCE.gss().calendar_main()); DominoElement.of(mainAgendaContainer).css(RxResource.INSTANCE.gss().calendar_main_no_weekends()); } displayWeekEnds = displayWeekendRadio.getValue(); refreshCalendar(); }); displayWeekendRadio.setValue(displayWeekEnds); card.getBody().appendChild(displayWeekendRadio.element()); initElement(card.element()); } @Override public void refreshCalendar() { if (lastModifiedEvent != null) { Date beginDate = lastModifiedEvent.getBeginDate(); JsDate jsDate = new JsDate(1900 + beginDate.getYear(), beginDate.getMonth(), beginDate.getDate()); mainCalendar.gotoDate(jsDate); smallCalendar.gotoDate(jsDate); } mainCalendar.refetchEvents(); } Here's my wrapper class for FullCalendar's JS functions : package com.alara.rayflex.ui.client.calendar; import elemental2.core.JsDate; import elemental2.dom.Element; import jsinterop.annotations.JsType; @JsType(isNative = true, namespace = "FullCalendar", name="Calendar") public class FcCalendar { public FcCalendar(Element root) {} public FcCalendar(Element root, FcOptionOverrides optionOverrides) {} public native void render(); public native void updateSize(); public native void gotoDate(JsDate start); public native JsDate getDate(); public native void setOption(String name, String value); public native void setOption(String name, int value); public native 
void select(JsDate date); public native void refetchEvents(); public native void addEventSource(FcOnEventNeed needEvent); public native void changeView(String viewName, JsDate dateOrRange); } I tried to force the native javascript functions refetchEvents with gotoDate but I got the same result. Then I tried using addEventSource to restore my events but still no success there. I'm expecting to rerender my calendar with the events of the week which the event has been modified.
FullCalendar in GWT: How to refresh calendar while keeping events
Solved my issue by loading the events in a callback and refetching them in my refreshCalendar method : @Override public void refreshCalendar() { if (lastModifiedEvent != null) moveToLastEventModifiedDate(); else mainCalendar.refetchEvents(); } /** * Use the last modified event to move the calendar to its begin date * then fetch the events to load them again in the calendar component */ public void moveToLastEventModifiedDate() { Date eventDate = lastModifiedEvent.getBeginDate(); JsDate jsEventDate = new JsDate(1900 + eventDate.getYear(), eventDate.getMonth(), eventDate.getDate()); Date previousMonday = DateUtils.getPreviousMonday(eventDate); Date nextMonday = DateUtils.getNextMonday(eventDate); JsDate jsDateBegin = new JsDate(1900 + previousMonday.getYear(), previousMonday.getMonth(), previousMonday.getDate()); JsDate jsDateEnd = new JsDate(1900 + nextMonday.getYear(), nextMonday.getMonth(), nextMonday.getDate()); mainCalendar.gotoDate(jsEventDate); smallCalendar.select(jsEventDate); FcEventFetchInfo info = new FcEventFetchInfo(); info.start = jsDateBegin; info.end = jsDateEnd; this.getController().onEventNeedMain(info, success -> { mainCalendar.setOption("events", fcOnEventNeed); mainCalendar.refetchEvents(); }, failure -> { ErrorManager.displayServerError("event.list", failure.message, getController().getEventBus()); }); lastModifiedEvent = null; }
76390427
76390861
My goal is to evaluate a basic symbolic equation such as ad(b + c) with my own custom implementaions of multiply and addition. I'm trying to use lambdify to translate the two core SymPy functions (Add and Mul) with my own functions, but I cant get them recognised. At this stage I'm just trying to get Add working. The code I have is below. from sympy import * import numpy as np x, y = symbols('x y') A = [1,1] B = [2,2] def addVectors(inA, inB): print("running addVectors") return np.add(inA, inB) # Test vector addition print(addVectors(A,B)) # Now using lambdify f = lambdify([x, y], x + y, {"add":addVectors}) print(f(A, B)) # <------- expect [3,3] and addVectors to be run a second time # but I get the same as this print(A + B) which yields running addVectors [3 3] [1, 1, 2, 2] [1, 1, 2, 2] I was expecting the + operator in the expression to be evaluated using the custom addVectors function. Which would mean the results looks like this. running addVectors [3 3] running addVectors [3 3] [1, 1, 2, 2] I tried several different configurations of the lambdify line and these all give the same original result. f = lambdify([x, y], x + y, {"add":addVectors}) f = lambdify([x, y], x + y, {"Add":addVectors}) f = lambdify([x, y], x + y, {"+":addVectors}) f = lambdify([x, y], Add(x,y), {"Add":addVectors}) f = lambdify([x, y], x + y) To confirm I have the syntax correct I used an example closer to the documentation and replaced the symbolic cos function with a sin implementation. from sympy import * import numpy as np x = symbols('x') def mysin(x): print('taking the sin of', x) return np.sin(x) print(mysin(1)) f = lambdify(x, cos(x), {'cos': mysin}) f(1) which works as expected and yields taking the sin of 1 0.8414709848078965 taking the sin of 1 0.8414709848078965 Is it even possible to implement my own Add and Mul functions using lambdify? I suspect my trouble is Add (and Mul) are not SymPy 'functions'. The documentation refers to them as an 'expression' and that somehow means they dont get recognised for substitution in the lambdify process. Some links that I've been reading: SymPy cos SymPy Add SymPy Lambdify Any pointers would be appreciated. Thanks for reading this far. EDIT: Got a more general case working This uses a combination of the lambdify and replace functions to replace Add and Mul. This example then evaluates an expression in the form ad(b + c), which was the goal. from sympy import * import numpy as np w, x, y, z = symbols('w x y z') A = [3,3] B = [2,2] C = [1,1] D = [4,4] def addVectors(*args): result = args[0] for arg in args[1:]: result = np.add(result, arg) return result def mulVectors(*args): result = args[0] for arg in args[1:]: result = np.multiply(result, arg) return result expr = w*z*(x + y) print(expr) expr = expr.replace(Add, lambda *args: lerchphi(*args)) expr = expr.replace(Mul, lambda *args: Max(*args)) print(expr) f = lambdify([w, x, y, z], expr, {"lerchphi":addVectors, "Max":mulVectors}) print(f(A, B, C, D)) print(mulVectors(A,D,addVectors(B,C))) which yields w*z*(x + y) Max(w, z, lerchphi(x, y)) [36 36] [36 36] A few things to note with this solution: Using the replace function you can replace a type with a function (type -> func). See the docs. The function I replace the types with have to accept multiple inputs because each type in the expression may have more than two arguments (like multiply in the example above). I only found 3 functions that accept *args as an input. These were Min, Max and lerchphi. SymPy simplifies Min and Max functions since Max(x, Min(x, y)) = x. 
That meant I couldn't use Min and Max together. So I used lerchphi and Max. These functions are arbitary as I'll be translating their implementation to a custom function in the next step. However, this means I can only replace two. Final step was to translate lerchphi and Max to the custom functions.
Can SymPy Lambdify translate core functions like Add and Mul?
With sympy, addition is an operation. Hence, I'm not sure if it's possible to achieve your goal by passing in custom modules... However, at the heart of lambdify there is the printing module. Essentially, lambdify uses some printer to generate a string representation of the expression to be evaluated. If you look at lambdify's signature, you'll see that it's possible to pass a custom printer. Given a printer class, the addition with + is performed by the _print_Add method. One way to achieve your goal is to modify this method of the NumPyPrinter. from sympy.printing.lambdarepr import NumPyPrinter import inspect class MyNumPyPrinter(NumPyPrinter): def _print_Add(self, expr, **kwargs): str_args = [self.doprint(t) for t in expr.args] return "add(*[%s])" % ", ".join(str_args) f = lambdify([x, y], x + y, printer=MyNumPyPrinter) print(inspect.getsource(f)) # def _lambdifygenerated(x, y): # return add(*[x, y]) print(f(A, B)) # [3 3] Note that I've no idea what implication this might creates. That's for you to find out...
76388602
76391241
Xero developer API not authorizing. I generated the access_token from the endpoint (Postman screenshot for the access token). When I try to get a Xero item I am getting an error (screenshot for the item endpoint). This endpoint should give the item with identifier 96d14376-4b75-4b4a-8fd3-b1caab075ab3 in the response. Also, when I try this one in the Xero API Explorer after login, it works fine.
Why am I unable to retrieve a Xero item by identifier with a valid access token from Postman?
Looking at the logs relating to the instance id in the error screen shot, the access token does not include the accounting.settings scope. Please can you go through the OAuth 2.0 process from the very beginning, making sure the scope is in the authorisation call. When you add a new scope to a call you need to go through the whole authorisation process from scratch to update the access token. When you get a new access token you can decode it to check the scopes before you use it to make sure that you have the scopes you need. You can use jwt.io to check this if you wish
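If you prefer checking the token locally rather than pasting it into jwt.io, a small Python sketch along these lines should work; the "scope" claim name is an assumption about how the access token's payload is laid out.

import base64, json

def jwt_claims(access_token):
    payload = access_token.split(".")[1]          # a JWT is header.payload.signature
    payload += "=" * (-len(payload) % 4)          # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

claims = jwt_claims(access_token)
print(claims.get("scope"))   # should include accounting.settings if the consent covered it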
76388769
76391245
I have a use case where I have to call two different methods in a reactive pipeline (Java 8) on a POST API call. 1st method: inserts data into a master table and returns the pk of that insertion. 2nd method: inserts data into a mapping table, using the pk received from the 1st method. I tried to do that with Mono.zip, but that did not work, as zip calls both methods simultaneously and cannot pass the 1st method's output to the 2nd method's input.
Call Different Method in reactive pipeline
You can easily do it using the map or flatMap operator, depending on what type of repository you have - reactive or not reactive. Here is examples for both cases: public class YourService { private final YourRepository repository; private final YourReactiveRepository reactiveRepository; void doAction() { Mono.fromCallable(() -> repository.saveToMainTable("main data")) .map(mainTableId -> repository.saveToSecondaryTable(mainTableId, "secondary data")) .subscribeOn(Schedulers.boundedElastic()) .subscribe(); } void doActionWithReactiveRepository() { reactiveRepository.saveToMainTable("main data") .flatMap(mainTableId -> reactiveRepository.saveToSecondaryTable(mainTableId, "secondary data")) .subscribe(); } interface YourRepository { int saveToMainTable(String someData); boolean saveToSecondaryTable(int mainTableId, String someData); } interface YourReactiveRepository { Mono<Integer> saveToMainTable(String someData); Mono<Boolean> saveToSecondaryTable(int mainTableId, String someData); } } You can read more about map here and about flatMap here
76384653
76390867
I need to sort a dataframe based on the order of the second row. For example: import pandas as pd data = {'1a': ['C', 3, 1], '2b': ['B', 2, 3], '3c': ['A', 5, 2]} df = pd.DataFrame(data) df Output: 1a 2b 3c 0 C B A 1 3 2 5 2 1 3 2 Desired output: 3c 2b 1a 0 A B C 1 5 2 3 2 2 3 1 So the columns have been order based on the zero index row, on the A, B, C. Have tried many sorting options without success. Having a quick way to accomplish this would be beneficial, but having granular control to both order the elements and move a specific column to the first position would be even better. For example move "C" to the first column. Something like make a list, sort, move and reorder on list. mylist = ['B', 'A', 'C'] mylist.sort() mylist.insert(0, mylist.pop(mylist.index('C'))) Then sorting the dataframe on ['C', 'A', 'B'] outputting 1a 3c 2b 0 C A B 1 3 5 2 2 1 2 3
Sort pandas dataframe columns on second row order
With the help of pyjedy and Stingher, I was able to resolve this issue. One of the problems was due to my input. The input consisted of lists instead of dictionaries, so I needed to transform it. As a result, I had indexes for rows and across the top for columns. Consequently, selecting elements from the list required obtaining the index. import pandas as pd def search_list_for_pattern(lst, pattern): for idx, item in enumerate(lst): if pattern in item: break return idx data = [['1a', 'B', 2, 3], ['2b', 'C', 3, 1], ['3c', 'A', 5, 2]] df = pd.DataFrame(data).transpose() print(df) # 0 1 2 # 0 1a 2b 3c # 1 B C A # 2 2 3 5 # 3 3 1 2 # Get the second row and convert it to a list second_row = df.iloc[1, :].tolist() print(second_row) # ['B', 'C', 'A'] # Find the index of the column you want to move to the first position target_column = search_list_for_pattern(second_row, "C") print(target_column) # 1 # Sort the list sorted_columns = sorted(range(len(second_row)), key=lambda k: second_row[k]) print(sorted_columns) # [2, 0, 1] # Move the target column to the first position sorted_columns.remove(df.columns.get_loc(target_column)) sorted_columns.insert(0, df.columns.get_loc(target_column)) # Reorder the columns of the DataFrame based on the sorted list df = df.iloc[:, sorted_columns] print(df) # 1 2 0 # 0 2b 3c 1a # 1 C A B # 2 3 5 2 # 3 1 2 3 df.to_excel('ordered.xlsx', sheet_name='Sheet1', index=False, header=False)
76392302
76392348
I'm having a problem to combine list of lists of tuples, and the main problem comes from different sizes of those tuples. I'm also trying to do it "pythonic" way, which isn't very easy. What I actually have is a list of objects, having coordinates given in tuple. Objects (let's say: lines) always have start and end as (x1,y1) and (x2,y2), they also usually have some "path". The problem is that "path" is sometimes empty and in general number of points on the path is different. start=[ (3,5), (23,50), (5,12), (51,33), (43,1)] end = [(23,19), (7,2), (34,4), (8,30), (20,10)] path=[[(10,7),(14,9),(18,15)], [], [(15,7)], [(42,32),(20,31)], [(30,7)]] Expected result should look like this: whole_path = [[(3,5),(10,7),(14,9),(18,15),(23,19)], [(23,50),(7,2)], [(5,12),(15,7),(34,4)], [(51,33),(42,32),(20,31),(8,30)], [(43,1),(30,7),(20,10)]] I was trying to use zip - it works well for similar size items in start/end/paths lists but not with their differences. Promising solutions might come with use path.insert(0,start) and path.extend([end]), but I couldn't make that working, there is also an option to put that into two loops, but it doesn't look well and... it's not very "pythonic". So: any suggestions would be nice.
Combine list of lists of tuples with different lengths
A solution with zip and *-unpacking of the variable-length path element is reasonably clean: from pprint import pprint start=[ (3,5), (23,50), (5,12), (51,33), (43,1)] end = [(23,19), (7,2), (34,4), (8,30), (20,10)] path=[[(10,7),(14,9),(18,15)], [], [(15,7)], [(42,32),(20,31)], [(30,7)]] whole_path = [[s, *p, e] for s, p, e in zip(start, path, end)] pprint(whole_path) giving the required: [[(3, 5), (10, 7), (14, 9), (18, 15), (23, 19)], [(23, 50), (7, 2)], [(5, 12), (15, 7), (34, 4)], [(51, 33), (42, 32), (20, 31), (8, 30)], [(43, 1), (30, 7), (20, 10)]]
76382247
76392356
I'm new to Blade and tried to use anonymous components for sections that I will use frequently. The problem is that when I'm trying to pass down data to the component, it won't show anything. Here is my Code: Controller: public function edit(Workingtime $workingtime) { $user = User::find(Auth::id()); $workingtime_array = $user->getWorkingtimesOfThisDay($workingtime->id); return view('workingtime-edit', compact('workingtime_array', 'workingtime')); } workingtime-edit.blade.php: <p>Overview of Workingtimes</p> @if(count($workingtime_array) > 0) <x-workingtime-timeline :workingtimeArray = "$workingtime_array"/> @endif workingtime-timeline.blade.php (component): @props(['workingtimeArray']) @php if(isset($workingtimeArray) AND count ($workingtimeArray) > 0){ $start = strtotime($workingtimeArray[0]->time_begin); $end = strtotime($workingtimeArray[count($workingtimeArray)-1]->time_end); } @endphp @foreach ($workingtimeArray as $worktime) <p>Start: {{ $worktime->time_begin }} </p> <p>End: {{ $worktime->time_end }} </p> @endforeach; When I don't use variables and @props, it shows me the content of the component. I tried to change the <x-workingtime-timeline :workingtimeArray = "$workingtime_array"/> line but nothing worked. Things I tried: <x-workingtime-timeline :workingtimeArray = "$workingtime_array"> </x-workingtime-timeline> <x-workingtime-timeline workingtimeArray = "$workingtime_array"/>
Blade: anonymous component doesn't show when passing data
In workingtime-edit.blade.php you need to remove the spaces around the = for the attribute you are setting: <x-workingtime-timeline :workingtimeArray="$workingtime_array"/> I think this is a limitation of Laravel's parsing of component attributes, since this is not a general requirement of HTML attributes.
76388523
76391330
I am trying to add a new category to an email item using Exchange WEB Services but when I run the code, existing categories disappear and the category I added becomes the only category. For example an email has categories X and Y and after I add my category Z to this mail item, X and Y disappears and only category for the mail becomes Z Any help is much appreciated. Thanks in advance Please note that I am running this code inside a software called Blue Prism a low code no code scripting software and it is near impossible to implement third party libraries etc due to corporate chain of approvals and stuff When I do it manually on the outlook 2019 it works the way I intended Here is the method I am using The DLL That uses this code is Microsoft.Exchange.WebServices.dll //Here I initialize exchange object and authenticate ExchangeService exchange = null; void ConnectEWS(string ewsURL, string _exchangeVersion, string username, string password) { ServicePointManager.ServerCertificateValidationCallback = delegate {return true;} ; try { exchange = new ExchangeService(); exchange.Url = new Uri(ewsURL); exchange.Credentials = new WebCredentials(username,password); } catch (Exception ex) { throw new Exception("failed to initialize exchange object!!="+ex.Message); } } //here is my void method that adds a new category to an email item void addCategoryToMail(string msgid, string category) { if(exchange==null) { throw new Exception("exchange object is null!!"); } EmailMessage message = EmailMessage.Bind(exchange,msgid, BasePropertySet.IdOnly ); if(message.Categories.Contains(category)==false) { message.Categories.Add(category); //message.Update(ConflictResolutionMode.AlwaysOverwrite); message.Update(ConflictResolutionMode.AutoResolve); } }
How can I prevent existing categories from disappearing when I add a new category to an email item using Exchange Web Services in C#?
The line EmailMessage message = EmailMessage.Bind(exchange,msgid, BasePropertySet.IdOnly ); only requests the message id. You need to request categories. See if BasePropertySet.FirstClassProperties brings categories in. If not, explicitly request categories.
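In practice that means binding with a PropertySet that includes the Categories property, so the existing list is loaded before you append to it. An untested sketch based on the code in the question:

// Request the categories along with the id so Update() does not wipe them
PropertySet props = new PropertySet(BasePropertySet.IdOnly, ItemSchema.Categories);
EmailMessage message = EmailMessage.Bind(exchange, msgid, props);

if (!message.Categories.Contains(category))
{
    message.Categories.Add(category);
    message.Update(ConflictResolutionMode.AutoResolve);
}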
76389988
76390898
I'm building a react app with vite and I'm deploying it with docker. When deploying the container, npm run build runs but does nothing, even doing it inside the container manually won't work. I get this output but the actual build does not happen: $ npm run build > [email protected] build > tsc && vite build $ Thing is if I run tsc && vite build it works just fine. Same goes with npm run dev or any other script. Here's more info: Tried with both Node Alpine and Debian (I'm actively developing in Alpine). Tried creating the Node user and setting permissions accordingly since I read that npm might behave weird if ran by the root user. $ npm config get ignore-scripts false These are my docker files FROM node:20-bullseye WORKDIR /app/frontend/ COPY package.json package-lock.json* /app/frontend/ RUN npm install RUN npm install -g vite RUN npm install -g typescript COPY . . RUN chown -R node:node /app/frontend/ USER node CMD ["npm", "run", "build"] docker-compose.yaml: version: "3.4" name: fisy-prod services: backend: volumes: - static:/static env_file: - ./.env build: context: ./backend dockerfile: Dockerfile.prod ports: - "8000:8000" frontend: build: context: ./frontend dockerfile: Dockerfile.prod volumes: - frontend:/app/frontend/dist nginx: build: context: ./nginx volumes: - static:/static - frontend:/var/www/frontend ports: - "5173:80" depends_on: - backend - frontend volumes: static: frontend: I scouted the web all morning without really finding anything that might point to the issue. Would appreciate it if someone that has experienced this could help.
npm run does not work as expected on a docker container
Your Compose file has volumes: blocks that replace all of the image content with content from named volumes. This apparently works the first time you run the container, since Docker copies content from the image into an empty volume; but if you rebuild your application, the volume will not get updated, and the old content in the volume will replace your image content. One important part of this problem is that your frontend container doesn't really do anything. Its CMD is to build the application, but that will immediately exit. This doesn't need to be a long-running container and it doesn't need to be listed in your Compose file. What I'd do here is to use a multi-stage build to compile the frontend, then COPY all of the content into an Nginx-based image. For example: FROM node:20-bullseye AS frontend WORKDIR /app/frontend/ # Note, run with the project root as the build context COPY frontend/package.json frontend/package-lock.json ./ ... RUN npm run build FROM nginx:1.25 COPY nginx/default.conf.tmpl /etc/nginx/default.conf.tmpl COPY backend/static/ /static/ COPY --from=frontend /app/frontend/dist/ /var/www/frontend/ Now the final image contains all of the pieces you need, so you don't need to try to share files between containers; that means you can remove all of the volumes:. The final image build also encapsulates the front-end build, so you don't need a separate frontend build-only container. This leaves you with a Compose file version: "3.8" services: backend: env_file: - ./.env build: context: ./backend dockerfile: Dockerfile.prod ports: - "8000:8000" nginx: build: context: . # needed to COPY files from subdirectories dockerfile: nginx/Dockerfile ports: - "5173:80" depends_on: - backend
76387981
76391438
My file has a .z.json-extension and can be found here. The content of the file is "7ZQ7a8MwFIX/i2Yn3IekK3nv3EIztCkdQslgSpySuFPwf6+USMZZbiGzFyODPs7RuY+LeTmeu6E79qb9uJhNd9ifh93hx7SGgHgFbkVhg7Z10kJcB+YAIFvTmKd+OHX7s2kvBvPnddgNv+nXPPeb0+7rO115M+1KPGFj3vMJbTptTeuAx8aQAnkrcIMi51OGMENWg4TjDQrRuwJRghBUqTBJoZ9T2qu8g6IVyVOhJFOaw5QF1SwYZlmg18OoWkw4dxgUipArVJSszalrWQgGW/yJDUUJM6VmIVCoCBhmxSK1xN75SoU7inWt0k1RarXoSqmdYf2UYOCi5TMlao1j1UKZ584P9bvVHbrq8NpZk0PnVIdUe5dcHROXKK9nyK7Oyd1wiZqG5ykNmlNBn5PaUSnDML1rHJt/dg3CGoTFW1p2zbJrll2z7JrHd83n+Ac=" Apparently, the most common type of file that contains the .z-file extension is compressed Unix files. How do I translate / uncompress this file to its human-understandable version? I have no additional information.
How to read and translate a filename.z.json file
That is Base-64 encoded raw deflate data. You need to decode the Base-64 to binary, and then use zlib to inflate it. The result is 2278 bytes of json.
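In Python, for example, that could look like this; the filename is a placeholder, and the negative wbits value tells zlib to expect a raw deflate stream with no zlib/gzip header.

import base64, json, zlib

b64 = open("filename.z.json").read().strip().strip('"')   # the file holds one quoted Base-64 string
raw = base64.b64decode(b64)
text = zlib.decompress(raw, -zlib.MAX_WBITS)               # negative wbits = raw deflate, no header
data = json.loads(text)
print(data)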
76391843
76392387
Take the following series: import pandas as pd s = pd.Series([1, 3, 2, [1, 3, 7, 8], [6, 6, 10, 4], 5]) I want to convert this series into the following array: np.array([ [ 1., 1., 1., 1.], [ 3., 3., 3., 3.], [ 2., 2., 2., 2.], [ 1., 3., 7., 8.], [ 6., 6., 10., 4.], [ 5., 5., 5., 5.] ]) Currently, I am using this logic: import numpy as np import pandas as pd from itertools import zip_longest # Convert series and each element in series into list ls = list(map(lambda v: v if isinstance(v, list) else [v], s.to_list())) # Cast list elements to 2d numpy array with longest list element as column number a = np.array(list(zip_longest(*ls, fillvalue=np.nan))).T # Convert to DataFrame, apply 'ffill' row-wise and re-convert to numpy array a = pd.DataFrame(a).fillna(method="ffill", axis=1).values My solution is not really satisfying me, especially the last line where I convert my array to a DataFrame and then back to an array again. Does anyone know a better alternative? You can assume that all list elements have the same length.
Cast pandas series containing list elements to a 2d numpy array
Assuming all list elements have the same length (as indicated), what about using masks and numpy.repeat? s2 = pd.to_numeric(s, errors='coerce') m = s2.isna() out = np.repeat(s2.to_numpy()[:, None], 4, axis=1) out[m] = np.array(s[m].tolist()) Output: array([[ 1, 1, 1, 1], [ 3, 3, 3, 3], [ 2, 2, 2, 2], [ 1, 3, 7, 8], [ 6, 6, 10, 4], [ 5, 5, 5, 5]])
76385124
76390930
I have converted a sklearn logistic regression model object to an ONNX model object and noticed that ONNX scoring takes significantly longer to score compared to the sklearn.predict() method. I feel like I must be doing something wrong b/c ONNX is billed as an optimized prediction solution. I notice that the difference is more noticeable with larger data sets so I created X_large_dataset as as proxy. from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split import datetime from sklearn.linear_model import LogisticRegression from skl2onnx import convert_sklearn from skl2onnx.common.data_types import FloatTensorType import numpy as np import onnxruntime as rt # create training data iris = load_iris() X, y = iris.data, iris.target X_train, X_test, y_train, y_test = train_test_split(X, y) # fit model to logistic regression clr = LogisticRegression() clr.fit(X_train, y_train) # convert to onnx format initial_type = [('float_input', FloatTensorType([None, 4]))] onx = convert_sklearn(clr, initial_types=initial_type) with open("logreg_iris.onnx", "wb") as f: f.write(onx.SerializeToString()) # create inference session from onnx object sess = rt.InferenceSession( "logreg_iris.onnx", providers=rt.get_available_providers()) input_name = sess.get_inputs()[0].name # create a larger dataset as a proxy for large batch processing X_large_dataset = np.array([[1, 2, 3, 4]]*10_000_000) start = datetime.datetime.now() pred_onx = sess.run(None, {input_name: X_large_dataset.astype(np.float32)})[0] end = datetime.datetime.now() print("onnx scoring time:", end - start) # compare to scoring directly with model object start = datetime.datetime.now() pred_sk = clr.predict(X_large_dataset) end = datetime.datetime.now() print("sklearn scoring time:", end - start) This code snippet on my machine shows that sklearn predict runs in less than a second and ONNX runs in 18 seconds.
ONNX performance compared to sklearn
Simply converting a model to ONNX does not mean that it will automatically have a better performance. During conversion, ONNX tries to optimize the computational graph for example by removing calculations which do not contribute to the output, or by fusing separate layers into a single operator. For a generic neural network consisting of convolution, normalization and nonlinearity layers, these optimizations often result in a higher throughput and better performance. So considering you are exporting just LogisticRegression, most likely both sklearn and the corresponding onnx implementations are already very optimized and the conversion will not lead to any performance gain. As to why the InferenceSession.run is 20x slower than sklearn.predict X_large_dataset is a np.int64 array over 300 MB in size. Casting it with astype when creating the input dictionary inside of run creates a new 150 MB array to which everything is copied. This obviously shouldn't be counted towards the model execution time. onnxruntime has quite a bit of memory management overhead when executing models with dynamic inputs for the first time. Subsequent calls to run with inputs of the same shape should finish a lot faster.
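To see the effect, you could cast once outside the timed section and make one warm-up call before measuring. A sketch reusing the variables from the question:

X32 = X_large_dataset.astype(np.float32)      # cast once, outside the timing

sess.run(None, {input_name: X32[:10]})        # warm-up run so one-time setup isn't measured

start = datetime.datetime.now()
pred_onx = sess.run(None, {input_name: X32})[0]
end = datetime.datetime.now()
print("onnx scoring time (pre-cast, warmed up):", end - start)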
76392330
76392389
I have a Node table with a ParentID column, which refers to a NodeID that is its parent. I want to make sure no Node can refer to itself (i.e. a Node's ParentID cannot be its own NodeID), so I tried adding a check constraint CHECK(NodeID != ParentID). However, I got this error: Error Code: 3818. Check constraint 'node_chk_1' cannot refer to an auto-increment column. I also couldn't add the ParentID as a foreign key of Node. Using MySQL, how can I make sure that there are no new records where NodeID = ParentID?
How can I make sure a SQL record's column doesn't refer to its own primary key?
Use a trigger: mysql> create table node ( id int auto_increment primary key, parentid int, foreign key (parentid) references node (id) ); mysql> delimiter ;; mysql> create trigger no_self_ref after insert on node for each row begin if NEW.parentid = NEW.id then signal sqlstate '45000' set message_text = 'no self-referencing hierarchies'; end if; end;; mysql> delimiter ; Note that it must be an AFTER trigger, because the auto-increment id has not yet been generated in a BEFORE trigger. Demo: mysql> insert into node values (1, null); Query OK, 1 row affected (0.00 sec) mysql> insert into node values (2, 2); ERROR 1644 (45000): no self-referencing hierarchies mysql> insert into node values (2, 1); Query OK, 1 row affected (0.01 sec) You will also need a similar AFTER UPDATE trigger, if you want to prevent users from updating the row and setting the parentid to the same value as the id in the same row. An alternative solution would be to make the primary key non-auto-increment. You will have to specify every id value in your INSERT statements, instead of letting them be auto-incremented. But this will allow you to use a CHECK constraint.
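That companion AFTER UPDATE trigger could look something like this (same signal, just fired on updates):

delimiter ;;
create trigger no_self_ref_update after update on node
for each row
begin
    if NEW.parentid = NEW.id then
        signal sqlstate '45000' set message_text = 'no self-referencing hierarchies';
    end if;
end;;
delimiter ;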
76380833
76391584
ActivityResultLauncher<PickVisualMediaRequest> pickImage = registerForActivityResult(new ActivityResultContracts.PickVisualMedia(), uri -> { if(uri==null) { //URI always NULL here } else { //Never reached } }); pickImage.launch(new PickVisualMediaRequest.Builder() .setMediaType(ActivityResultContracts.PickVisualMedia.ImageOnly.INSTANCE) .build()); I have tried the same code in a new project and it is returning a valid URI. But PhotoPicker returns a NULL URI in my project. Any idea what could be the issue here?
Android PhotoPicker returns NULL URI
Apparently, PhotoPicker (or ActivityResultLauncher in general) fails if onActivityResult() is also Overridden in the Activity. I removed that and now PhotoPicker is returning a valid URI.
76384047
76390948
I know this question has been asked many times however, none of the solutions have worked for me. I am trying to dockerize my angular app and node js backend using nginx. What I have done is that I have created a docker-compose file. It has three services and nginx. 1: space-frontend 2: space-api 3: mongodb I am calling frontend and backend by their service name in nginx conf file like http://space-frontend:80 and http://space-api:3000 but I am getting a error in logs [emerg] 1#1: host not found in upstream "space-api" in /etc/nginx/nginx.conf:23 and frontend is working fine. I am unable to understand where I am missing something. For reference, My frontend docker file FROM node:16-alpine AS builder WORKDIR /app COPY . . RUN npm i && npm run build --prod FROM nginx:alpine RUN mkdir /app COPY --from=builder /app/dist/Space-Locator/browser /app COPY nginx.conf /etc/nginx/nginx.conf My frontend nginx conf events { worker_connections 1024; } http { include /etc/nginx/mime.types; server { listen 80; server_name localhost; root /app; location / { index index.html; try_files $uri $uri/ /index.html; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } } Backend api docker file FROM node:14-alpine as build-step RUN mkdir -p /usr/app WORKDIR /usr/app COPY package.*json /usr/app/ RUN npm install COPY . /usr/app/ EXPOSE 3000 CMD [ "npm", "start" ] my docker-compose file version: "3.8" services: reverse_proxy: image: nginx:1.17.10 container_name: reverse_proxy depends_on: - space-frontend - space-api - database volumes: - ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf ports: - 80:80 space-frontend: container_name: space-frontend image: space-frontend build: context: ./space-frontend ports: - 4000:80 space-api: container_name: space-api hostname: space-api image: space-api build: context: ./space-api ports: - 3000:3000 links: - database environment: MONGO_INITDB_DATABASE: spaceLocator MONGODB_URI: mongodb://db:27017 depends_on: - database volumes: - ./db-data/mongo/:/data/database networks: - node-network database: container_name: db image: mongo restart: on-failure ports: - 27017:27017 volumes: - ./mongodb:/data/database networks: - node-network volumes: dbdata6: networks: node-network: external: true driver: bridge and my nginx.conf file for reverse proxy events { worker_connections 1024; } http { server { listen 80; server_name 127.0.0.1; root /usr/share/nginx/html; index index.html index.htm; location ~* \.(eot|ttf|woff|woff2)$ { add_header Access-Control-Allow-Origin *; } location / { proxy_pass http://space-frontend:80; proxy_set_header X-Forwarded-For $remote_addr; } location /api { proxy_pass http://space-api:3000; proxy_set_header X-Forwarded-For $remote_addr; } } } Can somebody please point to the direction where I am doing wrong? I have tried adding hostname but they are as same as container-name.
host not found in upstream with nginx docker compose
So, the problem was that we have to run every container on the same network. networks: - node-network I had to define this in every service. Then it ran without any problem. Thanks to everyone who helped :)
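A sketch of how the relevant pieces of the docker-compose.yaml from the question end up looking, showing only the network-related lines:

services:
  reverse_proxy:
    networks:
      - node-network
  space-frontend:
    networks:
      - node-network
  space-api:
    networks:
      - node-network
  database:
    networks:
      - node-network

networks:
  node-network:
    driver: bridge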
76389362
76390959
Can't update username of existing RDS cluster. Updating the masterUsername and masterUserPassword in the serverless.yml file for my PostgreSQL database results in the password being updated but not the username, i.e. I can only access the db with the old username and new password. I am using the Serverless Framework to manage AWS assets. It's an RDS cluster. Everything builds successfully and I can access the data (using the old username). Waiting for over 30 minutes doesn't have an effect.
How can I update the master username and password of an existing AWS RDS cluster using the serverless-framework?
I understand you want to update the MasterUsername property on an existing RDS instance that you created with Serverless Framework. Serverless Framework uses AWS CloudFormation. According to the AWS CF docs, the MasterUsername property cannot be updated on an existing RDS instance. Any attempt will result in a resource replacement. If you still want to update the MasterUsername, your only option is to destroy the RDS instance and create a new one.
76392138
76392427
I would like to open a font file and modify eg the usWeightClass and save it in a different location – using Rust: https://github.com/googlefonts/fontations I am already able to load a file, read data from the font, but I am not able to modify and save it. I expect, that it should work somehow with WriteFont (but I don't know how): https://docs.rs/write-fonts/0.5.0/write_fonts/trait.FontWrite.html# Any help would be much appreciated. Thanks a lot in advance, Olli Cargo.toml [dependencies] write-fonts = "0.5.0" read-fonts = "0.3.0" font-types = "0.1.8" main.rs use read_fonts::{FontRef, TableProvider}; fn main() { pub static FONT_DATA: &[u8] = include_bytes!("/path/to/a/font/somefont.ttf"); let font = FontRef::new(FONT_DATA).unwrap(); let mut os2 = font.os2().expect("missing OS/2 table"); println!("os2.version: {}", os2.version()); println!("os2.us_weight_class: {}", os2.us_weight_class()); let mut name = font.name().expect("missing name table"); for item in name.name_record() { println!("name_id: {:?}", item.name_id); println!("language_id: {:?}", item.language_id); let data = item.string(name.string_data()).unwrap(); println!("String entry: {:?}", data.chars().collect::<String>()); }; }
How do I modify a font and save it with Rust 'write-fonts'?
I haven't worked with this library or with fonts in general yet, but after a little digging in the documentation, this seems to work: use read_fonts::{FontRead, FontRef, TableProvider, TopLevelTable}; use write_fonts::{dump_table, tables::os2::Os2, FontBuilder}; fn main() { { let font_data = std::fs::read("Roboto-Regular.ttf").unwrap(); let font = FontRef::new(&font_data).unwrap(); let os2 = font.os2().expect("missing OS/2 table"); println!("os2.us_weight_class: {}", os2.us_weight_class()); // Create a new font builder let mut builder = FontBuilder::default(); // Iterate over tables and add them to the builder for table in font.table_directory.table_records() { let tag = table.tag(); println!(" Adding table {tag} ..."); let font_data = font .table_data(tag) .expect(&format!("Table {tag} not found!")); let mut raw_data = font_data.as_ref().to_owned(); // Modify the OS2 tag if tag == Os2::TAG { let mut os2 = Os2::read(font_data).unwrap(); os2.us_weight_class = 420; raw_data = dump_table(&os2).unwrap(); } builder.add_table(tag, raw_data); } // Build the font let data = builder.build(); std::fs::write("Roboto-Regular-modified.ttf", data).unwrap(); } { // Load the font again and check if it got modified let font_data = std::fs::read("Roboto-Regular-modified.ttf").unwrap(); let font = FontRef::new(&font_data).unwrap(); let os2 = font.os2().expect("missing OS/2 table"); println!("os2.us_weight_class: {}", os2.us_weight_class()); } } os2.us_weight_class: 400 Adding table GDEF ... Adding table GPOS ... Adding table GSUB ... Adding table OS/2 ... Adding table cmap ... Adding table cvt ... Adding table fpgm ... Adding table gasp ... Adding table glyf ... Adding table hdmx ... Adding table head ... Adding table hhea ... Adding table hmtx ... Adding table loca ... Adding table maxp ... Adding table name ... Adding table post ... Adding table prep ... os2.us_weight_class: 420
76392283
76392432
I see syntax for if/pass but cannot find syntax for if/elif/pass: List comprehension with else pass Basically if condition: something elif condition: something else: pass
List comprehension in Python for if, elif, pass?
Use an if at the end of the list comprehension (a filter) in order to completely exclude an item from the list. For example, if you wanted to do this as a list comprehension: result = [] for i in range(10): if i % 2: result.append("two") elif i % 3: result.append("three") else: pass # note that you do not actually need this "else" at all you could do: result = ["two" if i % 2 else "three" for i in range(10) if i % 2 or i % 3]
76390270
76391014
VS Code error highlighting can add quite a bit of clutter during code edits with its squiggles and colorization. It is a useful alarm, but once the problem is noticed it can become a nuisance that even makes it difficult to read the text and fix the problem. I don't think this feature was intended to get in the way. Is there a convenient way to toggle VS Code error highlighting on and off without going through settings.json and playing with a bunch of flags? Conclusion... The feature does not exist. The workaround is to install and use the When File VS Code extension to mask the error noise with less visible colors when the file has unsaved edits, as clarified below.
How do I temporarily turn off VS Code error highlighting?
You can use the extension When File. Add this setting to your workspace settings.json: "whenFile.change": { "byLanguageId": { "rust": { "whenDirty": { "editorError.foreground": "#ff000020", "editorWarning.foreground": "#ff000020", "editorInfo.foreground": "#ff000020" } } } }
76388820
76391781
With the C# Unity container, we can configure a container (a list of dependencies and object-creation policies) from a file. This is very useful when I want to change a dependency (the backend for an interface) in the runtime environment without having to upload my project again; in a Dockerized setup, I do not need to rebuild the Docker image. Now I want to migrate to the ASP.NET 6 built-in DI. How can I provide the same functionality with the built-in DI? For example, something like <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Unity.Configuration"/> </configSections> <unity xmlns="http://schemas.microsoft.com/practices/2010/unity"> <alias alias="IProductService" type="UnityExample.Service.IProductService, UnityExample.Service" /> <containers> <container name="Service"> <register type="IProductService" mapTo="ProductService"/> </container> </containers> </unity> </configuration>
How can the ASP.NET built-in DI read dependencies from a file at runtime?
While you could have a hard time making the new service provider compatible with Unity, consider switching to Autofac. Autofac can replace the built-in .NET Core container, and it also supports loading configuration from a file.
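As an illustration, a rough sketch of how that could look in an ASP.NET 6 app using the Autofac.Extensions.DependencyInjection and Autofac.Configuration packages (the file name and registrations are examples, not a definitive setup; check the Autofac docs for the exact JSON schema):

// Program.cs (requires the Autofac, Autofac.Configuration and
// Autofac.Extensions.DependencyInjection NuGet packages)
using Autofac;
using Autofac.Configuration;
using Autofac.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;

var builder = WebApplication.CreateBuilder(args);
builder.Host.UseServiceProviderFactory(new AutofacServiceProviderFactory());
builder.Host.ConfigureContainer<ContainerBuilder>(containerBuilder =>
{
    // Load component registrations from a JSON file at runtime
    var config = new ConfigurationBuilder().AddJsonFile("autofac.json").Build();
    containerBuilder.RegisterModule(new ConfigurationModule(config));
});

// autofac.json (example, mirroring the Unity XML above)
{
  "components": [
    {
      "type": "UnityExample.Service.ProductService, UnityExample.Service",
      "services": [ { "type": "UnityExample.Service.IProductService, UnityExample.Service" } ]
    }
  ]
}

Because the interface-to-implementation mapping lives in autofac.json, you can swap the backend for an interface at deployment time without rebuilding the Docker image, similar to the Unity XML configuration.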
76392223
76392509
I have problem working with unicode strings, wchar_t type. In my program I'm getting input as wchar_t and I'm supposed to XOR it and write it to file and read it back and print it to command line. This is my code, const unsigned int XORKey = 0xff; size_t XORit(const wchar_t* value, wchar_t* xorred) { size_t length = wcslen(value); for (int i = 0; i < length; i++) xorred[i] = ((char)(value[i] ^ XORKey)); return length; } int main() { setlocale(LC_ALL, "en_US.UTF-8"); // XOR it wchar_t sample[] = { L"TEST1自己人" }; int samplelen = wcslen(sample); printf("%ls", sample); printf("\n"); printf("Plain:\n\t"); for (int i = 0; i < samplelen; i++) { printf("%02X ", sample[i]); } printf("\n"); wchar_t* xorred = (wchar_t*)malloc(samplelen); if (xorred == NULL) return -1; memset(xorred, 0, samplelen); XORit(sample, xorred); printf("XOR'ed\n\t"); for (int i = 0; i < samplelen; i++) { printf("%02X ", xorred[i]); } printf("\n"); // Write to file FILE* fpW = _wfopen(L"logon.bin", L"wb"); fwrite(xorred, sizeof(wchar_t), samplelen, fpW); fclose(fpW); // Read from file FILE* fpR = fopen("logon.bin", "rb"); fseek(fpR, 0, SEEK_END); int filesize = ftell(fpR); wchar_t* unxorred = (wchar_t*)malloc(filesize + sizeof(wchar_t)); if (unxorred == NULL) return -1; rewind(fpR); fread(unxorred, sizeof(wchar_t), filesize, fpW); fclose(fpW); printf("Reading\n\t"); for (int i = 0; i < samplelen; i++) { printf("%02X ", unxorred[i]); } printf("\n"); printf("Un-XOR'ed\n\t"); for (int i = 0; i < samplelen; i++) { printf("%02X ", unxorred[i] ^ XORKey); } printf("\n"); printf("%ls", unxorred); return 0; } The values I'm reading back from the file doesn't match to what I wrote! :( I'm new to C programming, I did my best to get this right, please forgive any noob mistakes in my understanding of the issue and implementing it. 
Thanks in advance I fixed the code based on the comments, const unsigned int XORKey = 0xff; size_t XORit(const wchar_t* value, wchar_t* xorred) { size_t length = wcslen(value); for (int i = 0; i < length; i++) xorred[i] = (value[i] ^ XORKey); return length; } int main() { setlocale(LC_ALL, "en_US.UTF-8"); // XOR it wchar_t sample[] = { L"Test1自己人自己人A" }; int samplelen = wcslen(sample); printf("%ls", sample); printf("\n"); printf("Plain:\n\t"); for (int i = 0; i < samplelen; i++) { printf("%0*X ", (int)sizeof(wchar_t), (unsigned int)sample[i]); } printf("\n"); wchar_t* xorred = (wchar_t*)malloc(samplelen); if (xorred == NULL) return -1; memset(xorred, 0, samplelen); XORit(sample, xorred); printf("XOR'ed\n\t"); for (int i = 0; i < samplelen; i++) { printf("%0*X ", (int)sizeof(wchar_t), (unsigned int)xorred[i]); } printf("\n"); // Write to file FILE* fpW = _wfopen(L"logon.bin", L"wb"); fwrite(xorred, sizeof(wchar_t), samplelen, fpW); fclose(fpW); // Read from file FILE* fpR = fopen("logon.bin", "rb"); fseek(fpR, 0, SEEK_END); int filesize = ftell(fpR); int whattoread = (filesize / sizeof(wchar_t)); wchar_t* ReadXOR = (wchar_t*)malloc(filesize + 1); if (ReadXOR == NULL) return -1; memset(ReadXOR, 0, filesize + 1); rewind(fpR); fread(ReadXOR, sizeof(wchar_t), whattoread, fpR); fclose(fpW); printf("Reading\n\t"); for (int i = 0; i < samplelen; i++) { printf("%0*X ", (int)sizeof(wchar_t), (unsigned int)ReadXOR[i]); } printf("\n"); wchar_t* unxorred = (wchar_t*)malloc(whattoread); if (unxorred == NULL) return -1; memset(unxorred, 0, whattoread); printf("Un-XOR'ed\n\t"); for (int i = 0; i < whattoread; i++) { unxorred[i] = ReadXOR[i] ^ 0xff; printf("%0*X ", (int)sizeof(wchar_t), (unsigned int)unxorred[i]); } printf("\n"); printf("%ls\n", unxorred); printf("%ls\n", sample); return 0; The output looks like below, Test1自己人自己人A Plain: 54 65 73 74 31 81EA 5DF1 4EBA 81EA 5DF1 4EBA 41 XOR'ed AB 9A 8C 8B CE 8115 5D0E 4E45 8115 5D0E 4E45 BE Reading AB 9A 8C 8B CE 8115 5D0E 4E45 8115 5D0E 4E45 BE Un-XOR'ed 54 65 73 74 31 81EA 5DF1 4EBA 81EA 5DF1 4EBA 41 Test1自己人自己人A粘蹊?言?萉? Test1自己人自己人A When I modified the unicode string I got wrong text in the output!
How can I XOR wchar_t input, write it to a file, and read it back in C?
Here ... printf("Un-XOR'ed\n\t"); for (int i = 0; i < samplelen; i++) { printf("%02X ", unxorred[i] ^ XORKey); } printf("\n"); ... you print out the decoded values without storing them. When you then ... printf("%ls", unxorred); ... you are printing the data as read back from the file, not the decoded string corresponding to the previously-printed code sequence. Additionally, here ... int filesize = ftell(fpR); wchar_t* unxorred = (wchar_t*)malloc(filesize); if (unxorred == NULL) return -1; rewind(fpR); fread(unxorred, sizeof(wchar_t), filesize, fpW); ... you are attempting to read back sizeof(wchar_t) * filesize bytes from the file, which is more than it actually contains and more than you have allocated (unless sizeof(wchar_t) is 1, which is possible, but unlikely, and is anyway not your case). You do not allocate space for a (wide) string terminator or add one to the read-back data, yet you pass it to printf() as if it were a wide string. This is erroneous. Your approach to printing out the bytes of the wide strings is flawed. The conversion specifier X requires a corresponding unsigned int argument, and wchar_t might neither be the same as unsigned int nor promote to unsigned int via the default argument promotions. Additionally, you get varying-length outputs because your wchar_t is at least 16 bits wide, and your 02 only guarantees 2 hex digits. Better would be, for example: for (int i = 0; i < samplelen; i++) { printf("%0*X ", (int) sizeof(wchar_t), (unsigned int) xorred[i]); } The * for a width says that the minimum field width will be passed as an argument of type int. The casts match the arguments to the types required by the format.
76390597
76391091
I have a few dataframes, let's call them rates, sensors with "session_start", "value_timestamp" (timestamps) and "value" (float) columns. I want to add an "elapsed" column, which I've done successfully using the following code: def add_elapsed_min(df): df["elapsed"] = ( df["value_timestamp"] - df["session_start"].min() ).dt.total_seconds() / 60.0 for df in [rates, sensors]: add_elapsed_min(df) Now, this code does work, and the elapsed column is correct. The minor problem is that I keep getting the SettingWithCopyWarning. I've tried changing the code as suggested by the warning, tried adding a contextlib.suppress, but can't seem to remove this warning. This makes me think I must be breaking some idiomatic way to do this. So I'm wondering: If you want to add a calculated column to many dataframes at once, how are you supposed to do this?
What's the idiomatic way to add a calculated column to multiple data frames?
Although I cannot reproduce your warning in Pandas 1.5.3, and considering that using .loc does not suppress the warning for you, one other option is to use df.insert instead. def add_elapsed_min(df): elapsed = (df["value_timestamp"] - df["session_start"].min()).dt.total_seconds() / 60.0 df.insert(df.shape[1], 'elapsed', elapsed) for df in [rates, sensors]: add_elapsed_min(df)
76388425
76392087
I was reading the documentation here https://developer.shopware.com/docs/guides/plugins/plugins/storefront/add-custom-javascript but cannot find any mention of how to use environment variables in a JavaScript plugin. I tried putting a .env file at the root of my plugin in custom/apps/MyPlugin/.env and capturing the variables via process.env, but it falls back to my default values... Is there a way to handle a .env file when you run bash bin/build-storefront.sh? Thanks.
Javascript plugin and environment variables
Here's one way to do it... First create a custom webpack.config.js at src/Resources/app/storefront/build. Also in that build directory run npm install dotenv, as you will need it to parse your .env file. Your webpack.config.js could then look like this: const fs = require('fs'); const dotenv = require(`${__dirname}/node_modules/dotenv/lib/main.js`); module.exports = () => { // given you `.env` is located directly in your plugin root dir const contents = fs.readFileSync(`${__dirname}/../../../../../.env`); const config = dotenv.parse(contents); return { externals: { myPluginEnv: JSON.stringify(config), } }; }; Then inside your sources you can import myPluginEnv. import * as myPluginEnv from 'myPluginEnv'; /* * Example .env content: * FOO=1 * BAR=1 */ console.log(myPluginEnv); // {FOO: '1', BAR: '1'}
76388652
76392101
I have an interface on PyQt5, in which, by pressing the Start button, a graph is built, which I made using PyQtGraph. Three lines are drawn on the chart. Green and blue have a y-axis range of 0 to 200, while red has a range of 0 to 0.5. How can I make different scales for different lines, as well as designate two value scales on the Y-axis - from 0 to 200 and from 0 to 0.5? from pyqtgraph import PlotWidget import pyqtgraph from PyQt5 import QtCore from PyQt5.QtCore import Qt, QThread, QTimer, QObject, pyqtSignal, QTimer from PyQt5.QtWidgets import QHBoxLayout, QMainWindow, QPushButton, QVBoxLayout, QWidget, QApplication import sys import random import numpy as np def get_kl_test(): choices = [50, 50, 50, 51, 51, 51, 52, 52, 52] list = [random.choice(choices) for i in range(11)] return list def get_iopd_test(): choices = [0.05, 0.1, 0.15, 0.2, 0.25, 0.3] return random.choice(choices) class Graph(PlotWidget): def __init__(self): super().__init__() self.setBackground('white') self.addLegend() self.showGrid(x=True, y=True) self.setYRange(0, 255, padding=0) class ReadingWorker(QObject): update_graph = pyqtSignal(list, list, list, list) def __init__(self): super().__init__() self.time_from_start = 0 self.time_values = [] self.green_values = [] self.blue_values = [] self.red_values = [] def run(self): self.read() self.update_time() def read(self): ipd_values = get_kl_test() iopd_value = get_iopd_test() self.green_values.append(ipd_values[0]) self.blue_values.append(ipd_values[1]) self.red_values.append(iopd_value) self.time_values.append(self.time_from_start) self.update_graph.emit( self.green_values, self.blue_values, self.red_values, self.time_values) QTimer.singleShot(1000, self.read) def update_time(self): self.time_from_start += 1 QTimer.singleShot(1000, self.update_time) class MainWindow(QMainWindow): def __init__(self): super().__init__() self.central_widget = QWidget(self) self.setGeometry(50, 50, 1300, 700) self.setCentralWidget(self.central_widget) self.layout_main_window = QVBoxLayout() self.central_widget.setLayout(self.layout_main_window) self.layout_toolbar = QHBoxLayout() self.layout_toolbar.addStretch(1) self.btn_start = QPushButton("Старт") self.btn_start.clicked.connect(self.start) self.layout_toolbar.addWidget(self.btn_start) self.layout_main_window.addLayout(self.layout_toolbar) self.graph = Graph() self.layout_main_window.addWidget(self.graph) self.setup_graphs() self.window_size = 50 def start(self): self.reading_thread = QThread(parent=self) self.reading_widget = ReadingWorker() self.reading_widget.moveToThread(self.reading_thread) self.reading_widget.update_graph.connect(self.draw_graph) self.reading_thread.started.connect(self.reading_widget.run) self.reading_thread.start() def setup_graphs(self): pen_ipd_1 = pyqtgraph.mkPen(color='green', width=4) pen_ipd_2 = pyqtgraph.mkPen(color='blue', width=4, style=Qt.DashDotLine) pen_iopd = pyqtgraph.mkPen(color='red', width=4, style=Qt.DashLine) self.line_ipd_1 = pyqtgraph.PlotCurveItem([], [], pen=pen_ipd_1, name='1') self.line_ipd_2 = pyqtgraph.PlotCurveItem([], [], pen=pen_ipd_2, name='2') self.line_iopd = pyqtgraph.PlotCurveItem([], [], pen=pen_iopd, name='3') self.graph.plotItem.addItem(self.line_ipd_1) self.graph.plotItem.addItem(self.line_ipd_2) self.graph.plotItem.addItem(self.line_iopd) @QtCore.pyqtSlot(list, list, list, list) def draw_graph(self, ipd_1_values, ipd_2_values, iopd_values, time_values): x, y = self.line_ipd_1.getData() x = np.append(x, time_values[-1]) self.line_ipd_1.setData(y=np.append(y, 
ipd_1_values[-1]), x=x) _, y = self.line_ipd_2.getData() self.line_ipd_2.setData(y=np.append(y, ipd_2_values[-1]), x=x) _, y = self.line_iopd.getData() self.line_iopd.setData(y=np.append(y, iopd_values[-1]), x=x) if (len(x) > 0 and x[-1] -x [0] > self.window_size): self.graph.plotItem.setXRange(x[-1]-self.window_size, x[-1]) if __name__ == '__main__': app = QApplication(sys.argv) app.setStyle('Fusion') main_window = MainWindow() main_window.show() sys.exit(app.exec_())
Different scales for PyQtGraph chart axis in PyQt5
Check out the MultiplePlotAxes.py example. To add another axis on the right change the setup_graphs function and add update_views: def setup_graphs(self): pen_ipd_1 = pyqtgraph.mkPen(color='green', width=4) pen_ipd_2 = pyqtgraph.mkPen(color='blue', width=4, style=Qt.DashDotLine) pen_iopd = pyqtgraph.mkPen(color='red', width=4, style=Qt.DashLine) self.line_ipd_1 = pyqtgraph.PlotCurveItem([], [], pen=pen_ipd_1, name='1') self.line_ipd_2 = pyqtgraph.PlotCurveItem([], [], pen=pen_ipd_2, name='2') self.line_iopd = pyqtgraph.PlotCurveItem([], [], pen=pen_iopd, name='3') self.graph.plotItem.addItem(self.line_ipd_1) self.graph.plotItem.addItem(self.line_ipd_2) self.vb = pyqtgraph.ViewBox() self.pi = self.graph.plotItem self.pi.showAxis('right') self.pi.scene().addItem(self.vb) self.pi.getAxis('right').linkToView(self.vb) self.vb.setXLink(self.pi) self.update_views() self.pi.vb.sigResized.connect(self.update_views) self.vb.addItem(self.line_iopd) self.pi.setYRange(0,200) self.vb.setYRange(0,0.5) self.graph.plotItem.legend.addItem(self.line_iopd, self.line_iopd.name()) def update_views(self): self.vb.setGeometry(self.pi.vb.sceneBoundingRect()) self.vb.linkedViewChanged(self.pi.vb, self.vb.XAxis) Result: Edit to simulatiously scale both y-axes (there might also be something build in, padding is really annoing here): def setup_graphs(self): pen_ipd_1 = pyqtgraph.mkPen(color='green', width=4) pen_ipd_2 = pyqtgraph.mkPen(color='blue', width=4, style=Qt.DashDotLine) pen_iopd = pyqtgraph.mkPen(color='red', width=4, style=Qt.DashLine) self.line_ipd_1 = pyqtgraph.PlotCurveItem([], [], pen=pen_ipd_1, name='1') self.line_ipd_2 = pyqtgraph.PlotCurveItem([], [], pen=pen_ipd_2, name='2') self.line_iopd = pyqtgraph.PlotCurveItem([], [], pen=pen_iopd, name='3') self.graph.plotItem.addItem(self.line_ipd_1) self.graph.plotItem.addItem(self.line_ipd_2) self.vb = pyqtgraph.ViewBox() self.pi = self.graph.plotItem self.pi.showAxis('right') self.pi.scene().addItem(self.vb) self.pi.getAxis('right').linkToView(self.vb) self.vb.setXLink(self.pi) self.update_views() self.pi.vb.sigResized.connect(self.update_views) self.vb.addItem(self.line_iopd) self.pi.setYRange(0,255, padding=0) self.vb.setYRange(0,0.5, padding=0) self.align = None self.update_secondary() self.pi.vb.sigYRangeChanged.connect(self.update_secondary) self.graph.plotItem.legend.addItem(self.line_iopd, self.line_iopd.name()) def update_views(self): self.vb.setGeometry(self.pi.vb.sceneBoundingRect()) self.vb.linkedViewChanged(self.pi.vb, self.vb.XAxis) def update_secondary(self): if self.align is None: self.align = [self.pi.getAxis('left').range, self.pi.getAxis('right').range] factor = (self.align[1][1]-self.align[1][0])/(self.align[0][1]-self.align[0][0]) newRangeLeft = self.pi.getAxis('left').range newRangeRightMin = self.align[1][0]-(self.align[0][0]-newRangeLeft[0])*factor newRangeRightMax = self.align[1][1]+(newRangeLeft[1]-self.align[0][1])*factor self.vb.setYRange(newRangeRightMin, newRangeRightMax, padding=0)
76389456
76391095
I have a dictionary whose keys contain jumps. How can I find, within each group of keys, the key whose value is the minimum? For example, I have myDict = { 0.98:0.001, 1.0:0.002, 1.02: 0.0001, 3.52:0.01, 3.57:0.004, 3.98: 0.005, 4.01: 0.02, 6.87: 0.01, 6.90:0.02, 6.98:0.001, 7.0: 0.02 } My desired output would be 1.02, 3.57, 6.98. The actual dictionary I'm working with has over 1000 items.
Finding keys with the minimum values in each group in a dictionary with jumps (Python)
Here is a solution, supposing the dictionary is sorted in ascending order according to key (explanations in the comments of the code): def main(): d = { 0.98: 0.001, 1.0: 0.002, 1.02: 0.0001, 3.52: 0.01, 3.57: 0.004, 3.98: 0.005, 4.01: 0.02, 6.87: 0.01, 6.90: 0.02, 6.98: 0.001, 7.0: 0.02 } all_groups = [] # list to store the groups minimums = [] # list to store all mins # initializing holders for minimum key and value min_k = 1000 min_v = 0 for k, v in d.items(): # an if-statement just to add the first group with key inside if len(all_groups) == 0: all_groups.append([k]) min_k = k min_v = d.get(k) else: # check if the difference is less or equal to 1 if k - all_groups[-1][-1] <= 1.0: all_groups[-1].append(k) # each time we add a key to a group, we check if it is the minimum if d.get(k) < min_v: min_k = k min_v = d.get(k) else: minimums.append((min_k, min_v)) # we append a new list with the new key inside to `all_groups` # in which we will store the next elements all_groups.append([k]) min_k = k min_v = d.get(k) minimums.append((min_k, min_v)) # adding last minimums because for loop ends without adding them for i in minimums: print(i[0]) # 1.02, 3.57, 6.98 if __name__ == "__main__": main()
76391230
76392528
Consider the following quarto document: --- title: "Untitled" format: pdf --- ```{python} #|echo: false #|result: 'asis' import pandas as pd df = pd.DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'], 'B': ['one', 'one', 'two', 'two', 'one', 'one'], 'C': ['dull', 'dull', 'shiny', 'shiny', 'dull', 'dull'], 'D': [1, 3, 2, 5, 4, 1]}) print(df) ``` How to scale down the output of the python chunk, say, to 50%. Is that possible?
Change the font size of the output of python code chunk
I can suggest two approaches to do this, use whatever suits you! Option 01 Code chunk outputs are wrapped inside the verbatim environment. So to change the font size for a single code chunk, one option could be redefining the verbatim environment to have a smaller font size just before that code chunk and then again redefining the verbatim environment to get the default font size for later code chunk outputs. --- title: "Untitled" format: pdf --- \let\oldvrbtm\verbatim \let\endoldvrbtm\endverbatim <!-- % redefine the verbatim environment --> \renewenvironment{verbatim}{\tiny\oldvrbtm}{\endoldvrbtm} ```{python} #|echo: false #|result: 'asis' import pandas as pd df = pd.DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'], 'B': ['one', 'one', 'two', 'two', 'one', 'one'], 'C': ['dull', 'dull', 'shiny', 'shiny', 'dull', 'dull'], 'D': [1, 3, 2, 5, 4, 1]}) print(df) ``` <!-- % redefine the environment back to normal --> \renewenvironment{verbatim}{\oldvrbtm}{\endoldvrbtm} ```{python} #|echo: false #|result: 'asis' print(df) ``` Option 02 This idea actually is actually taken from this answer on TeXStackExchange. Here A command is defined to control the verbatim font size. So change the font sizes as needed. --- title: "Untitled" format: pdf include-in-header: text: | \makeatletter \newcommand{\verbatimfont}[1]{\renewcommand{\verbatim@font}{\ttfamily#1}} \makeatother --- \verbatimfont{\tiny} ```{python} #|echo: false #|result: 'asis' import pandas as pd df = pd.DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'], 'B': ['one', 'one', 'two', 'two', 'one', 'one'], 'C': ['dull', 'dull', 'shiny', 'shiny', 'dull', 'dull'], 'D': [1, 3, 2, 5, 4, 1]}) print(df) ``` \verbatimfont{\normalsize} ```{python} #|echo: false #|result: 'asis' print(df) ``` Note: The predefined font sizes that you can use in both of the above options are, \Huge, \huge, \LARGE, \Large, \large, \normalsize, \small, \footnotesize, \scriptsize, \tiny
76387093
76392720
Can't update variables inside data in Vue I'm creating a project using Vue 3 as frontend. In Dashboard.vue, a GET request will be sent to backend with token in header, in order to let the backend identify user's identity, then Vue will receive the response with a json including info like this: {'uid': 'xxx', 'username': 'xxx'}. My Dashboard.vue: <script> import axios from 'axios'; export default { data() { return { uid: '', username: '', temp: {}, loaded: false } }, methods: { require_get(url) { var token = localStorage.getItem('token'); var config = { headers: { 'token': token, } }; var _url = 'user/dashboard/' + url; axios.get(_url, config) .then(response => { this.temp = response.data; }) }, get_user_info() { this.require_get('info'); this.uid = this.temp['username']; this.username = this.temp['username']; } }, mounted() { this.get_user_info(); } } </script> In this way, uid and username cannot be updated correctly. For debugging, when I add console.log(this.uid) at the end of get_user_info() like this: //... this.require_get('info'); this.uid = this.temp['username']; this.username = this.temp['username']; console.log(this.temp['uid']); I get a undefined. But when I add console.log(this.uid) at the end of require.get() like this: //... .then(response => { this.temp = response.data; console.log(this.temp['uid]); }) The output shows that variable uid has already been updated at this moment. After testing, I found that I can correctly update uid and username as long as I put this.uid = this.temp['username']; this.username = this.temp['username']; inside require_get(). Why is that? And how can I manage to update these variables with these two codes staying in get_user_info()? Update I changed my codes into" methods: { async require_get(url) { var token = localStorage.getItem('token'); var config = { headers: { 'token': token, } }; var _url = 'user/dashboard/' + url; axios.get(_url, config) .then(response => { return response.data; }) }, async get_user_info() { let info = await this.require_get('info'); console.log(info); this.uid = info['username']; this.username = info['username']; } }, mounted() { this.get_user_info('info'); } and the output of console.log(info) is still undefined, now I don't understand...
Why does Vue.js not update variables inside data
The issue is asynchronous execution. In require_get, you update this.temp in the .then() callback, which is only called when the Promise has resolved. In get_user_info, you are calling require_get and then immediately (synchronously) trying to read the data. Because it is fetched and set asynchronously, it is not yet present and you get undefined. To fix it, make require_get an async function and return the data instead of using this.temp. Then make get_user_info an async function as well, await the call of require_get, and assign this.uid and this.username to the returned data. You could also use .then() if you prefer that to async/await. I have included examples for both. I assume that this.uid = data['username'] was meant to be this.uid = data['uid'], so changed that too. With async/await import axios from 'axios'; export default { data() { return { uid: '', username: '', loaded: false } }, methods: { async require_get(url) { var token = localStorage.getItem('token'); var config = { headers: { 'token': token, } }; var _url = 'user/dashboard/' + url; var response = await axios.get(_url, config); return response.data; }, async get_user_info() { var data = await this.require_get('info'); this.uid = data['uid']; this.username = data['username']; } }, mounted() { this.get_user_info(); } } With .then() import axios from 'axios'; export default { data() { return { uid: '', username: '', loaded: false } }, methods: { require_get(url) { var token = localStorage.getItem('token'); var config = { headers: { 'token': token, } }; var _url = 'user/dashboard/' + url; return axios.get(_url, config).then((response) => response.data); }, get_user_info() { this.require_get('info').then((data) => { this.uid = data['uid']; this.username = data['username']; }); } }, mounted() { this.get_user_info(); } }
76387206
76392909
Is it possible to resolve SASS contained in a workspace library using an approach that is similar to resolving ts files from an application within the same workspace? For context I'll setup a real workspace as follows: ng new theme-workspace --create-application=false cd theme-workspace ng g library theme mkdir projects/theme/src/lib/styles touch projects/theme/src/lib/styles/index.scss ng g application playground Within the directory projects/theme/src/lib/styles we will add the following content to index.scss. $color: red; And in order to include the style assets we need to update ng-package.json with an asset block like this: "assets": [ { "input": "src/lib/styles", "glob": "**/*.scss", "output": "styles" } ] If we build this project library like this: ng build theme We see that dist/theme/styles contains index.scss. We can access the generated ts component ThemeComponent like this from the playground. import { ThemeComponent } from 'theme'; When using @use to import the SASS index.scss module is it possible to use a similar namespace? For example if we try this from the playground styles.scss module it fails: @use `theme/styles` as t; This is the error. SassError: Can't find stylesheet to import. ╷ 2 │ @use 'theme/styles' as t; Now we could resolve by using a relative file import, but I'm curious whether there's a "Shorthand" way of doing it that uses the library name space?
Resolving a library project's SASS files from an application in the same workspace?
Currently this is not supported, but there is a feature request for it.
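Until that lands, a common workaround (shown here as a sketch assuming the default workspace layout from the question) is to add the library's style folder to the SASS include paths of the consuming application in angular.json, so the import does not need a long relative path:

// angular.json, inside the playground application's build options
"stylePreprocessorOptions": {
  "includePaths": [
    "projects/theme/src/lib/styles"
  ]
}

With that in place, projects/playground/src/styles.scss can simply do @use 'index' as t; it is not the theme/styles namespace you were after, but it avoids the fragile ../../ relative imports.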
76389947
76391113
I'm using the plugin avenc_jpeg2000, from the gst-libav module, combined with the videotestsrc and filesink plugins to encode a raw picture into a JPEG2000 picture: gst-launch-1.0 videotestsrc num-buffers=1 ! avenc_jpeg2000 ! filesink location=/tmp/picture-ref.jp2 This pipeline works and produces a 31.85 KiB (32,616) file. Now, I want to halve the size of my output file by increasing the compression ratio of the encoder avenc_jpeg2000. To achieve this, I want to minimize the number of bits required to represent the image with an allowable level of distortion. I know the JPEG2000 standard supports lossless and lossy compression modes. For my use case, lossy compression is acceptable. How should I proceed to increase the compression of my output file? Which encoder properties should I play with to do that? My test configuration: i.MX 8M Plus GStreamer 1.18.0 libav 1.18.0 (Release date: 2020-09-08) I tried to play with the "bitrate" and "bitrate-tolerance" properties, but they seem to have no effect on the size of the output file: gst-launch-1.0 videotestsrc num-buffers=1 ! avenc_jpeg2000 bitrate=100000 bitrate-tolerance=10000 ! filesink location=/tmp/picture-test-01.jp2 I compare files by computing a checksum with the sha224sum command: d0da9118a9c93a0420d6d62f104e0d99fe6e50cda5e87a46cef126f9 /tmp/picture-ref.jp2 d0da9118a9c93a0420d6d62f104e0d99fe6e50cda5e87a46cef126f9 /tmp/picture-test-01.jp2
How to increase the compression ratio of a JPEG2000 file with "avenc_jpeg2000" GStreamer encoder?
For lossy compression, you can increase the quantization value. First, set the encoder's encoding type to "Constant Quantizer" and then find an appropriate quantizer value. In my case, to produce a 15 KiB file, I used the following pipeline: gst-launch-1.0 videotestsrc num-buffers=1 ! avenc_jpeg2000 pass=2 quantizer=10 ! filesink location=/tmp/picture-test-02.jp2
76392182
76392541
I am creating a Composable which is responsible for showing notifications to users. Every time the user goes to that Composable, I want to execute a query which will clear the notification count. I only want to execute that query when the Composable has appeared, not every time the Composable is recomposed due to configuration change and anything. Essentially I am looking for an equivalent of https://developer.apple.com/documentation/swiftui/view/onappear(perform:). Is there any method I can use in Jetpack Compose?
How to execute code every time a Composable is shown (and execute only once)
Use LaunchedEffect: LaunchedEffect(Unit) { // Actions to perform when LaunchedEffect enters the Composition } It takes one or more key parameters that are used to cancel the running effect and start a new one. Since you need to execute your code only once, use something immutable as the key, such as Unit or true.
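Applied to the notification screen from the question, a minimal sketch could look like the following (NotificationsViewModel and clearNotificationCount() are hypothetical names standing in for your own query code):

import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect

@Composable
fun NotificationsScreen(viewModel: NotificationsViewModel) {
    LaunchedEffect(Unit) {
        // Runs when this composable first enters the composition;
        // because the key is Unit, it is not restarted by recomposition.
        viewModel.clearNotificationCount()
    }
    // ... rest of the notifications UI
}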
76387490
76393307
I've tried to create a simple web page that uses the Bootstrap (version 5.3.0) Collapsible and I cannot get it to work no matter what I try. All I want is a Collapsible that 'holds' a few links in it which are shown when you click it. But when that didn't work I simplified it to just a list of strings (code shown below) but the problem is still there. The problem is that the Collapsible button does not show no matter what I try. The <span class="navbar-toggler-icon"></span> is not visible. I've added 'X' to both sides of it just so there is an indication where that invisible button is. Anyone knows how to fix that, make the Collapsible button visible? Thanks. Here's the code: <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css"> <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"></script> <header class="bg-primary text-white text-center py-3"> <div class="container"> <div class="row"> <div class="col-md-4"> <div class="left-header"> <h3>Left Header</h3> </div> </div> <div class="col-md-4"> <div class="central-header"> <h3>Central Header</h3> </div> </div> <div class="col-md-4"> <div class="right-header"> <button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#collapsibleNav"> X<span class="navbar-toggler-icon"></span>X </button> </div> <div class="collapse" id="collapsibleNav"> <div class="card card-body"> <ul class="nav"> <li class="nav-item">Item 1</li> <li class="nav-item">Item 2</li> </ul> </div> </div> </div> </div> </div> </header> <main class="container mt-4"> <h2>Main Content</h2> <p>This is the main content of the page.</p> </main>
What needs to be done to make the Bootstrap 5.3.0 Collapsible button show in this simple web page?
Try adding .navbar selector to the parent container: right-header navbar, because it's actually coming from that class (--bs-navbar-toggler-icon-bg), not the icon, the hamburger is actually a variable defined in .navbar, so by adding it to the parent it becomes accessible: .navbar-toggler-icon { background-image: var(--bs-navbar-toggler-icon-bg); } .navbar { --bs-navbar-toggler-icon-bg: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%2833, 37, 41, 0.75%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e"); } <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css"> <script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js"></script> <header class="bg-primary text-white text-center py-3"> <div class="container"> <div class="row"> <div class="col-md-4"> <div class="left-header"> <h3>Left Header</h3> </div> </div> <div class="col-md-4"> <div class="central-header"> <h3>Central Header</h3> </div> </div> <div class="col-md-4"> <div class="right-header navbar"> <button class="navbar-toggler mx-auto" type="button" data-bs-toggle="collapse" data-bs-target="#collapsibleNav"> X<span class="navbar-toggler-icon"></span>X </button> </div> <div class="collapse" id="collapsibleNav"> <div class="card"> <ul class="nav card-body"> <li class="nav-item">Item 1</li> <li class="nav-item">Item 2</li> </ul> </div> </div> </div> </div> </div> </header> <main class="container mt-4"> <h2>Main Content</h2> <p>This is the main content of the page.</p> </main>
76389994
76391123
I have a Blazor Server app (.NET 6) with Windows authentication on a Windows Server 2019 machine with IIS. The username and password dialog only appears when I open the web page in the Google Chrome browser directly on the server where the Blazor app is running. When I open the page from another PC in the same domain (Chrome, Firefox, ...), the page opens directly without asking for a username and password. What could be the reason for that?
I have a Blazor Server app with Windows authentication, but the username and password dialog only appears when the page is opened on the server itself
The reason you are not prompted for a username and password when accessing the Blazor server app from another PC in the same domain is likely due to the integrated Windows authentication and the Single Sign-On (SSO) capabilities of the browser and the server. Here's a brief explanation of how this works: Integrated Windows Authentication: When using integrated Windows authentication, the browser automatically sends the user's Windows credentials (username and password) to the server without prompting the user for them. This is possible because the browser and the server negotiate the authentication process using various protocols like Kerberos or NTLM. Single Sign-On (SSO): SSO allows users to authenticate once and then access multiple resources without being prompted for credentials again. In your case, since the client PC and the server are in the same domain, and integrated Windows authentication is enabled, the user's credentials from the client PC's Windows session are automatically passed to the server without requiring a separate prompt. To ensure that the username and password prompt appears consistently, you can try the following: Check Browser Settings: Make sure that the browser settings on the client PC are configured to send Windows credentials. In Chrome, go to Settings > Privacy and Security > Site Settings > Additional permissions > Manage permissions. Ensure that the server's URL is listed and set to "Allow" or "Automatic." Verify IIS Settings: Ensure that the IIS configuration on the server is set up correctly for integrated Windows authentication. Open IIS Manager, select the Blazor server app, and go to the Authentication settings. Make sure that only Windows Authentication is enabled, and other authentication methods like Anonymous Authentication are disabled. Cross-Domain Considerations: If the Blazor server app is hosted on a different domain, you may encounter additional challenges with SSO due to browser security restrictions. In such cases, you may need to configure your server and browser settings to enable cross-domain SSO. Clear Browser Cache: Clear the cache and cookies in the browser on the client PC to ensure that any cached credentials or settings are not causing unexpected behavior. By checking these settings and ensuring that integrated Windows authentication is correctly configured on the server and the client PC, you should be able to consistently prompt for a username and password when accessing the Blazor server app.
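If you want to double-check the server-side part mentioned above, a hedged sketch of the relevant web.config section for a published Blazor Server app on IIS looks like this (these sections may be locked at the server level by default, in which case the same switches are set in IIS Manager under Authentication instead):

<system.webServer>
  <security>
    <authentication>
      <!-- Prompt for / negotiate Windows credentials -->
      <windowsAuthentication enabled="true" />
      <!-- Disable anonymous access so unauthenticated requests are challenged -->
      <anonymousAuthentication enabled="false" />
    </authentication>
  </security>
</system.webServer>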
76392367
76392557
When using the package googlesheets4 is there any method for writing data to a sheet skipping the first row so that the data being written to a sheet starts at row 2? I am hoping to leverage something similar to when you read a sheet and utilize ex. skip = 2 to read data starting at the 3rd row I have tried the following which does not work write_sheet(data = df, ss = "google_sheet_url", skip = 1, sheet = "test")
Write to Google sheet skipping 1st row
skip = n only applies when reading. In the googlesheets4 library, to start writing at a specified cell, you'd use range_write(). This is similar to startRow = n in the xlsx library. To have the data start at row 2, anchor the write at cell A2: range_write(ss = "google_sheet_url", data = df, range = "A2", sheet = "test")
76382024
76393367
I am trying to use a "freely" created string as a key for an object. interface CatState { name: string, selected: boolean, color: string, height: number } interface DogState{ name: string, selected: boolean, race: string, age: number, eyeColor: string } export interface Animals { animals: { cat: { cats: CatState[], allSelected: boolean, }, dog: { dogs: DogState[], allSelected: boolean, }, } }; const selectAnimal = (allAnimals: Animals, animal: keyof Animals['animals'], index:number) => { const animalPlural = `${animal}s` as keyof Animals['animals'][typeof animal] allAnimals.animals[animal][animalPlural][index].selected= true } This highlights .selected with the message Property 'selected' does not exist on type 'boolean'. Here is a Playground. Is there a workaround for this, or is this simply not possible?
TypeScript is unable to infer type properly from a typed string variable
In order for this to work you need to make selectAnimal generic. You might think it should be able to deal with an animal input of a union type, but the compiler isn't able to properly type check a single block of code that uses multiple expressions that depend on the same union type. It loses track of the correlation between `${animal}s` and allAnimals.animals[animal]. The formers is of type "cats" | "dogs" and the latter is of a type like {cats: CatState[]} | {dogs: DogState[]}, and you can't generally index into the latter with the former, because "what if you've got {cats: CatState[]} and you're indexing with "dogs"?" That can't happen, but the compiler is unable to see it. TypeScript can't directly deal with correlated unions this way. That's the subject of microsoft/TypeScript#30581. If you want a single code block to work for multiple cases, the types need to be refactored to use generics instead, as described in microsoft/TypeScript#47109. Here's how it might look for your example: interface AnimalStateMap { cat: CatState, dog: DogState } type AnimalData<K extends keyof AnimalStateMap> = { [P in `${K}s`]: AnimalStateMap[K][] & { allSelected: boolean } } export interface Animals { animals: { [K in keyof AnimalStateMap]: AnimalData<K> }; }; const selectAnimal = <K extends keyof AnimalStateMap>( allAnimals: Animals, animal: K, index: number) => { const animalPlural = `${animal}s` as const; // const animalPlural: `${K}s` const animalData: AnimalData<K> = allAnimals.animals[animal] animalData[animalPlural][index].selected = true; } The AnimalStateMap is a basic key-value type representing the underlying relationship in your data structure. Then AnimalData<K> is a mapped type that encodes as a template literal type the concatenation of s onto the type of the keys (giving such plurals as gooses and fishs 🤷‍♂️) and that the value type is of the expected animal array. And that there's an allSelected property. Then your Animals type explicitly written as a mapped type over keyof AnimalStateMap, which will help the compiler see the correlation when we index into it. Finally, selectAnimal is generic in K extends keyof AnimalStateMap and the body type checks because animalPlural is of just the right generic type `${K}s` which is known to be a key of animalData, which is AnimalData<K>. Playground link to code
76390577
76391128
I'm deploying a new kubernetes cluster on a single node (Ubuntu 22.04) The problem is I frequently get this error when running any kubectl commands (hostnames changed) The connection to the server k8cluster.example.com:6443 was refused - did you specify the right host or port? After I installed kubernetes (via apt install -y kubelet kubeadm kubectl) everything was stable, but obviously the node was not in a ready state. The problems started as soon as i deployed the Flannel container network, which I did as follows: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml Pods in the kube-system name space are frequently restarting root@k8cluster:~/.ssh# kubectl get all -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-flannel pod/kube-flannel-ds-6h6zq 1/1 Running 25 (46s ago) 98m kube-system pod/coredns-5d78c9869d-gmdpv 0/1 CrashLoopBackOff 18 (4m40s ago) 130m kube-system pod/coredns-5d78c9869d-zhvxk 1/1 Running 19 (14m ago) 130m kube-system pod/etcd-k8cluster.example.com 1/1 Running 31 (7m21s ago) 130m kube-system pod/kube-apiserver-k8cluster.example.com 1/1 Running 37 (5m40s ago) 131m kube-system pod/kube-controller-manager-k8cluster.example.com 0/1 Running 46 (5m10s ago) 130m kube-system pod/kube-proxy-nvnkf 0/1 CrashLoopBackOff 41 (100s ago) 130m kube-system pod/kube-scheduler-k8cluster.example.com 0/1 CrashLoopBackOff 44 (4m43s ago) 129m NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 132m kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 132m NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE kube-flannel daemonset.apps/kube-flannel-ds 1 1 1 1 1 <none> 98m kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 132m NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE kube-system deployment.apps/coredns 2/2 2 2 132m NAMESPACE NAME DESIRED CURRENT READY AGE kube-system replicaset.apps/coredns-5d78c9869d 2 2 2 130m I'm seeing these errors when running journalctl -u kubelet Jun 02 13:16:21 k8cluster.example.com kubelet[19340]: I0602 13:16:21.848785 19340 scope.go:115] "RemoveContainer" containerID="4da5cc966a4dcf61001cbdbad36c47917fdfeb05bd7c4c985b2f362efa92f464" Jun 02 13:16:21 k8cluster.example.com kubelet[19340]: I0602 13:16:21.849006 19340 status_manager.go:809] "Failed to get status for pod" podUID=aae126ec9b57a8789f7682f92e81bd7a pod="kube-system/etcd-k8cluster.example.com" err="Get \"https://k8cluster.example.com:6443/api/v1/namespaces/kube-system/pods/etcd-k8cluster.example.com\": dial tcp 172.31.37.108:6443: connect: connection refused" Jun 02 13:16:21 k8cluster.example.com kubelet[19340]: E0602 13:16:21.849262 19340 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-spotcluster.infdev.org_kube-system(ccdffaba21456689fa71a8f7b182fb0c)\"" pod="kube-system/kube-apiserver-k8cluster.example.com" podUID=ccdffaba21456689fa71a8f7b182fb0c Jun 02 13:16:21 k8cluster.example.com kubelet[19340]: I0602 13:16:21.849317 19340 status_manager.go:809] "Failed to get status for pod" podUID=ccdffaba21456689fa71a8f7b182fb0c pod="kube-system/kube-apiserver-k8cluster.example.com" err="Get \"https://k8cluster.example.com:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-k8cluster.example.com\": dial tcp 172.31.37.108:6443: connect: connection refused" Jun 02 
13:16:21 k8cluster.example.com kubelet[19340]: I0602 13:16:21.866932 19340 scope.go:115] "RemoveContainer" containerID="46f9e127efbd2506f390486c2590232e76b0617561c7c440d94c470a4164448f" Jun 02 13:16:21 k8cluster.example.com kubelet[19340]: E0602 13:16:21.867259 19340 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=coredns pod=coredns-5d78c9869d-gmdpv_kube-system(ddf0658a-260b-41d1-a0a0-595de4991ec6)\"" pod="kube-system/coredns-5d78c9869d-gmdpv" podUID=ddf0658a-260b-41d1-a0a0-595de4991ec6 Jun 02 13:16:22 k8cluster.example.com kubelet[19340]: I0602 13:16:22.850577 19340 scope.go:115] "RemoveContainer" containerID="4da5cc966a4dcf61001cbdbad36c47917fdfeb05bd7c4c985b2f362efa92f464" Also dmesg is showing these messages: [Fri Jun 2 13:02:11 2023] IPv6: ADDRCONF(NETDEV_CHANGE): veth11eea1b5: link becomes ready [Fri Jun 2 13:02:11 2023] cni0: port 1(veth11eea1b5) entered blocking state [Fri Jun 2 13:02:11 2023] cni0: port 1(veth11eea1b5) entered forwarding state [Fri Jun 2 13:11:54 2023] cni0: port 2(veth92694dfb) entered disabled state [Fri Jun 2 13:11:54 2023] device veth92694dfb left promiscuous mode [Fri Jun 2 13:11:54 2023] cni0: port 2(veth92694dfb) entered disabled state [Fri Jun 2 13:11:55 2023] cni0: port 2(veth29e5e0d3) entered blocking state [Fri Jun 2 13:11:55 2023] cni0: port 2(veth29e5e0d3) entered disabled state [Fri Jun 2 13:11:55 2023] device veth29e5e0d3 entered promiscuous mode [Fri Jun 2 13:11:55 2023] cni0: port 2(veth29e5e0d3) entered blocking state [Fri Jun 2 13:11:55 2023] cni0: port 2(veth29e5e0d3) entered forwarding state [Fri Jun 2 13:11:55 2023] IPv6: ADDRCONF(NETDEV_CHANGE): veth29e5e0d3: link becomes ready [Fri Jun 2 13:13:19 2023] cni0: port 1(veth11eea1b5) entered disabled state [Fri Jun 2 13:13:19 2023] device veth11eea1b5 left promiscuous mode [Fri Jun 2 13:13:19 2023] cni0: port 1(veth11eea1b5) entered disabled state [Fri Jun 2 13:13:20 2023] cni0: port 1(veth1f7fb9e0) entered blocking state [Fri Jun 2 13:13:20 2023] cni0: port 1(veth1f7fb9e0) entered disabled state [Fri Jun 2 13:13:20 2023] device veth1f7fb9e0 entered promiscuous mode [Fri Jun 2 13:13:20 2023] cni0: port 1(veth1f7fb9e0) entered blocking state [Fri Jun 2 13:13:20 2023] cni0: port 1(veth1f7fb9e0) entered forwarding state [Fri Jun 2 13:13:20 2023] IPv6: ADDRCONF(NETDEV_CHANGE): veth1f7fb9e0: link becomes ready If I look at the logs for the kube-apiserver pod I see this repeating itself. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused" W0602 13:21:03.884015 1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to { "Addr": "127.0.0.1:2379", "ServerName": "127.0.0.1", "Attributes": null, "BalancerAttributes": null, "Type": 0, "Metadata": null Any ideas?
Kubernetes: Frequently unable to communicate with kubelet API (connection refused)
It seems I was having the same problem as mentioned in this question: Unable to bring up kubernetes API server. The solution there worked for me: containerd config default | tee /etc/containerd/config.toml sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml service containerd restart service kubelet restart
76392163
76392619
I'm trying to make a stamina meter for my game that depletes as you sprint and once it hits zero you have to wait for it to charge up again before sprinting, I don't know what I should do to fix the code and don't know where I went wrong, thanks. float moveSpeed = 5f; float sprintSpeed = 8f; float maxStamina = 5f; float currentStamina = 0f; private void Movement() { float movementX = Input.GetAxis("Horizontal"); float movementZ = Input.GetAxis("Vertical");` Vector3 moveDirection = new Vector3(movementX, 0f, movementZ); moveDirection = camera.forward * movementZ + camera.right * movementX; if (Input.GetKey(KeyCode.LeftShift) && currentStamina != maxStamina) { transform.position += (moveDirection).normalized * sprintSpeed * Time.deltaTime; currentStamina += Time.deltaTime; if (currentStamina >= maxStamina) { transform.position += (moveDirection).normalized * moveSpeed * Time.deltaTime; new WaitForSeconds(5f); currentStamina = 0f; } } transform.position += (moveDirection).normalized * moveSpeed * Time.deltaTime; } whole script just in case: private Transform transform; private Rigidbody rb; [SerializeField] private Transform camera; float moveSpeed = 5f; float sprintSpeed = 8f; float maxStamina = 5f; float currentStamina = 0f; float increaseStaminaPerSecond = 1f; void Start() { transform = GetComponent<Transform>(); rb = GetComponent<Rigidbody>(); rb.freezeRotation = true; currentStamina = maxStamina; } void Update() { Movement(); } private void Movement() { float movementX = Input.GetAxis("Horizontal"); float movementZ = Input.GetAxis("Vertical"); Vector3 moveDirection = new Vector3(movementX, 0f, movementZ); moveDirection = camera.forward * movementZ + camera.right * movementX; if (Input.GetKey(KeyCode.LeftShift) && currentStamina != maxStamina) { transform.position += (moveDirection).normalized * sprintSpeed * Time.deltaTime; currentStamina += Time.deltaTime; if (currentStamina >= maxStamina) { transform.position += (moveDirection).normalized * moveSpeed * Time.deltaTime; new WaitForSeconds(5f); currentStamina = 0f; } } transform.position += (moveDirection).normalized * moveSpeed * Time.deltaTime; }
How can I create a stamina meter that depletes as I sprint and recharges after waiting in Unity using C#?
Encapsulate and simplify your logic so its easier to read and to maintain. You need to introduce a few variables that will help you keep things clean and more readable. Additionally, when unsure, write things as if you would talk in a conversation so that they make sense. For example, you wouldn't be able to sprint if you had no stamina, and not if your stamina wasn't "full", so write that in code if (currentStamina > 0) canSprint, etc. private Transform transform; private Rigidbody rb; [SerializeField] private Transform camera; float moveSpeed = 5f; float normalSpeed = 5f; // add this float sprintSpeed = 8f; float maxStamina = 5f; float currentStamina = 0f; float increaseStaminaPerSecond = 1f; bool isSprinting = false; // add this to check if player is sprinting bool canSprint = true; // add this to check if player can sprint void Start() { transform = GetComponent<Transform>(); rb = GetComponent<Rigidbody>(); rb.freezeRotation = true; currentStamina = maxStamina; } void Update() { UpdateStamina(); Sprint(); Movement(); } // simplify logic to do one thing private void Movement() { Vector3 movementInput = GetMovementInput(); Vector3 moveDirection = camera.forward * movementInput.z + camera.right * movementInput.y; transform.position += moveDirection.normalized * moveSpeed * Time.deltaTime; // apply movement only once } // separate sprinting logic private void Sprint() { if (Input.GetKey(KeyCode.LeftShift) && canSprint && currentStamina > 0) { moveSpeed = sprintSpeed; isSprinting = true; } else { moveSpeed = normalSpeed; isSprinting = false; } } private void UpdateStamina() { float staminaToAdd = isSprinting ? -Time.deltaTime : Time.deltaTime; // if sprinting, decrease stamina by Time.deltaTime, otherwise increase currentStamina = Mathf.Clamp(currentStamina + staminaToAdd, 0, maxStamina); // prevent from going below 0 and over maxStamina if (currentStamina <= 0 && canSprint) // if no more stamina but could sprint up to this point { StartCoroutine(SprintCooldownRoutine()); // start cooldown } } private Vector3 GetMovementInput() { float movementX = Input.GetAxis("Horizontal"); float movementZ = Input.GetAxis("Vertical"); return new Vector3(movementX, 0f, movementZ); } private IEnumerator SprintCooldownRoutine() { canSprint = false; // disable sprinting while (currentStamina < maxStamina) { currentStamina += Time.deltaTime; // this will increase stamina twice as fast, because we're already increasing it inside UpdateStamina. Remove this line if you wish to "just wait for stamina to refresh" yield return null; } canSprint = true; }
76381814
76393564
How to share a variable from one test (it) to another when the domains are different? I've tried in countless ways, with Alias, Closure, Environment Variable, Local Storage, even with Event Listener, but when the next test is executed, these variables are cleared from memory. The point is that I need to obtain the ID of an open protocol in a Web application, go to the backoffice that is in another domain to validate if that protocol was really opened. Here is the last version after giving up... /// <reference types="cypress" /> describe("Testar abertura de protocolo no fale conosco", () => { it("Deve acessar o FaleConosco, abrir um protocolo e depois validar no backoffice a abertura correta do mesmo", () => { cy.visit(`${Cypress.env('FALE_CONOSCO_URL')}`) cy.get("#BotaoCriarNovoChamado").click() cy.get('#InputLabelCpfCnpj').type("99999999999") cy.get('#InputLabelEmail').type("[email protected]") cy.get('#InputLabelTelefone').type("99999999999") cy.get('#InputLabelAssunto').type("Assunto de teste") cy.get('#InputLabelDescricao').type("Essa aqui e uma descrição bem detalhada, confia") cy.get('#BotaoEnviar').click() cy.get('#spanNumeroDoChamado').should('contain', 'Número do chamado') cy.get('#divNumeroDoChamado').then($div => { const numero_do_chamado = $div.text().split(' ')[3].replace(/^#/, ""); // cy.wrap(numero_do_chamado).as("minhaVariavel"); // Enviar o valor do alias para o segundo domínio usando postMessage cy.window().then((win) => { win.postMessage({ type: "aliasValue", value: numero_do_chamado }, "*"); }); // Cypress.env('numero_do_chamado', numero_do_chamado); // cy.log("numero_do_chamado - " + Cypress.env('numero_do_chamado')); // cy.window().then(win => { // win.localStorage.setItem('numero_do_chamado', numero_do_chamado); // }); }); // cy.get('#divNumeroDoChamado').invoke("text").as("minhaVariavel") // // ($div => { // // const numero_do_chamado = $div.text().split(' ')[3].replace(/^#/, ""); // // cy.wrap(numero_do_chamado).as("minhaVariavel"); // // Enviar o valor do alias para o segundo domínio usando postMessage // cy.window().then((win) => { // win.postMessage({ type: "aliasValue", value: cy.get("@minhaVariavel") }, "*"); // }); // // // Cypress.env('numero_do_chamado', numero_do_chamado); // // // cy.log("numero_do_chamado - " + Cypress.env('numero_do_chamado')); // // // cy.window().then(win => { // // // win.localStorage.setItem('numero_do_chamado', numero_do_chamado); // // // }); // // }); }); it("Deve acessar o Conecta e validar a abertura correta protocolo", () => { cy.visit(`${Cypress.env('URL')}`); // Receber a mensagem contendo o valor do alias enviado pelo primeiro domínio cy.window().then((win) => { win.addEventListener("message", (event) => { const message = event.data; // Verificar se a mensagem contém o valor do alias if (message.type === "aliasValue") { const aliasValue = message.value; cy.wrap(aliasValue).as("meuAliasCompartilhado"); } }); }); // Fazer algo com o alias compartilhado no segundo domínio cy.get("@meuAliasCompartilhado").then((valor) => { // Faça algo com o valor do alias compartilhado cy.log("Valor do alias compartilhado:", valor); cy.login(); cy.visit(`${Cypress.env('URL')}/ticket-container/${Cypress.env('valor')}`) }); }); })
How to share a variable in Cypress from one test (it) to another when the domains are different?
When the test runner changes domains the whole browser object is reset, so any variables written to browser memory are lost: closure variables, aliases, env vars (Cypress.env). That leaves you with a fixture (disk storage) or a task-based pseudo data store (see bahmutov/cypress-data-session). For the fixture approach, the code would be it("Deve acessar o FaleConosco...", () => { cy.visit(`${Cypress.env('FALE_CONOSCO_URL')}`) ... const numero_do_chamado = $div.text().split(' ')[3].replace(/^#/, "") cy.writeFile('cypress/fixtures/numero_do_chamado.json', numero_do_chamado) ... }) it("Deve acessar o Conecta...", () => { cy.visit(`${Cypress.env('URL')}`) ... cy.readFile('cypress/fixtures/numero_do_chamado.json').then((numero_do_chamado) => { ... }) }) Note that cy.readFile() yields the file contents asynchronously, so consume the value inside .then() rather than assigning the command to a const. Don't use the cy.fixture() command, as there is caching involved internally. Not a problem for your current scenario, but it may cause unexpected errors when your test pattern changes.
76389708
76391170
From the below data- col5 is holding the no of fruits to be distributed among plates from col1 to col4(4plates). Each time find the min from the plates(col1 to col4) add 1 fruit and reduce the fruit from col5 and repeat this process till col5(fruits becomes zero). Below is some sample code to find the min and add 1 fruit there. but how to do this recursively in spark - Scala. Expected output: plate1 plate2 plate3 plate4 fruits 1 2 3 4 3 6 7 8 1 2 2 4 6 8 5 Iteration 1: 2 2 3 4 2 6 7 8 2 1 3 4 6 8 4 Iteration 2: 3 2 3 4 1 (If 2 plates has the same min value , left precedence) 6 7 8 3 0 4 4 6 8 3 Iteration 3: 3 3 3 4 0 6 7 8 3 0 5 4 6 8 2 Iteration 4: 3 3 3 4 0 6 7 8 3 0 5 5 6 8 1 Iteration 5: 3 3 3 4 0 6 7 8 3 0 6 5 6 8 0 code for first iteration: val data = Seq( (1.0, 2.0, 3.0, 4.0, 3.0), (6.0, 7.0, 8.0, 1.0, 2.0), (2.0, 4.0, 6.0, 8.0, 5.0) ) val columns = List("col1", "col2", "col3", "col4", "col5") val df = spark.createDataFrame(data).toDF(columns: _*) val updatedColumns = columns.map { colName => functions.when(col(colName) === functions.least(columns.map(col): _*), col(colName) + 1).otherwise(col(colName)).alias(colName) } val updatedDF = df.select(updatedColumns: _*) updatedDF.show()
spark DF multiple Iterations on Rows
Since each row is independent, then I think it is easier to iterate within the mapping function of each row until you get the final result, so that you don't have to iterate over the whole df multiple times. import spark.implicits._ val data = Seq( (1, 2, 3, 4, 3), (6, 7, 8, 1, 2), (2, 4, 6, 8, 5) ) val columns = List("col1", "col2", "col3", "col4", "col5") val arrayDf = spark.sparkContext.parallelize(data).map(row => { var plates: List[Int] = row.productIterator.toList.map {case i: Int => i} var store: Int = plates.last // Assuming last column is the store plates = plates.dropRight(1) // remove the store column (0 until store).toList.foreach(i => { val min = plates.min val index = plates.indexOf(min) store -=1 plates = plates.updated(index, min + 1) }) plates :+ store // Add the store at the end (which is always 0) }).toDF(Seq("result"): _*) // Convert the list of result into columns again val df = arrayDf.select(columns.zipWithIndex.map{case (name, idx) => arrayDf("result")(idx).alias(name)}: _*) df.show() +----+----+----+----+----+ |col1|col2|col3|col4|col5| +----+----+----+----+----+ | 3| 3| 3| 4| 0| | 6| 7| 8| 3| 0| | 6| 5| 6| 8| 0| +----+----+----+----+----+
76391470
76392625
I am running this script Invoke-Expression $expression -ErrorAction SilentlyContinue The variable $expression sometimes may not have a value. When it's empty, I get the error Cannot bind argument to parameter 'Command' because it is null. How can I avoid seeing the error? I want to execute Invoke-Expression regardless of $expression being empty, so an if statement checking that $expression has a value wouldn't work.
Powershell: -ErrorAction SilentlyContinue not working with Invoke-Expression
First, the obligatory warning: Invoke-Expression (iex) should generally be avoided and used only as a last resort, due to its inherent security risks. Superior alternatives are usually available. If there truly is no alternative, only ever use it on input you either provided yourself or fully trust - see this answer. To add to Mathias' helpful answer and zett42's simpler alternative mentioned in a comment on the question (Invoke-Expression "$expression "): These solutions silence only the case where $null or an empty string is passed to Invoke-Expression - which may well be your intent. To also cover the case where all error output should be silenced - whether due to invalid input or due to valid input causing errors during execution - the following variation is needed: # Silences *all* errors. try { Invoke-Expression $expression 2>$null } catch { } # Alternative: # Silences *all* errors and additionally *ignores* all *non-terminating* errors, # i.e. not only silences them, but also prevents their recording in $Error. # By executing inside & { ... }, the effect of setting $ErrorActionPreference is # transitory due to executing in a *child scope*. # Note that this also means that $expression is evaluated in the child scope. & { $ErrorActionPreference = 'Ignore'; Invoke-Expression $expression } Note: First command: The common -ErrorAction parameter fundamentally only acts on non-terminating errors, whereas terminating ones (both statement- and script-terminating ones) must be handled with a try / catch / finally statement. Passing $null or the empty string to Invoke-Expression causes an error during parameter binding (that is the, cmdlet itself is never invoked, because invalid arguments were passed), which in effect is a statement-terminating error - hence the need for try / catch. The try / catch with the empty catch block additionally prevents script-terminating errors that result from the Invoke-Expression call from terminating your script too (e.g, if $expression contained something like 'throw "Fatal Error"' Note that -ErrorAction SilentlyContinue was replaced with 2>$null in order to silence non-terminating errors (e.g., the error resulting from Get-ChildItem NoSuchDir), because - inexplicably - -ErrorAction is not effective with Invoke-Expression (even though the -ErrorVariable common parameter does work, for instance). See GitHub issue #19734. Second command: Setting the $ErrorActionPreference preference variable to Ignore causes all errors to be silenced, and additionally - for non-terminating errors only - prevents their recording in the automatic $Error variable. If -ErrorAction worked in this case, -ErrorAction would have the same effect, but would act solely on non-terminating errors, as noted. (This asymmetry between what should be equivalent mechanisms - preference variable vs. per-call common parameter, is one of the pitfalls of PowerShell's error handling - see GitHub issue #14819). Unfortunately, (caught) terminating errors are invariably recorded in $Error as of PowerShell 7.3.4. From what I can tell, changing this in a future version has been green-lit a while ago, but is yet to be implemented: see GitHub issue #3768 Using &, the call operator with a script block { ... } executes the enclosed statements in a child scope. This causes $ErrorActionPreference = 'Ignore' to create a local copy of the preference variable, which automatically goes out of scope when the script block is exited, thereby implicitly restoring the previous value of $ErrorActionPreference. 
However, this also means that the code executed by Invoke-Expression executes in that child scope. If that is undesired, forgo the & { ... } enclosure and save and restore the previous $ErrorActionPreference value.
76390816
76392656
Is there a possibility to find an element by ID with a wildcard? I have something like this: stat = GC.FindElementByXPath("//*[@id='C29_W88_V90_V94_admin_status']").Value stat = GC.FindElementByXPath("//*[@id='C26_W88_V90_V94_admin_status']").Value stat = GC.FindElementByXPath("//*[@id='C29_W88_V12_V94_admin_status']").Value The admin_status part won't change, but the values before it do. Sometimes I have the same problem with another element with values around it. So the best thing would be to find the element with some wildcard like this: stat = GC.FindElementByXPath("//*[@id='*admin_status*']").Value
VBA Selenium Dynamic ID (Wildcard)
This code is tested in Excel VBA using Selenium. Option Explicit Sub sbXPathContains() Dim driver As ChromeDriver Set driver = New ChromeDriver Dim sURL As String sURL = "https://davetallett26.github.io/table.html" Call driver.Start("edge") driver.get (sURL) driver.Window.Maximize sbDelay (100000) MsgBox driver.FindElementByXPath("//*[contains(@id, 'ctl03_txtCash')]").Attribute("outerHTML") ' //* = any element contains @id = within id 'ctl03_txtCash' = string to find .Attribute("outerHTML") = return HTML of the element sbDelay (100000) driver.Quit End Sub Sub sbDelay(delay As Long): Dim i As Long: For i = 1 To delay: DoEvents: Next i: End Sub
76390331
76391176
When I visit the Firebase Firestore Index page, it shows an issue as "Oops, indexes failed to load!". I inspect the error and it shows 429. When I view indexes through GCP, it shows as Failed to load indexes: Request throttled at the client by AdaptiveThrottler. We're in the Firebase Blaze plan and I do not see any quota limit reached. This particular account has 5 projects and all of the projects indicate this same issue. What would be the issue?
Firebase Firestore Indexes issue: Oops, indexes failed to load
firebase here That looks off indeed. I can't reproduce myself, but I asked around and will post an update here when I hear back. 7:57 AM PT: Engineering has acknowledged the problem, and is identifying potential causes. 8:12 AM PT: The problem may be isolated to databases in Europe. 9:23 AM PT: This only affects index creation and listing operations. It does not affect the ability read data from or write to the database. Sorry for not mentioning that earlier, as it was the first thing we determined when the problem was reported. 9:29 AM PT: We've identified the root cause, which was a resource exhaustion issue in region eur3, and have increased resources in the region to mitigate. 9:49 AM PT: This issue has been mitigated now. If you are still seeing this problem, please reach out to Firebase support for personalized help in troubleshooting. The engineering team is planning work to prevent this problem from reoccurring in the future.
76388475
76393618
According Ada2012 RM Assertion_Policy: 10.2/3 A pragma Assertion_Policy applies to the named assertion aspects in a specific region, and applies to all assertion expressions specified in that region. A pragma Assertion_Policy given in a declarative_part or immediately within a package_specification applies from the place of the pragma to the end of the innermost enclosing declarative region. The region for a pragma Assertion_Policy given as a configuration pragma is the declarative region for the entire compilation unit (or units) to which it applies. This means that if I have a package hierarchy as per the following example: └───Root ├───Child1 ├───Child2 │ └───GrandSon └───Child3 And if I define the pragma Assertion_Policy at Root package specification, it will affect to the whole package hierarchy right?
Ada2012: Assertion_Policy
And if I define the pragma Assertion_Policy at Root package specification, it will affect to the whole package hierarchy right? No. What your bolded text means is that (a) the pragma is placed immediately in a specification, like so: Pragme Ada_2012; -- Placed "immediately". Pragma Assertion_Policy(Check); -- Also "immediately". Package Some_Spec is --... and so on. or (b) in a declarative part: Procedure Outer_Scope is Pragma Assertion_Polucy( Ignore ); -- Declarative region. X : Some_Type:= Possibly_Assertion_Failing_Operation; Package Inner_Scope is -- the stuff in here would ALSO ignore assertions. End Inner_Scope; Package Body Inner_Scope is Separate; Begin if X'Valid then Null; -- Do things on the valid value of X. end if; -- Because we're ignoring the invalid case. End Outer_Scope; So, they apply not to children, but to the spec/body/declarative-region itself.
76391482
76392685
I want to plot some functions with their gradients using the ggplot2 package in r. p = 3 n0 = 100 z0 = seq(0.01, 0.99, length = n0) AB0 = matrix(rbeta(600,4,1), nrow = n0) library(ggplot2) ab.names=c(paste("g",1:p,sep=""),paste("g' ",1:p,sep="")) pl0=ggplot(data.frame(ab = c(AB0), ID = rep(ab.names, each = n0), Z = z0), aes(x = Z, y = ab)) + geom_point() + facet_wrap(~ID, scales = "free",nrow = 2, ncol = p) + theme_bw() + ggtitle("Unpenalized VCM",expression(nabla~"g and "~"g")) + ylab("") I want to switch the rows and add the nabla symbol in the title of the plots in the 1st row before switching them to the second row. To be clear, the 1st row is for the functions, and the 2nd row is for the gradients where the nabla symbol should appear in the title. Here is a screenshot of the outcome
in ggplot2, print an expression in the facet_wrap
the labeller argument to facet_wrap might come in handy, if you set it to "label_parsed". Example: d <- data.frame(x = rnorm(2), y = rnorm(2), ID = c(paste0('g~', 1:2), paste0('g~nabla~', 1:2) ) ) d |> ggplot(aes(x, y)) + geom_point() + facet_wrap(. ~ ID, labeller = label_parsed, nrow = 2 )
76389930
76391235
I have a Postgresql table column which's type is numeric(20,4). When I play with the data through rails console, ActiveRecord display the values in scientific notation. All I can think of adding attribute :column_name, :float to the related modal, but not sure if there will be any side effects because of it. So the question is; does anyone one if there is anything I can do to make it easier for reading the data on rails console and do not have any side effects on the app itself?
Easy to read scientific notation on Rails console
You can monkeypatch the BigDecimal class specifically for the console, in development.rb: console do class BigDecimal def inspect to_f end end end
76389774
76391268
I have one separate database for saving logs. I have middleware that saves every request together with its response: public function handle($request, Closure $next, $logType) { $response = $next($request); CreateLog::dispatch($request, $response, $logType)->onQueue(config('queue.queues.logging')); return $response; } Everything related to auth works after the request has been handled, because I call $response = $next($request) first. 'user_id' => auth()->id(), I'm trying to save user_id like that. Everything works perfectly locally, but on the prod server user_id is null. On the prod server the project runs using Docker.
Laravel auth()->id() not working in prod server
You need to double-check multiple things: What session driver are you using? What queue driver are you using? Does your middleware run after the session and auth middleware (based on the configuration in Kernel.php)? Based on your question, and since the problem happens only in production, your issue is probably communication: your queue handler can't access session data, which is usually the case if you separate them into different modules and connections to improve performance. To work around this you need to pass the information you need from one module to another, so in your case: public function handle($request, Closure $next, $logType) { $response = $next($request); $userID = auth()->id(); CreateLog::dispatch($request, $response, $logType, $userID)->onQueue(config('queue.queues.logging')); return $response; } and in your CreateLog job you store this userID and use it instead of depending on auth().
76394054
76394106
I'm trying to create a blind auction as part of my class, and I'm trying to get the max value. Here is the code, I tried to use the max() function but I get this error: File "Day_9_Blind_Auction.py", line 30, in <module>print(max(data))TypeError: '>' not supported between instances of 'dict' and 'dict' import os data = [ ] while True: name = input("What is your name?: ") bid = input("What is your bid: $") other_user = input("Are there any other bidders? Type Yes or No.\n") if other_user in ['yes', 'Yes']: os.system('cls') def new_user(name, bid): brandnew_user = { name: bid }, data.append(brandnew_user) new_user(name, bid) if other_user in ['no', 'No']: print(max(data)) break I tried to remove the max() and it prints the output totally fine. But then I need to get the max value, which is I don't know how to do it. this is the output if I removed the max() function: [({'George': '87'},), ({'Josh': '74'},), ({'Eduardo': '89'},)]
How can I get the maximum bid value in a blind auction?
You are trying to apply the max() function to a list of dictionaries. The error message is telling you that you haven't specified how to tell when one dictionary is "greater than" another. It is possible to add a definition for the '>' operator that would allow it to compare dictionaries, but it's probably better if you rethink the data structures you're using. A list of dictionaries, with only one key/value pair per dictionary, is kind of an awkward construction. Why not use one list for the names, another list for the bids (stored as numbers), then use max() and index() to find the largest bid and get the corresponding name from the other list?
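For illustration, here is a minimal sketch of that approach, reusing the prompts from the question. Note that the Python list method is index() (not indexOf), and the bid is converted to an int so max() compares numbers rather than strings:

names = []
bids = []

while True:
    names.append(input("What is your name?: "))
    bids.append(int(input("What is your bid: $")))  # store bids as numbers
    other_user = input("Are there any other bidders? Type Yes or No.\n")
    if other_user.lower() == "no":
        break

highest_bid = max(bids)                    # largest bid
winner = names[bids.index(highest_bid)]    # name in the same position
print(f"The highest bid is ${highest_bid} by {winner}")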
76389312
76391303
Currently, when i need to access my customer's PayPal merchant account (to manage IPNs or to update notifications preferences for examples), i login to the account using his credentials and his 2FA (asking him the SMS code). Is there a feature similar to Stripe Team which allows team members to access a PayPal merchant dashboard with their own credentials? Right here i am talking about the account dashboard, not the developer dashboard, but my question applies to both! All of this doesn't seem very efficient and secure, is there another way? Am i supposed to use my credentials only to develop the website (on developer.paypal.com) and never connect to the actual merchant's PayPal account? And ask the customer to do every manipulation on his account himself telling him the way?
How can I securely manage my client's PayPal merchant account as a web developer freelancer?
Typically you should only need to obtain API credentials to integrate with, which for current solutions are in the developer dashboard. Other account settings and operations in the account are not something web developers need access to, other than maybe some initial setup tweaks to the website preferences page. To the extent other settings matter, it is likely you are making wrong integration choices or using old legacy PayPal products (not good--for instance, you tagged the above with IPN -- what year is it? why would you be using IPN in 2023?) Anyway, it is possible to create and manage user access within a PayPal account. This is usually only used for in-house staff, not a vendor/contractor since you should not need it, but in theory it can be used for what you're asking.
76391544
76392695
why below code run in the development mode? it should only run in production mode Here is the running scripts in package.json "build-watch-dev": "vite build --mode development --watch", import { useStore } from '@/store'; import { register } from 'register-service-worker'; export function registerSW() { if (import.meta.env.PROD) { register(`${import.meta.env.BASE_URL}service-worker.js`, { ready() { console.log( 'App is being served from cache by a service worker.\n' + 'For more details, visit ) }, registered() { console.log('Service worker has been registered.') }, cached() { console.log('Content has been cached for offline use.') }, updatefound() { console.log('New content is downloading.') }, updated() { console.log('New content is available; please refresh.'); const $store = useStore(); $store.serviceWorkerUpdate = true; }, offline() { console.log('No internet connection found. App is running in offline mode.') }, error(error) { const $store = useStore(); if ($store.isOnline) { console.error('Error during service worker registration:', error) } } }) } }
import.meta.env.PROD runs in development mode
The --mode flag overrides the value of import.meta.env.MODE. import.meta.env.PROD is effectively a boolean result of process.env.NODE_ENV === 'production' You could try changing your conditional to something like: if (import.meta.env.MODE !== 'development') { Or you could look at setting NODE_ENV to development.
76394111
76394119
So im trying to make a typescript file for a network and there are three files involved. a _Node.ts, Edge.ts, and a Graph.ts These three files look like so: _Node.ts interface _Node { data: any; neighbours: number[]; } class _Node { constructor(data) { // this data is an arbitrary thing with which I can create any object this.data = { ...data }; // the neighbours bit is explicity set from the code outside this.neighbours = []; } } export { _Node }; Edge.ts interface Edge { start: number; end: number; data: any; } class Edge { constructor(start, end, data) { this.start = start; this.end = end; this.data = { ...data }; } } export { Edge }; Graph.ts import { _Node } from "./_Node"; import { Edge } from "./Edge"; interface Graph { nodes: _Node[]; edges: Edge[]; } class Graph { constructor(nodes, edges) { this.nodes = nodes; this.edges = edges; // execute Internal methods // this.printData(); } // test function printData() { const message = "This is a graph with " + this.nodes.size + // this gives an error "Property size doesn't exist on nodes" " nodes and " + this.edges.size + " edges"; console.log(message); } } The last line of node.size and edges .size give a VS code error of something like the property size doesn't exist on them despite me declaring them as arrays of the nodes and things - is there a way to fix it?
Typescript class made up of an array of other classes gives an error
You're looking for this.nodes.length and this.edges.length, as specified by https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/length. size is not a valid property or method on arrays.
76384590
76391320
Help me figure out what I'm doing wrong? I need to get the number of comments in 24 hours in the relation one-to-many table. class Comment(Base): __tablename__ = 'comments' id = Column(Integer, primary_key=True) task_id = Column(Integer(), ForeignKey('tasks.id'), nullable=False) post_link = Column(String, nullable=False) date = Column(DateTime, default=datetime.datetime.utcnow()) def __init__(self, task_id: int, post_link: str): super().__init__() self.task_id = task_id self.post_link = post_link def __repr__(self): return f'id - {self.id} | task_id - {self.task_id} | date - {self.date}' class Task(Base): __tablename__ = 'tasks' id = Column(Integer, primary_key=True) name = Column(String, nullable=False) comments = relationship('Comment', backref='tasks', lazy=True) def __init__(self, name: str): super().__init__() self.name = name def __repr__(self): return f'id - {self.id} | name - {self.name}' I don't know if the query works correctly or if it even outputs the number of records. Here's the request itself: async def get_comments_for_day(): start_day = datetime.utcnow() - timedelta(hours=23, minutes=50) async with get_async_session() as session: stmt = select(Comment.task_id, func.count(Comment.task_id).label('comments_found'))\ .where(Comment.date >= start_day).subquery() main_stmt = select(Task, stmt.c.comments_found).outerjoin(stmt, Task.id == stmt.c.task_id) results = await session.execute(main_stmt) return results.scalars().all() async def main(): tasks = await get_comments_for_day() for task, comments_found in tasks: print(task.name, comments_found) I get this error: for task, comments_found in tasks: TypeError: cannot unpack non-iterable Task object
Sqlalchemy count of records in the relation of one to many
async def get_comments_for_day(): start_day = datetime.utcnow() - timedelta(hours=24) async with get_async_session() as session: stmt = ( select(func.count(Comment.id), Task) .select_from(Task) .join(Comment, Task.id == Comment.task_id, isouter=True) .where(Comment.date >= start_day) .group_by(Task.id, Task.name) ) results = await session.execute(stmt) return results.all() If you're interested, I found a solution to the problem, which was to write return results.all() instead of return results.execute().all().
76389324
76391414
I have a tag column on the Active Admin index page like tag_column :result, interactive: true, sortable: false and in my model I have an enum for result: enum result: { winner: 0, first_runner_up: 1, second_runner_up: 2} On the index page, when I click on the drop-down, the selection options are displayed as winner first_runner_up second_runner_up How can we display them as Winner First Runner Up Second Runner Up Can someone please help... Thank you in advance
How to change the tag column selection option
I suggest you break the options out into a hash; then you will be free to validate against them later. WINNER_OPTIONS = { winner: { title: "Winner" }, second: { title: "First Runner Up" }, third: { title: "Second Runner Up" } } RESULT_ENUMS = WINNER_OPTIONS.keys.each_with_index.to_h enum result: RESULT_ENUMS tag_column :result, WINNER_OPTIONS.values.map { |e| e[:title] }.each_with_index.to_h
76390202
76391507
Let's assume I have a table users with a JSON column "foo". Values of that column look like this: "{ field1: ['bar', 'bar2', 'bar3'], field2: ['baz', 'baz2', 'baz3'] }" where field1 and field2 are optional, so column value may look like this: "{}" I want to move values 'bar2' and 'bar3' from field1 to field2 for all records in this table. Sample Output: "{ field1: ['bar'], field2: ['baz', 'baz2', 'baz3', 'bar2', 'bar3']}" More examples: "{ field1: ['bar3'], field2: ['baz', 'baz2', 'baz3'] }" should be transformed into "{ field1: [], field2: ['baz', 'baz2', 'baz3', 'bar3'] }" etc. Is there any way to do this? Unfortunately I have no idea how to approach this problem.
How can I move specific values from one field in a JSON column to another in PostgreSQL?
Using a series of subqueries with jsonb_array_elements select jsonb_build_object('field1', coalesce((select jsonb_agg(v.value) from jsonb_array_elements(u.foo -> 'field1') v where v.value#>>'{}' not in ('bar2', 'bar3')), '[]'::jsonb), 'field2', (u.foo -> 'field2') || (select jsonb_agg(v.value) from jsonb_array_elements(u.foo -> 'field1') v where v.value#>>'{}' in ('bar2', 'bar3'))) from users u where u.foo ? 'field1' and u.foo ? 'field2' See fiddle
76391464
76392770
As far as I understand: WorkerService is the new way to define a Windows Service (an app that runs as a service). By default, using the contextual menu on the project, the type of configuration file associated with a WorkerService is still an XML file: "App.config". See: How to: Add an application configuration file to a C# project According to the Microsoft documentation, I understand that I should use the <appSettings> section. appSettings is a dictionary of key/value pairs. But I can't find how to add a list of items as the value. Also, I would like to add my own object as the value if possible. Is there a way to add a List of MyConfigObject (List<MyConfigObject>) into a config file? If yes, how? Should I use another section, or should I use another type of config file (json, yaml) to have the simplest way to read/write settings?
How to add a list of specific object in a configuration file for a WorkerService
Yes, you can, Like this: appsettings.json { "Logging": { "LogLevel": { "Default": "Information", "Microsoft": "Warning", "Microsoft.Hosting.Lifetime": "Information" } }, "AllowedHosts": "*", "MyConfigObject": [ { "Prop1": "1", "Prop2": "2" }, { "Prop1": "1", "Prop2": "2" } ] } MyConfigObject Class public class MyConfigObject { public string Prop1 { get; set; } public string Prop2 { get; set; } } In Startup, register the Configuration with the type MyConfigObject. Like this: public void ConfigureServices(IServiceCollection services) { services.AddControllersWithViews(); services.Configure<List<MyConfigObject>>(Configuration.GetSection("MyConfigObject")); } Now you can use it in any service or controller, like this: public class HomeController : Controller { private readonly ILogger<HomeController> _logger; private readonly List<MyConfigObject> myConfig; public HomeController(IOptions<List<MyConfigObject>> myConfig, ILogger<HomeController> logger) { _logger = logger; this.myConfig = myConfig.Value; } public IActionResult Index() { return View(); } }
76391280
76392775
So I'm getting Uncaught Error: when using a middleware builder function, an array of middleware must be returned This is my code import { configureStore, compose, combineReducers, applyMiddleware } from "@reduxjs/toolkit"; import thunk from "redux-thunk"; const rootReducer = combineReducers({}); const middleware = applyMiddleware(thunk); const composeWithDevTools = window.__REDUX_DEVTOOLS_EXTENSION_COMPOSE__ || compose; const composedMiddleWare = composeWithDevTools(middleware) const store = configureStore({ reducer: rootReducer, middleware: composedMiddleWare, devTools: window.__REDUX_DEVTOOLS_EXTENSION__ && window.__REDUX_DEVTOOLS_EXTENSION__() }) export default store; I have no idea what's wrong and searching doesn't seem to be returning any useful result.
redux tookit when using a middleware builder function, an array of middleware must be returned
configureStore from redux-toolkit (RTK) works differently from the redux createStore function. The middleware property accepts an array of middleware to install and RTK handles applying them. See configureStore /** * An array of Redux middleware to install. If not supplied, defaults to * the set of middleware returned by `getDefaultMiddleware()`. */ middleware?: ((getDefaultMiddleware: CurriedGetDefaultMiddleware<S>) => M) | M Note that while applyMiddleware is re-exported from redux that You should not need to use this directly. The following would be the code you'd use: import { configureStore, combineReducers } from "@reduxjs/toolkit"; import thunk from "redux-thunk"; const rootReducer = combineReducers({}); const store = configureStore({ reducer: rootReducer, middleware: [thunk], // <-- array of middlewares to install/apply devTools: window.__REDUX_DEVTOOLS_EXTENSION__ && window.__REDUX_DEVTOOLS_EXTENSION__() }); export default store; RTK also ships with opinionated optimizations: The devTools are enabled by default, you'll only use this if you have custom needs. Thunk middleware is also already included with the default middleware, e.g. if no middleware property is provided, then getDefaultMiddleware is provided and returns the following middleware value, which includes the thunk middleware: const middleware = [ actionCreatorInvariant, immutableStateInvariant, thunk, // <-- included thunk middleware! serializableStateInvariant, ] Again, you'll really only need to specify the middleware property if you need to customize the immutability or serialization middlewares. Your store configuration can be reduced to the following: import { configureStore, combineReducers } from "@reduxjs/toolkit"; const rootReducer = combineReducers({}); const store = configureStore({ reducer: rootReducer, }); export default store; Both the thunk middleware and dev tools will be active and working.
76390228
76391647
I have a this code: import logging import json import os from azure.cosmos import CosmosClient import azure.functions as func url = os.environ["ACCOUNT_URI"] key = os.environ["ACCOUNT_KEY"] client1 = CosmosClient(url, key) client = CosmosClient.from_connection_string(os.environ["CosmosDBConnStr"]) database_name = "dmdb" container_name = "DataContract" database = client.get_database_client(database_name) container = database.get_container_client(container_name) logging.info(f"container: {url}") def main(myblob: func.InputStream, doc: func.Out[func.Document]): logging.info(f"Python blob trigger function processed blob \n" f"Name: {myblob.name}\n") #reading file from blob contract_data=myblob.read() try: logging.info(f"contract data: {contract_data}") contract_json = json.loads(contract_data) version = contract_json.get("version") name = contract_json.get("name") title = contract_json.get("title") logging.info(f"contract json: {contract_json}") query = "SELECT c.version,c.name,c.title,c.Theme,c.description,c['data owner'],c.confidentiality,c.table1 FROM c " items = list(container.query_items( query=query, enable_cross_partition_query=True )) logging.info(f"item: {items[0]}") for item in items: if item["name"] == name and item["version"] == version: if item["title"] == title: logging.info(f"Skipping, item already exists: {item}") return # Skip saving the document container.upsert_item(body=contract_json,pre_trigger_include = None,post_trigger_include= None) return doc.set(func.Document.from_json(contract_data)) except Exception as e: logging.info(f"Error: {e}") I added one document in my cosmosdb, I would like to replace same document has different title. but i could not do that i am getting Code: BadRequest Message: Message: {"Errors":["One of the specified inputs is invalid"]} this is an example of item from my query: {'version': 'V1', 'name': 'demo_contract2', 'title': 'title122', 'Theme': 'Theme12', 'description': 'test data contract management2', 'data owner': '[email protected]', 'confidentiality': 'open', 'table1': {'description:': 'testen', 'attribute_1': {'type': 'int', 'description:': 'testen', 'identifiability': 'identifiable'}}} this is an example of contract_json from my file: {'version': 'V1', 'name': 'demo_contrac1', 'title': 'title1', 'Theme': 'Theme1', 'description': 'test data contract management2', 'data owner': '[email protected]', 'confidentiality': 'open', 'table1': {'description:': 'testen', 'attribute_1': {'type': 'int', 'description:': 'testen', 'identifiability': 'identifiable'}}} they are matching. How should I regulate my upsert_item function in my code?
upsert_item into cosmosdb via python
You are missing the id in the Upsert content. A document's identity is defined by the combination of id and Partition Key value, if you don't specify the id then Upsert will always behave as a Create operation. Because you are getting the items through a query, just add the id: query = "SELECT c.id, c.version,c.name,c.title,c.Theme,c.description,c['data owner'],c.confidentiality,c.table1 FROM c " You can now get the id from item["id"] to use if needed. The body of the Upsert operation should contain id, and if that matches with an existing document, then the document will get updated.
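Not part of the answer above, but as an illustration, the loop from the question could be adapted roughly like this (field names come from the question; generating a fresh id for documents that don't exist yet is an assumption about the desired behaviour, and the partition key value in contract_json must still match the existing document for the upsert to replace it):

import uuid

existing = next((item for item in items
                 if item["name"] == name and item["version"] == version), None)

if existing and existing["title"] == title:
    logging.info(f"Skipping, item already exists: {existing}")
    return  # nothing to change

# Reuse the stored id so upsert_item updates the existing document;
# otherwise give the new document an id so Cosmos DB accepts the body.
contract_json["id"] = existing["id"] if existing else str(uuid.uuid4())
container.upsert_item(body=contract_json)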
76381180
76394128
I get two different intercept values from using the statsmodels regression fit and the numpy polyfit. The model is a simple linear regression with a single variable. From the statsmodels regression I use: results1 = smf.ols('np.log(NON_UND) ~ (np.log(Food_consumption))', data=Data2).fit() Where I recieve the following results: coef std err t P>|t| [0.025 0.975] -------------------------------------------------------------------------------------------- Intercept 5.4433 0.270 20.154 0.000 4.911 5.976 np.log(Food_consumption) 1.1128 0.026 42.922 0.000 1.062 1.164 When plotting the data and adding a trendline using numpy polyfit, I recieve a different intercept value: x = np.array((np.log(Data2.Food_consumption))) y = np.array((np.log(Data2.NON_UND)*100)) z = np.polyfit(x, y, 1) array([ 1.11278898, 10.04846693]) How come I get two different values for the intercept? Thanks in advance!
Different intercept values for linear regression using statsmodels and numpy polyfit
This is because you are using different linear models in the first and second regressions. In the first regression, you take logs of both the dependent and independent variables, while in the second regression, you are not, and additionally, you are multiplying y by 100. In order to get the same results as the first regression in the second specification, you need to make sure the regression model is exactly the same as the first one. I suggest you do this: x = np.log(np.array(((Data2.Food_consumption)))) y = np.log(np.array(((Data2.NON_UND)))) z = np.polyfit(x, y, 1) And then the output you get with the second function should be the same as the one you get in the first one.
76390103
76391730
I am new to MongoDB and trying to get multiple documents using ObjectId Below I have mentioned the demo data format. db={ "store": [ { "_id": ObjectId("63da2f1f7662144569f78ddd"), "name": "bat", "price": 56, }, { "id": ObjectId("63da2f1f7662144569f78ddc"), "name": "ball", "price": 58, }, { "id": ObjectId("63da2f1f7662144569f78ddb"), "name": "cap", "price": 100, }, { "id": ObjectId("63da2f1f7662144569f78dda"), "name": "red", "price": 50, }, ]} and my query is this db.store.aggregate([ { $match: { id: { $in: [ ObjectId("63da2f1f7662144569f78ddd"), ObjectId("63da2f1f7662144569f78ddb"), ObjectId("63da2f1f7662144569f78dda") ] } } }, { $group: { _id: null, totalPrice: { $sum: "$price" } } }, { $project: { _id: 0, } } ]) and output is [ { "totalPrice": 150 } ] but I'm getting objectid like this { "ids": ["63da2f1f7662144569f78ddd","63da2f1f7662144569f78ddb","63da2f1f7662144569f78dda"] } if I pass this array it's not getting any documents. How can I pass this array in place of objectids????
I want get documents using ObjectId, but it's not getting that document
For your case, you can simply wrap the payload from client in a $map to convert them into ObjectIds with $toObjectId db.store.aggregate([ { $match: { $expr: { "$in": [ "$id", { "$map": { // your payload from client here "input": [ "63da2f1f7662144569f78ddd", "63da2f1f7662144569f78ddb", "63da2f1f7662144569f78dda" ], "as": "id", "in": { "$toObjectId": "$$id" } } } ] } } }, { $group: { _id: null, totalPrice: { $sum: "$price" } } }, { $project: { _id: 0, } } ]) Mongo Playground
76392245
76392789
I have an issue I cannot seem to fix. I have a function that takes a file and converts it to an array using the first row as the keys: function parseCSVToArray($filePath) { $csvData = []; if (($handle = fopen($filePath, "r")) !== false) { $keys = fgetcsv($handle); // Get the first row as keys while (($data = fgetcsv($handle)) !== false) { $rowData = array(); foreach ($keys as $index => $key) { $rowData[$key] = $data[$index] ?? ''; // Assign each value to its corresponding key } $csvData[] = $rowData; } fclose($handle); } return $csvData; } Everything works as normal and creates the array as expected: $getTheRecords = parseCSVToArray('Data/records.csv'); // File contents record,subdomain,hub_knows,domain,type,value,action,rationale mail.sub.domain.com.,sub.domain.com,Hub knows about this,domain.com,CNAME,dispatch.domain.com.,DELETE,Dispatch links can go Array ( [record] => mail.sub.domain.com [subdomain] => sub.domain.com [hub_knows] => Hub knows about this [domain] => domain.com [type] => CNAME [value] => dispatch.domain.com. [action] => DELETE [rationale] => Dispatch links can go ) Now the issue is when I go to use or print the data. When I loop through the array using: foreach($getTheRecords as $element) { echo "<div style='margin-bottom: 20px'>"; echo($element['subdomain']); // This will print the subdomain as expected. echo "</div>"; } If I change 'subdomain' to 'record' it prints nothing. However, every other 'key' prints the results just fine. Thank you in advance for your help! I have tried changing the name of the first key to 'mainrecord' or anything and it still will not print out. Iside loop var_dmup(): array(8) { ["record"]=> string(31) "mail.0lemonade.starchapter.com." ["subdomain"]=> string(25) "0lemonade.starchapter.com" ["hub_knows"]=> string(20) "Hub knows about this" ["domain"]=> string(17) "scdomaintest3.com" ["type"]=> string(5) "CNAME" ["value"]=> string(22) "dispatch.scnetops.com." ["action"]=> string(6) "DELETE" ["rationale"]=> string(21) "Dispatch links can go" }
PHP Issue with array not printing the first element of the array
Your file likely has a UTF8 Byte Order Mark [BOM] at the beginning which is throwing off the first key. While a BOM isn't necessary at all for UTF8, some programs still add it as a "hint" that the file is UTF8. If you var_dump($keys[0], bin2hex($keys[0]) you'll likely see that the first key's is longer than what is visible, and the hex output will show it prefixed with EFBBBF which is the BOM. Try replacing: $keys = fgetcsv($handle); With: $keys = str_getcsv(preg_replace("/^\xef\xbb\xbf/", "", fgets($handle))); Which will trim off the BOM, if it exists. Edit: A bit more broadly-applicable code. function stream_skip_bom($stream_handle) { if( ! stream_get_meta_data($stream_handle)['seekable'] ) { throw new \Exception('Specified stream is not seekable, and cannot be rewound.'); } $pos = ftell($stream_handle); $test = fread($stream_handle, 3); if( $test !== "\xef\xbb\xbf" ) { fseek($stream_handle, $pos); } } so after opening $handle you would call simply: stream_skip_bom($handle);
76392254
76392834
Whenever I subscribe to a collection of documents, I'm able to extract the changes to the documents, given that the listener returns DocumentChange. This way I can understand whether a given doc was created, modified or deleted. How to get the DocumentChange when subscribing to a single document in the collection? The issue appears to be that this time the listener returns DocumentSnapshot instead of DocumentChange.
Firestore subscribed document change type
The DocumentSnapshot contains everything you need to know about that one document. Either it exists() with data, or it does not. You can check this state with each new snapshot for that one document, and react to that any way you like. Since you are not performing a query with variable set of results, there is no need for more information.
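The question doesn't say which SDK is being used, but as one illustration, with the Python client library (google-cloud-firestore) the same idea looks roughly like this; the collection and document names are made up:

from google.cloud import firestore

db = firestore.Client()
doc_ref = db.collection("items").document("item-id")  # hypothetical document

def on_snapshot(doc_snapshots, changes, read_time):
    for snapshot in doc_snapshots:
        if snapshot.exists:
            # the document was created or modified
            print("current data:", snapshot.to_dict())
        else:
            # the document no longer exists
            print("document deleted")

watch = doc_ref.on_snapshot(on_snapshot)  # keep a reference; call watch.unsubscribe() to stop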
76394114
76394144
These are the ttk.radiobuttons I am using: def radioButtonGen(self): self.var = tk.IntVar() self.arithButton = ttk.Radiobutton(self, text = "Arithmetic Series", variable = self.var, value = 1, command = lambda: self.buttons.seqChoice("arithmetic")) self.geomButton = ttk.Radiobutton(self, text = "Geometric Series", variable = self.var, value = 2, command = lambda: self.buttons.seqChoice("geometric")) I have attempted to deselect them with deselect() but it only works for tk.radiobuttons, which don't look as good and the method still leaves them with a dash. I want to deselect them so they are not highlighted
Is it possible to deselect a ttk.Radiobutton?
Use the associated tkinter variable to deselect them: self.var.set(0)
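For example, a small helper on the class from the question might look like this (the method name clearSelection is made up; any value that neither radiobutton uses, such as 0, leaves both unselected):

def clearSelection(self):
    # 0 matches neither radiobutton value (1 or 2), so both are deselected
    self.var.set(0)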
76392055
76392839
library(stringr) string <- string <- c("pat1 hello333\n pat2 ok i mean pat1 again pat2 some more text pat1") I want to match all strings that start with pat1 and end with pat2. > str_extract_all( string, regex( "pat1.+pat2", dotall=TRUE ) ) [[1]] [1] "pat1 hello333\n pat2 ok i mean pat1 again pat2" This gives me 1 string that starts with pat1 and ends with pat2. However, my desired output is something like: > output [1] "pat1 hello333\n pat2" [2] "pat1 again pat2"
How to match multiple occurrences of strings given a start and end pattern in R?
Change .+ to .+? to have non greedy match. library(stringr) str_extract_all(string, regex("pat1.+?pat2", dotall=TRUE))[[1]] #[1] "pat1 hello333\n pat2" "pat1 again pat2" You can use gregexpr and regmatches with pat1.*?pat2 or in case they should be on a word boundary with \\bpat1\\b.*?\\bpat2\\b. Where .*? matches everything but minimal. regmatches(string, gregexpr("pat1.*?pat2", string))[[1]] #[1] "pat1 hello333\n pat2" "pat1 again pat2" regmatches(string, gregexpr("\\bpat1\\b.*?\\bpat2\\b", string))[[1]] #[1] "pat1 hello333\n pat2" "pat1 again pat2"
76390407
76391800
I'm working on a GTKmm application(a simple text editor, as an exercise), in which I have a notebook to which I want to add a tab. The notebook shows, but the newly added tab doesn't. I'm doing it in the following way: void MainWindow::AddTabToNotebook() { Gtk::Box box; notebook->append_page(box); notebook->show_all(); } MainWindow is a class that inherits Gtk::Window and contains a pointer to Gtk::Notebook which is loaded from a Glade file using Gtk::Builder. Whenever I click the button that calls the function I get the following message in the terminal: Gtk-CRITICAL **: gtk_notebook_get_tab_label: assertion 'list != NULL' failed. Any help is appreciated. MainWindow.h: #ifndef MAIN_WINDOW_H #define MAIN_WINDOW_H #include <gtkmm.h> class MainWindow : public Gtk::Window { protected: Gtk::Button* buttonOpenFile; Gtk::Button* buttonSave; Gtk::Button* buttonSaveAs; Gtk::Button *dialogButonOpen; Gtk::TextView *text; Gtk::MenuButton *buttonMenu; Gtk::Notebook *notebook; Gtk::FileChooserDialog *openFileDialog; Glib::RefPtr<Gtk::Builder> builder; void AddTabToNotebook(); void OnButtonOpenFileClick(); void OnButtonSaveClick(); void OnButtonSaveAsClick(); void OnFileChosen(); void OnDialogButtonOpenClick(); public: MainWindow(BaseObjectType *cobject, const Glib::RefPtr<Gtk::Builder> &refGlade); ~MainWindow(); }; #endif MainWindow.cpp: #include"MainWindow.h" #include<iostream> #include<gtkmm.h> MainWindow::MainWindow(BaseObjectType *cobject, const Glib::RefPtr<Gtk::Builder> &refGlade) :Gtk::Window(cobject), builder(refGlade) { builder->get_widget("buttonOpenFile", buttonOpenFile); builder->get_widget("buttonSave", buttonSave); builder->get_widget("buttonSaveAs", buttonSaveAs); builder->get_widget("buttonMenu", buttonMenu); builder->get_widget("openFileDialog", openFileDialog); builder->get_widget("dialogButtonOpen", dialogButonOpen); builder->get_widget("notebook", notebook); buttonOpenFile->signal_clicked().connect(sigc::mem_fun(*this, &MainWindow::OnButtonOpenFileClick)); dialogButonOpen->signal_clicked().connect(sigc::mem_fun(*this, &MainWindow::OnDialogButtonOpenClick)); buttonSave->signal_clicked().connect(sigc::mem_fun(*this, &MainWindow::AddTabToNotebook)); show_all_children(); } MainWindow::~MainWindow() {} void MainWindow::OnButtonOpenFileClick() { if(openFileDialog) { openFileDialog->show(); } } void MainWindow::OnDialogButtonOpenClick() { auto file=openFileDialog->get_file(); std::cout<<file->get_path(); if(openFileDialog) { openFileDialog->close(); } } void MainWindow::AddTabToNotebook() { Gtk::Box box; notebook->append_page(box); notebook->show_all(); } The glade file: <?xml version="1.0" encoding="UTF-8"?> <!-- Generated with glade 3.38.2 --> <interface> <requires lib="gtk+" version="3.24"/> <object class="GtkWindow" id="MainWindow"> <property name="name">MainWindow</property> <property name="width-request">800</property> <property name="height-request">600</property> <property name="can-focus">False</property> <child> <object class="GtkViewport"> <property name="visible">True</property> <property name="can-focus">False</property> <child> <object class="GtkBox"> <property name="visible">True</property> <property name="can-focus">False</property> <property name="orientation">vertical</property> <child> <object class="GtkBox"> <property name="height-request">30</property> <property name="visible">True</property> <property name="can-focus">False</property> <property name="valign">start</property> <property name="hexpand">True</property> <child> <object class="GtkButton" 
id="buttonOpenFile"> <property name="label" translatable="yes">Open file</property> <property name="name">buttonOpenFile</property> <property name="visible">True</property> <property name="can-focus">True</property> <property name="receives-default">True</property> <property name="tooltip-text" translatable="yes">Open a file</property> <signal name="clicked" handler="OnButtonOpenFileClicked" swapped="no"/> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkButton" id="buttonSave"> <property name="label" translatable="yes">Save changes</property> <property name="name">buttonSave</property> <property name="visible">True</property> <property name="can-focus">True</property> <property name="receives-default">True</property> <property name="tooltip-text" translatable="yes">Save changes to the current document</property> <signal name="clicked" handler="OnButtonSaveChangesClicked" swapped="no"/> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> <child> <object class="GtkButton" id="buttonSaveAs"> <property name="label" translatable="yes">Save As...</property> <property name="name">buttonSaveAs</property> <property name="visible">True</property> <property name="can-focus">True</property> <property name="receives-default">True</property> <signal name="clicked" handler="OnButtonSaveAsClicked" swapped="no"/> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">2</property> </packing> </child> <child> <object class="GtkMenuButton" id="buttonMenu"> <property name="name">buttonMenu</property> <property name="visible">True</property> <property name="can-focus">True</property> <property name="focus-on-click">False</property> <property name="receives-default">True</property> <property name="use-popover">False</property> <signal name="toggled" handler="OnButtonMenuToggled" swapped="no"/> <child> <placeholder/> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="pack-type">end</property> <property name="position">3</property> </packing> </child> <child> <placeholder/> </child> <child> <placeholder/> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkViewport"> <property name="height-request">780</property> <property name="visible">True</property> <property name="can-focus">False</property> <child> <object class="GtkScrolledWindow"> <property name="visible">True</property> <property name="can-focus">True</property> <property name="shadow-type">in</property> <child> <object class="GtkViewport"> <property name="visible">True</property> <property name="can-focus">False</property> <child> <object class="GtkNotebook" id="notebook"> <property name="name">notebook</property> <property name="width-request">200</property> <property name="height-request">200</property> <property name="visible">True</property> <property name="can-focus">True</property> <child> <object class="GtkTextView"> <property name="visible">True</property> <property name="can-focus">True</property> </object> </child> <child type="tab"> <object class="GtkLabel"> <property name="visible">True</property> <property 
name="can-focus">False</property> <property name="label" translatable="yes">page 1</property> </object> <packing> <property name="tab-fill">False</property> </packing> </child> <child> <placeholder/> </child> <child type="tab"> <placeholder/> </child> </object> </child> </object> </child> </object> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> <child> <placeholder/> </child> </object> </child> </object> </child> </object> <object class="GtkFileChooserDialog" id="openFileDialog"> <property name="name">openFileDialog</property> <property name="width-request">800</property> <property name="height-request">600</property> <property name="can-focus">False</property> <property name="type-hint">dialog</property> <child internal-child="vbox"> <object class="GtkBox"> <property name="width-request">800</property> <property name="height-request">600</property> <property name="can-focus">False</property> <property name="orientation">vertical</property> <property name="spacing">2</property> <child internal-child="action_area"> <object class="GtkButtonBox"> <property name="can-focus">False</property> <property name="layout-style">end</property> <child> <object class="GtkButton" id="dialogButtonOpen"> <property name="label" translatable="yes">Open</property> <property name="name">dialogButtonOpen</property> <property name="visible">True</property> <property name="can-focus">True</property> <property name="receives-default">True</property> </object> <packing> <property name="expand">True</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">False</property> <property name="position">0</property> </packing> </child> </object> </child> </object> </interface> Whatever I try to append to the notebook, be it a Gtk::Box or Gtk:TextView, the new tab doesn't show. Note: This is a work in progress, so the save button's clicked signal is responsible for adding a new tab(blank document)to the notebook.
How can I programmatically add a tab to a GTKmm notebook in C++?
The problem is that: The Gtk::Notebook does not take ownership of the widget you are adding. The widget you are adding is local to your callback (and hence destroyed when leaving it). I was able to make your example work by adding a widget (here a Gtk::Label) attribute to your window (so it outlives the callback): class MainWindow : public Gtk::Window { protected: Gtk::Button* buttonOpenFile; Gtk::Button* buttonSave; Gtk::Button* buttonSaveAs; Gtk::Label label; // <-- See here Gtk::Button *dialogButonOpen; ... and then show()ing it explicitly in the callback: void MainWindow::AddTabToNotebook() { notebook->append_page(label); label.set_text("test"); label.show(); } on Ubuntu and Gtkmm 3.24.20. You can achieve the same thing by using the Glade file, I leave this part to you.
76385362
76391913
I'm currently teaching myself how to use a headless CMS (CrafterCMS) with Next.js. I have the following simple content type in CrafterCMS studio (just a title and a text): And the respective code: export default async function TestPage() { const model = await getModel('/site/website/test/index.xml'); return ( <TestComponent model={model} /> ); } export default function TestComponent(props) { const { isAuthoring } = useCrafterAppContext(); const { model } = props; return ( <ExperienceBuilder path={model.craftercms.path} isAuthoring={isAuthoring}> <Model model={model} > <div className="space-y-4"> <RenderField model={model} fieldId="title_s" className="text-xl text-gray-800" /> <RenderField model={model} fieldId="text_t" className="text-xl text-gray-800" /> </div> </Model> </ExperienceBuilder> ); } Is it possible to use Studio functionality to drag and change the positions of my two fields? For example I want to drag the text to be the first element. It seems that i only have the option to change the content, dragging is not available:
Dragging fields in CrafterCMS Studio
On CrafterCMS, you can drag & drop mainly 3 things: Repeating group items Component item references from an Item Selector control with a "Components" datasource Media items into a media control (image or video) The simplest way of achieving what you describe, would be to change your model to wrap with a repeat group the fields you want to reorder via drag & drop (i.e. create a repeat group and add the "title" input control inside of it; only 1, discard the other). Once you've done that, you'll need to update your TestComponent code to use <RenderRepeat ... /> and you should be able to reorder via drag & drop, via the contextual menu up/down buttons, or via the content form. The rendering of the repeat with your title field, would roughly look something like this: <RenderRepeat model={model} fieldId="yourRepeatGroupId_o" componentProps={{ className: "space-y-4" }} renderItem={(item, index) => ( <RenderField model={model} fieldId="yourRepeatGroupId_o.title_s" index={index} className="text-xl text-gray-800" /> )} /> As I mentioned, you could achieve it via components (Item selector control, Components datasource), but the repeat group is simplest for this case; specially to get started and learn.
76394140
76394152
import random lst1 = [] lst2 = [] n = int(input('Step a: How many numbers in each list? ')) for i in range(0, n): number1 = random.randint(1, 15) number2 = random.randint(1, 15) lst1.append(number1) lst2.append(number2) count = 0 if lst1[i] > lst2[i]: count += 1 print(f'{number1} : First List') else: print(f'{number2} : Second List') print(f'Step b: First List{lst1}') print(f'Step c: Second List{lst2}') print('Step d:') print('Larger number in each comparison:') Program is supposed to make two lists and compare each element. Current program prints like this: Step a: How many numbers in each list? 5 12 : First List 14 : First List 14 : First List 15 : First List 5 : Second List Step b: First List\[12, 14, 14, 15, 4\] Step c: Second List\[5, 2, 8, 13, 5\] Step d: Larger number in each comparison: I'm trying to get it to print like this: Step a: How many numbers in each list? 5 Step b: First List\[12, 14, 14, 15, 4\] Step c: Second List\[5, 2, 8, 13, 5\] Step d: Larger number in each comparison: 12 : First List 14 : First List 14 : First List 15 : First List 5 : Second List Any suggestions? I've tried to make functions out of them and kept getting errors. Very new to this and not sure what I should do.
Python: Comparing two lists using a for-loop, how do I print the results correctly?
Try creating a third list to store the comparison results and then printing the contents only at the end with another loop: import random lst1 = [] lst2 = [] comparison_results = [] n = int(input('Step a: How many numbers in each list? ')) for i in range(n): number1 = random.randint(1, 15) number2 = random.randint(1, 15) lst1.append(number1) lst2.append(number2) if lst1[i] > lst2[i]: comparison_results.append(f'{number1} : First List') elif lst1[i] < lst2[i]: comparison_results.append(f'{number2} : Second List') else: comparison_results.append(f'{number2} : In Both Lists') print(f'Step b: First List {lst1}') print(f'Step c: Second List {lst2}') print('Step d: Larger number in each comparison:') for result in comparison_results: print(result) Example Output: Step a: How many numbers in each list? 5 Step b: First List [5, 11, 9, 3, 4] Step c: Second List [15, 11, 9, 13, 15] Step d: Larger number in each comparison: 15 : Second List 11 : In Both Lists 9 : In Both Lists 13 : Second List 15 : Second List
76391153
76392845
Been experimenting with polars and of the key features that peak my interest is the larger than RAM operations. I downloaded some files to play with from HERE. On the website: First line in each file is header; 1 line corresponds to 1 record.. WARNING total download is quite large (~1.3GB)! This experiment was done on AWS server (t2.medium, 2cpu, 4GB) wget https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Shoes_v1_00.tsv.gz \ https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Office_Products_v1_00.tsv.gz \ https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Software_v1_00.tsv.gz \ https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv .gz \ https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Watches_v1_00.tsv.gz gunzip * Here are the results from wc -l drwxrwxr-x 3 ubuntu ubuntu 4096 Jun 2 12:44 ../ -rw-rw-r-- 1 ubuntu ubuntu 1243069057 Nov 25 2017 amazon_reviews_us_Office_Products_v1_00.tsv -rw-rw-r-- 1 ubuntu ubuntu 44891575 Nov 25 2017 amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv -rw-rw-r-- 1 ubuntu ubuntu 1570176560 Nov 25 2017 amazon_reviews_us_Shoes_v1_00.tsv -rw-rw-r-- 1 ubuntu ubuntu 249565371 Nov 25 2017 amazon_reviews_us_Software_v1_00.tsv -rw-rw-r-- 1 ubuntu ubuntu 412542975 Nov 25 2017 amazon_reviews_us_Watches_v1_00.tsv $ find . -type f -exec cat {} + | wc -l 8398139 $ find . -name '*.tsv' | xargs wc -l 2642435 ./amazon_reviews_us_Office_Products_v1_00.tsv 341932 ./amazon_reviews_us_Software_v1_00.tsv 85982 ./amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv 4366917 ./amazon_reviews_us_Shoes_v1_00.tsv 960873 ./amazon_reviews_us_Watches_v1_00.tsv 8398139 total Now, if I count the rows using polars using our new fancy lazy function: import polars as pl csvfile = "~/data/amazon/*.tsv" ( pl.scan_csv(csvfile, separator = '\t') .select( pl.count() ) .collect() ) shape: (1, 1) ┌─────────┐ │ count │ │ --- │ │ u32 │ ╞═════════╡ │ 4186305 │ └─────────┘ Wow, thats a BIG difference between wc -l and polars. Thats weird... maybe its a data issue. Lets only focus on the column of interest: csvfile = "~/data/amazon/*.tsv" ( ... pl.scan_csv(csvfile, separator = '\t') ... .select( ... pl.col("product_category").count() ... ) ... .collect() ... ) shape: (1, 1) ┌──────────────────┐ │ product_category │ │ --- │ │ u32 │ ╞══════════════════╡ │ 7126095 │ └──────────────────┘ And with .collect(streaming = True): shape: (1, 1) ┌──────────────────┐ │ product_category │ │ --- │ │ u32 │ ╞══════════════════╡ │ 7125569 │ └──────────────────┘ Ok, still a difference of about 1 million? Lets do it bottom up: csvfile = "~/data/amazon/*.tsv" ( pl.scan_csv(csvfile, separator = '\t') .groupby(["product_category"]) .agg(pl.col("product_category").count().alias("counts")) .collect(streaming = True) .filter(pl.col('counts') > 100) .sort(pl.col("counts"), descending = True) .select( pl.col('counts').sum() ) ) shape: (1, 1) ┌─────────┐ │ counts │ │ --- │ │ u32 │ ╞═════════╡ │ 7125553 │ └─────────┘ Close, albeit that its once again a different count... 
Some more checks using R: library(vroom) library(purrr) library(glue) library(logger) amazon <- list.files("~/data/amazon/", full.names = TRUE) f <- function(file){ df <- vroom(file, col_select = 'product_category', show_col_types=FALSE ) log_info(glue("File [{basename(file)}] has [{nrow(df)}] rows")) } walk(amazon, f) INFO [2023-06-02 14:23:40] File [amazon_reviews_us_Office_Products_v1_00.tsv] has [2633651] rows INFO [2023-06-02 14:23:41] File [amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv] has [85898] rows INFO [2023-06-02 14:24:06] File [amazon_reviews_us_Shoes_v1_00.tsv] has [4353998] rows INFO [2023-06-02 14:24:30] File [amazon_reviews_us_Software_v1_00.tsv] has [331152] rows INFO [2023-06-02 14:24:37] File [amazon_reviews_us_Watches_v1_00.tsv] has [943763] rows Total: 8348462 Ok. Screw it. Basically a random number generating exercise and nothing is real. Surely if its a data hygiene issue the error should be constant? Any idea why there might be such a large discrepancy?
Python Polars: Lazy Frame Row Count not equal wc -l
It's usually helpful to declare the size of downloads in cases like this. For any readers, the total size is 1.3 GB The smallest file is https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv.gz at 17.6 MB I tried pandas to debug this, it cannot read any of these files: pd.read_csv('amazon-reviews/amazon_reviews_us_Personal_Care_Appliances_v1_00.tsv', sep='\t') ParserError: Error tokenizing data. C error: Expected 15 fields in line 1598, saw 22 Line 1598: US 3878437 R3BH4UXFRP6F8L B00J7G8EL0 381088677 GUM Expanding Floss - 30 m - 2 pk Personal_Care_Appliances 4 0 0 N Y " like the REACH woven that's no longer available--THAT was the wish it was a bit &#34;fluffier,&#34; like the REACH woven that's no longer available--THAT was the best 2015-08-06 The issue is the single " character, you need to disable the default quoting behaviour. With that change I get a total count of 8398134 each time. polars (pl.scan_csv('amazon-reviews/*.tsv', separator='\t', quote_char=None) .select(pl.count()) .collect() ) CPU times: user 3.65 s, sys: 2.02 s, total: 5.67 s Wall time: 2.48 s shape: (1, 1) ┌─────────┐ │ count │ │ --- │ │ u32 │ ╞═════════╡ │ 8398134 │ └─────────┘ pandas sum( len(pd.read_csv(file, sep='\t', quoting=3).index) for file in files ) CPU times: user 57.6 s, sys: 9.78 s, total: 1min 7s Wall time: 1min 7s 8398134 duckdb duckdb.sql(""" from read_csv_auto('amazon-reviews/*.tsv', sep='\t', quote='') select count(*) """).pl() CPU times: user 12.4 s, sys: 2.32 s, total: 14.7 s Wall time: 5.05 s shape: (1, 1) ┌──────────────┐ │ count_star() │ │ --- │ │ i64 │ ╞══════════════╡ │ 8398134 │ └──────────────┘ pyarrow parse_options = pyarrow.csv.ParseOptions(delimiter='\t', quote_char=False) sum( pyarrow.csv.read_csv(file, parse_options=parse_options).num_rows for file in files ) CPU times: user 12.9 s, sys: 6.46 s, total: 19.4 s Wall time: 6.65 s 8398134
76389541
76391919
I'm developing Google Apps Script locally. I used the clasp push --watch command to push updated code to the Apps Script project, as per the documentation: https://www.npmjs.com/package/@google/clasp#push The problem is that when I update the code of any one file, it uploads the whole project. I have attached a screenshot of the file structure and the VS Code terminal. You can see there that I had only updated the code of home.html, but it still uploaded the whole project. Because of this, pushes are time-consuming; it takes a long time to upload the project. I have also attached a screenshot of the .clasp.json file. Is there any way to upload only the updated files?
How to upload only updated files in Google Apps Script using clasp?
It seems this behavior is expected based on the documentation: Warning: Google scripts APIs do not currently support atomic nor per file operations. Thus the push command always replaces the whole content of the online project with the files being pushed. clasp push replaces code that is on script.google.com and clasp pull replaces all files locally. For this reason, follow these guidelines: Do not concurrently edit code locally and on script.google.com. Use a version control system, like git. A new feature would be required from the Google Apps Script API to accomplish what you're looking for, it seems there is already a Feature Request here, you can vote for it, add your comments and look for any future updates. References Pulling & Pushing Files Feature request: clasp push -w - add option to push only changed files Problems when using CLASP to push client-side files from a local editor
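One practical mitigation — not a per-file push, but it keeps each push small: clasp honours a .claspignore file in the project root and only uploads files that survive its patterns. A sketch (the glob patterns below are examples; adjust them to your own src layout):

```text
# .claspignore
**/**
!appsscript.json
!src/**/*.js
!src/**/*.html
```

With something like this in place, clasp push still replaces the whole online project, but only with the files you actually want deployed, which usually shortens the upload noticeably.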
76389283
76391945
private void ProcessedImage() { try { if (FileUpload1.HasFile) { int length = 192; int width = 192; using (Bitmap sourceImage = new Bitmap(FileUpload1.PostedFile.InputStream)) { using (Bitmap resizedImage = new Bitmap(length, width)) { using (Graphics graphics = Graphics.FromImage(resizedImage)) { graphics.InterpolationMode = InterpolationMode.HighQualityBicubic; graphics.SmoothingMode = SmoothingMode.HighQuality; graphics.PixelOffsetMode = PixelOffsetMode.HighQuality; graphics.DrawImage(sourceImage, 0, 0, length, width); } string resizedImagePath = Server.MapPath("~/Images/Image.png"); resizedImage.Save(resizedImagePath, ImageFormat.Png); ImgPhoto.ImageUrl = "~/Images/Image.png"; } } } } catch (Exception ex) { string errorMessage = ("An error occurred " + ex.Message); } } public void Save() { try { byte[] imageData; using (MemoryStream ms = new MemoryStream()) { using (Bitmap bitmap = new Bitmap(Server.MapPath("~/Images/finalImage.png"))) { bitmap.Save(ms, ImageFormat.Png); imageData = ms.ToArray(); } } using (SqlConnection con = new SqlConnection("Data Source=127.0.0.1;Initial Catalog=Karthik;User ID=admin;Password=admin")) { con.Open(); SqlCommand cmd = new SqlCommand("INSERT INTO image_tbl (ImageID,image_data) VALUES (@ImageID,@image_data)", con); cmd.Parameters.AddWithValue("@ImageID", ImageID.Text.Trim()); cmd.Parameters.AddWithValue("@image_data", imageData); cmd.ExecuteNonQuery(); Response.Write("<script>alert('Saved Succefully')</script>"); } } catch (Exception ex) { string errorMessage = "An error occurred: " + ex.Message; } } This is my code. I resized the image and saved it in the database. Now I want to take an input from the user for Image ID and retrieve the image corresponding to the image ID and show it in the asp text box. Is it possible to do so? Note that I am working in Visual Studio 2010. And I don't think it supports JavaScript codes.
How to retrieve an image from database and show it in asp image box with a click of a button?
Ok, first up, even VS2010 used lots of JavaScript; in fact it came with a bootstrap main menu and had the JavaScript "helper" library jQuery installed. And in fact, YOUR above code even uses JavaScript here: Response.Write("<script>alert('Saved Succefully')</script>"); So, to be clear: you have full use of JavaScript, there are ZERO issues in regards to enjoying the use of JavaScript, and nothing stops you from using JavaScript. Now, having stated the above, there is no need to use JavaScript, and you don't have to write any JavaScript to solve your question. But, to be crystal clear, you have full use of JavaScript in your markup, and even projects created in VS2010 have FULL support for using JavaScript. Ok, so, let's assume we have that saved image, which as you note is nice and small, and you want to show the preview. About the ONLY real issue? How many images do you need to display on a page. Why do I ask "how many" images? Answer: because I'm going to post a VERY easy bit of code to display that image from the database, but this approach NEEDS MUCH caution, since we're going to stream the image to the page as raw image bytes. That means the "image" will travel back to the server when you click on a button, and anything on that page that triggers a post-back (standard round trip) of the browser will "increase" the size and payload of this web page. So, for a smaller image or thumbnail type of display, and only a few on the page, this approach is acceptable and VERY easy to code. However, if there are to be many images, or the image is large, then I do NOT recommend this "easy" and "simple" code approach. As noted, since we only are to display one image, and since it is small, we don't care. However, if your goal was to display MANY images on a page, then I do not recommend this approach for more than a few images displayed at the same time on a single page. Ok, I don't have your data, but this code shows how you can do this: I have a SQL Server table, and one of the columns is the raw byte data of the saved image. The OTHER issue is you NEED to know what file extension your picture had in the first place. For this example, it looks like .png, but if you are to allow different kinds of images, then you MUST save the file extension, or better yet save the so-called MIME type of the image. This information allows us to tell the browser what kind of image we want to display. So, let's say we fill a dropdown list with the rows of the database (without the column that holds the data). So, we have this markup: <h3>Select Fighter</h3> <asp:DropDownList ID="cboFighters" runat="server" DataValueField="ID" DataTextField="Fighter" AutoPostBack="true" OnSelectedIndexChanged="cboFighters_SelectedIndexChanged" Width="250px"> </asp:DropDownList> <br /> <div class="mybox" style="float: left"> <div style="text-align: center; padding: 2px 10px 12px 10px"> <h3 id="Fighter" runat="server"></h3> <asp:Image ID="Image2" runat="server" Width="180" Height="120" /> <h4>Engine</h4> <asp:Label ID="EngineLabel" runat="server" Text="" /> <h4>Description</h4> <asp:Label ID="DescLabel" runat="server" Width="400px" Text="" Style="text-align: left" Font-Size="Large" /> </div> </div> Note closely in the above that there is an image control.
So, now our code behind is this: protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { cboFighters.DataSource = General.MyRst("SELECT ID, Fighter FROM Fighters"); cboFighters.DataBind(); cboFighters.Items.Insert(0, new ListItem("Select Fighter", "0")); } } protected void cboFighters_SelectedIndexChanged(object sender, EventArgs e) { SqlCommand cmdSQL = new SqlCommand("SELECT * FROM Fighters WHERE ID = @ID"); cmdSQL.Parameters.Add("@ID", SqlDbType.Int).Value = cboFighters.SelectedItem.Value; DataRow OneFighter = General.MyRstP(cmdSQL).Rows[0]; Fighter.InnerText = OneFighter["Fighter"].ToString(); EngineLabel.Text = OneFighter["Engine"].ToString(); DescLabel.Text = OneFighter["Description"].ToString(); string sMineType = OneFighter["MineType"].ToString(); // sMineType = "image/png" // hard-coded value if all are .png byte[] MyBytePic = (byte[])OneFighter["MyImage"]; Image2.ImageUrl = $@"data:{sMineType};base64,{Convert.ToBase64String(MyBytePic)}"; } And the result is this: Note the use of MIME mapping. Quite sure that needs .NET 4.5 or later. However, if all of your raw byte images saved in the database are .png, then you can hard-code the value as the commented-out line above shows. Note that the same trick works if we fill out a GridView. However, as I stated, use "some" caution with this approach: since the image does not have any "real" URL path name to resolve, the browser will wind up sending the picture back to the server with each button click (and post-back). And since there is no "real" URL, the browser cannot cache such pictures. As noted, you can also consider creating an HTTP handler, and they play much nicer than sending the picture as a base64 string. So, if you're concerned about keeping the page size small, or want to leverage browser caching, then consider a custom picture handler. However, since in your case we have a small picture of 192x192, the above approach is fine and easy to code.
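Since the question also asks to fetch the image back by the ImageID the user types in, here is a rough sketch of the custom picture-handler approach mentioned above. The handler name ImageHandler.ashx is hypothetical, the connection string and the image_tbl/ImageID/image_data names are taken from the question, and all images are assumed to be PNG:

```csharp
// ImageHandler.ashx.cs - generic handler sketch (add a new "Generic Handler" item to the project)
using System.Data.SqlClient;
using System.Web;

public class ImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string imageId = context.Request.QueryString["id"];

        using (SqlConnection con = new SqlConnection("Data Source=127.0.0.1;Initial Catalog=Karthik;User ID=admin;Password=admin"))
        using (SqlCommand cmd = new SqlCommand("SELECT image_data FROM image_tbl WHERE ImageID = @ImageID", con))
        {
            cmd.Parameters.AddWithValue("@ImageID", imageId);
            con.Open();
            byte[] data = (byte[])cmd.ExecuteScalar();   // the raw PNG bytes saved earlier

            context.Response.ContentType = "image/png";
            context.Response.BinaryWrite(data);
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
```

Then, in the button click that should display the picture, point the image control at the handler, e.g. ImgPhoto.ImageUrl = "~/ImageHandler.ashx?id=" + ImageID.Text.Trim(); — this way the browser requests (and can cache) the image by URL instead of carrying a base64 payload back and forth in the page.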
76394222
76394267
I am new to Java. How do I compile packages in IntelliJ? This is my directory structure. It runs perfectly when I run Main.java inside IntelliJ. To compile manually, I first ran javac people/People.java. Then I compiled with javac Main.java, and it returns: Main.java:8: error: package com.amigoscode.people does not exist Main.java package com.amigoscode; // Press Shift twice to open the Search Everywhere dialog and type `show whitespaces`, // then press Enter. You can now see whitespace characters in your code. import java.util.Scanner; import com.amigoscode.people.People; public class Main { public static void main(String[] args) { People p = new People(); System.out.println(p.fullName()); } } People.java package com.amigoscode.people; public class People { public String fname = "John"; public String lname = "Doe"; public int age = 24; public String fullName() { String fullName = fname + " " + lname; return fullName; } } Thank you very much.
How to compile Packages in java
First, navigate to the src directory using terminal. Then execute the command below to compile the People.java. javac com/amigoscode/people/People.java Now, to compile the Main.java execute the command below. javac com/amigoscode/Main.java This should work fine.
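Once both files compile without errors, you can run the program from that same src directory; note that the class name passed to java must be fully qualified:

```sh
cd src
javac com/amigoscode/people/People.java com/amigoscode/Main.java
java com.amigoscode.Main
```

If you prefer to keep the .class files out of the source tree, compile with javac -d out com/amigoscode/people/People.java com/amigoscode/Main.java and run with java -cp out com.amigoscode.Main.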
76389911
76391974
I'm using CMake in, so far as I can tell, the same way as I always use it. The problem is that now include_directories($(CMAKE_SOURCE_DIR)/server/include) results in my header files not being found. When instead I use include_directories($(CMAKE_CURRENT_SOURCE_DIR)/server/include) they are found. i.e. $(CMAKE_CURRENT_SOURCE_DIR) works, and $(CMAKE_SOURCE_DIR) doesn't. The bizarre thing is that both of: message(${CMAKE_SOURCE_DIR}/server/include) message(${CMAKE_CURRENT_SOURCE_DIR}/server/include) give: /home/username/projects/repo/server/include So why does one work while the other doesn't?
CMake CMAKE_SOURCE_DIR and CMAKE_CURRENT_SOURCE_DIR to set same path, but only latter 'works'
$(FOO) is not the correct syntax to refer to the CMake variable FOO. You need to use ${FOO} instead. Neither of the uses of include_directories should work, unless the build system interprets the path $(CMAKE_CURRENT_SOURCE_DIR)/server/include. Note that you're using the proper syntax for the message commands. The message command message($(CMAKE_CURRENT_SOURCE_DIR)/server/include) prints $(CMAKE_CURRENT_SOURCE_DIR)/server/include demonstrating the issue. I recommend using CMAKE_CURRENT_SOURCE_DIR instead of CMAKE_SOURCE_DIR, by the way, in all cases except for rare exceptions where you know that the top-level CMakeLists.txt is located in a directory with specific contents. The former version is easier to recombine when used as part of a larger project via add_subdirectory. include_directories(${CMAKE_CURRENT_SOURCE_DIR}/server/include) # ^ ^ Note that target_include_directories is usually preferable to include_directories, since the former restricts the include directories to the one target specifically mentioned. It would also allow you to propagate the include directories to targets that link a CMake library target, if desired.
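For reference, the target-scoped form would look roughly like this (the executable name server_app is a placeholder; use your actual target):

```cmake
add_executable(server_app src/main.cpp)

target_include_directories(server_app
    PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/server/include)
```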
76392241
76392975
I'm a beginner programmer Rails. I'll start with a problem: I'm using Devise to work with users, and I tried to enable mail confirmation. It doesn't work, unfortunately. If possible, please help! My error: Net::SMTPAuthenticationError in Devise::ConfirmationsController#create 535-5.7.8 Username and Password not accepted. Learn more at My config/environments/develorment.rb: config.action_mailer.perform_deliveries = true config.action_mailer.raise_delivery_errors = true config.action_mailer.perform_caching = false config.action_mailer.default_url_options = { host: 'localhost', port: 3000 } config.action_mailer.delivery_method = :smtp config.action_mailer.smtp_settings = { address: "smtp.gmail.com", port: 587, authentication: "plain", enable_starttls_auto: true, user_name: "[email protected]", password: "xyz", domain: "gmail.com", openssl_verify_mode: "none", } My config/initializers/devise.rb config.mailer_sender = "[email protected]" config.mailer = 'Devise::Mailer' My server starts with configurations: => Booting Puma => Rails 7.0.5 application starting in development => Run `bin/rails server --help` for more startup options Puma starting in single mode... * Puma version: 5.6.5 (ruby 3.2.0-p0) ("Birdie's Version") * Min threads: 5 * Max threads: 5 * Environment: development * PID: 6074 * Listening on http://127.0.0.1:3000 * Listening on http://[::1]:3000 Use Ctrl-C to stop And Errors in files (or what is it called): net-smtp (0.3.3) lib/net/smtp.rb:1088:in `check_auth_response' net-smtp (0.3.3) lib/net/smtp.rb:845:in `auth_plain' net-smtp (0.3.3) lib/net/smtp.rb:837:in `public_send' net-smtp (0.3.3) lib/net/smtp.rb:837:in `authenticate' net-smtp (0.3.3) lib/net/smtp.rb:670:in `do_start' net-smtp (0.3.3) lib/net/smtp.rb:611:in `start' mail (2.8.1) lib/mail/network/delivery_methods/smtp.rb:109:in `start_smtp_session' mail (2.8.1) lib/mail/network/delivery_methods/smtp.rb:100:in `deliver!' 
mail (2.8.1) lib/mail/message.rb:2145:in `do_delivery' mail (2.8.1) lib/mail/message.rb:253:in `block in deliver' actionmailer (7.0.5) lib/action_mailer/base.rb:588:in `block in deliver_mail' activesupport (7.0.5) lib/active_support/notifications.rb:206:in `block in instrument' activesupport (7.0.5) lib/active_support/notifications/instrumenter.rb:24:in `instrument' activesupport (7.0.5) lib/active_support/notifications.rb:206:in `instrument' actionmailer (7.0.5) lib/action_mailer/base.rb:586:in `deliver_mail' mail (2.8.1) lib/mail/message.rb:253:in `deliver' actionmailer (7.0.5) lib/action_mailer/message_delivery.rb:119:in `block in deliver_now' actionmailer (7.0.5) lib/action_mailer/rescuable.rb:17:in `handle_exceptions' actionmailer (7.0.5) lib/action_mailer/message_delivery.rb:118:in `deliver_now' devise (4.9.2) lib/devise/models/authenticatable.rb:204:in `send_devise_notification' devise (4.9.2) lib/devise/models/confirmable.rb:121:in `send_confirmation_instructions' devise (4.9.2) lib/devise/models/confirmable.rb:136:in `block in resend_confirmation_instructions' devise (4.9.2) lib/devise/models/confirmable.rb:239:in `pending_any_confirmation' devise (4.9.2) lib/devise/models/confirmable.rb:135:in `resend_confirmation_instructions' devise (4.9.2) lib/devise/models/confirmable.rb:321:in `send_confirmation_instructions' devise (4.9.2) app/controllers/devise/confirmations_controller.rb:11:in `create' actionpack (7.0.5) lib/action_controller/metal/basic_implicit_render.rb:6:in `send_action' actionpack (7.0.5) lib/abstract_controller/base.rb:215:in `process_action' actionpack (7.0.5) lib/action_controller/metal/rendering.rb:165:in `process_action' actionpack (7.0.5) lib/abstract_controller/callbacks.rb:234:in `block in process_action' activesupport (7.0.5) lib/active_support/callbacks.rb:118:in `block in run_callbacks' actiontext (7.0.5) lib/action_text/rendering.rb:20:in `with_renderer' actiontext (7.0.5) lib/action_text/engine.rb:69:in `block (4 levels) in <class:Engine>' activesupport (7.0.5) lib/active_support/callbacks.rb:127:in `instance_exec' activesupport (7.0.5) lib/active_support/callbacks.rb:127:in `block in run_callbacks' activesupport (7.0.5) lib/active_support/callbacks.rb:138:in `run_callbacks' actionpack (7.0.5) lib/abstract_controller/callbacks.rb:233:in `process_action' actionpack (7.0.5) lib/action_controller/metal/rescue.rb:22:in `process_action' actionpack (7.0.5) lib/action_controller/metal/instrumentation.rb:67:in `block in process_action' activesupport (7.0.5) lib/active_support/notifications.rb:206:in `block in instrument' activesupport (7.0.5) lib/active_support/notifications/instrumenter.rb:24:in `instrument' activesupport (7.0.5) lib/active_support/notifications.rb:206:in `instrument' actionpack (7.0.5) lib/action_controller/metal/instrumentation.rb:66:in `process_action' actionpack (7.0.5) lib/action_controller/metal/params_wrapper.rb:259:in `process_action' activerecord (7.0.5) lib/active_record/railties/controller_runtime.rb:27:in `process_action' actionpack (7.0.5) lib/abstract_controller/base.rb:151:in `process' actionview (7.0.5) lib/action_view/rendering.rb:39:in `process' actionpack (7.0.5) lib/action_controller/metal.rb:188:in `dispatch' actionpack (7.0.5) lib/action_controller/metal.rb:251:in `dispatch' actionpack (7.0.5) lib/action_dispatch/routing/route_set.rb:49:in `dispatch' actionpack (7.0.5) lib/action_dispatch/routing/route_set.rb:32:in `serve' actionpack (7.0.5) lib/action_dispatch/routing/mapper.rb:18:in `block in <class:Constraints>' 
actionpack (7.0.5) lib/action_dispatch/routing/mapper.rb:48:in `serve' actionpack (7.0.5) lib/action_dispatch/journey/router.rb:50:in `block in serve' actionpack (7.0.5) lib/action_dispatch/journey/router.rb:32:in `each' actionpack (7.0.5) lib/action_dispatch/journey/router.rb:32:in `serve' actionpack (7.0.5) lib/action_dispatch/routing/route_set.rb:852:in `call' warden (1.2.9) lib/warden/manager.rb:36:in `block in call' warden (1.2.9) lib/warden/manager.rb:34:in `catch' warden (1.2.9) lib/warden/manager.rb:34:in `call' rack (2.2.7) lib/rack/tempfile_reaper.rb:15:in `call' rack (2.2.7) lib/rack/etag.rb:27:in `call' rack (2.2.7) lib/rack/conditional_get.rb:40:in `call' rack (2.2.7) lib/rack/head.rb:12:in `call' actionpack (7.0.5) lib/action_dispatch/http/permissions_policy.rb:38:in `call' actionpack (7.0.5) lib/action_dispatch/http/content_security_policy.rb:36:in `call' rack (2.2.7) lib/rack/session/abstract/id.rb:266:in `context' rack (2.2.7) lib/rack/session/abstract/id.rb:260:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/cookies.rb:704:in `call' activerecord (7.0.5) lib/active_record/migration.rb:603:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/callbacks.rb:27:in `block in call' activesupport (7.0.5) lib/active_support/callbacks.rb:99:in `run_callbacks' actionpack (7.0.5) lib/action_dispatch/middleware/callbacks.rb:26:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/executor.rb:14:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/actionable_exceptions.rb:17:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/debug_exceptions.rb:28:in `call' web-console (4.2.0) lib/web_console/middleware.rb:132:in `call_app' web-console (4.2.0) lib/web_console/middleware.rb:28:in `block in call' web-console (4.2.0) lib/web_console/middleware.rb:17:in `catch' web-console (4.2.0) lib/web_console/middleware.rb:17:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/show_exceptions.rb:26:in `call' railties (7.0.5) lib/rails/rack/logger.rb:40:in `call_app' railties (7.0.5) lib/rails/rack/logger.rb:25:in `block in call' activesupport (7.0.5) lib/active_support/tagged_logging.rb:99:in `block in tagged' activesupport (7.0.5) lib/active_support/tagged_logging.rb:37:in `tagged' activesupport (7.0.5) lib/active_support/tagged_logging.rb:99:in `tagged' railties (7.0.5) lib/rails/rack/logger.rb:25:in `call' sprockets-rails (3.4.2) lib/sprockets/rails/quiet_assets.rb:13:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/remote_ip.rb:93:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/request_id.rb:26:in `call' rack (2.2.7) lib/rack/method_override.rb:24:in `call' rack (2.2.7) lib/rack/runtime.rb:22:in `call' activesupport (7.0.5) lib/active_support/cache/strategy/local_cache_middleware.rb:29:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/server_timing.rb:61:in `block in call' actionpack (7.0.5) lib/action_dispatch/middleware/server_timing.rb:26:in `collect_events' actionpack (7.0.5) lib/action_dispatch/middleware/server_timing.rb:60:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/executor.rb:14:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/static.rb:23:in `call' rack (2.2.7) lib/rack/sendfile.rb:110:in `call' actionpack (7.0.5) lib/action_dispatch/middleware/host_authorization.rb:137:in `call' railties (7.0.5) lib/rails/engine.rb:530:in `call' puma (5.6.5) lib/puma/configuration.rb:252:in `call' puma (5.6.5) lib/puma/request.rb:77:in `block in handle_request' puma (5.6.5) 
lib/puma/thread_pool.rb:340:in `with_force_shutdown' puma (5.6.5) lib/puma/request.rb:76:in `handle_request' puma (5.6.5) lib/puma/server.rb:443:in `process_client' puma (5.6.5) lib/puma/thread_pool.rb:147:in `block in spawn_thread' The message is shown in the terminal. And yes, I know it's important, e-mail exists, and the data is taken from it. I tried many options, including the part of the code that I ended up with. Nothing helps.
How to fix '535-5.7.8 Username and Password not accepted. Learn more at' error in Devise mail confirmation?
This is related to ActionMailer configuration rather than Devise. From the documentation, authentication should be set to one of the following: :plain, :login or :cram_md5. The following excerpt is taken from the relevant section of the documentation linked above and explains the available options and what they do. Which one you use depends on your mail server; however, you can see that SSL is not amongst the available options. :authentication - If your mail server requires authentication, you need to specify the authentication type here. This is a symbol and one of :plain (will send the password in the clear), :login (will send password Base64 encoded) or :cram_md5 (combines a Challenge/Response mechanism to exchange information and a cryptographic Message Digest 5 algorithm to hash important information) As requested, a typical Gmail configuration would be to use SMTP like so: config.action_mailer.delivery_method = :smtp config.action_mailer.smtp_settings = { address: 'smtp.gmail.com', port: 587, domain: 'example.com', user_name: '<username>', password: '<password>', authentication: 'plain', enable_starttls_auto: true, open_timeout: 5, read_timeout: 5 } Taken from the Rails Action Mailer documentation, section 5.2. Make sure to replace the <username> and <password> placeholders with your credentials, and NEVER post your credentials anywhere public. I suggest you change the password on your Google account immediately; I have edited your question to hide your password, but it is still available to view by those with the correct privileges.
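Two follow-up points worth checking once the authentication symbol is sorted: keep the credentials out of the source (environment variables or Rails credentials), and note that Gmail nowadays generally rejects the normal account password over SMTP — with 2-step verification enabled you need an app password, and using the account password is itself a common cause of this exact 535 error. A sketch of the settings with the secrets externalised (the environment variable names are just examples):

```ruby
config.action_mailer.smtp_settings = {
  address: "smtp.gmail.com",
  port: 587,
  domain: "gmail.com",
  user_name: ENV["GMAIL_USERNAME"],
  password: ENV["GMAIL_APP_PASSWORD"], # a Gmail app password, not the account password
  authentication: "plain",
  enable_starttls_auto: true
}
```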
76389742
76392034
I have two SSH keys for different GitHub accounts, and this is my ~/.gitconfig: [user] name = example email = [email protected] [pull] rebase = true [rebase] autoStash = true [filter "lfs"] clean = git-lfs clean -- %f smudge = git-lfs smudge -- %f process = git-lfs filter-process required = true [includeIf "gitdir/i:/Users/example/Documents/github_2/"] [core] sshCommand = "ssh -i ~/.ssh/github2_key" Then I go to a folder that is not under /Users/example/Documents/github_2/ and run git clone github1_private_project; git tells me Please make sure you have the correct access rights Cloning a project from the 'github2' account does work inside /Users/example/Documents/github_2/. git version: % git -v git version 2.39.2 (Apple Git-143)
Why does .gitconfig [includeIf] override the default config?
The syntax for includeIf should be (see the docs): [includeIf "gitdir/i:/Users/example/Documents/github_2/"] path = </path/to/includeFile> The syntax [includeIf "gitdir/i:/Users/example/Documents/github_2/"] [core] sshCommand = "ssh -i ~/.ssh/github2_key" actually means [includeIf "gitdir/i:/Users/example/Documents/github_2/"] # No `path` hence no include [core] sshCommand = "ssh -i ~/.ssh/github2_key" where the key core.sshCommand is always (unconditionally) defined.
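Putting the two pieces together, the conditional include normally points at a second config file that carries the per-account settings. A sketch following the paths from the question:

```gitconfig
# ~/.gitconfig
[includeIf "gitdir/i:/Users/example/Documents/github_2/"]
    path = ~/.gitconfig-github2
```

```gitconfig
# ~/.gitconfig-github2 (only applied inside /Users/example/Documents/github_2/)
[core]
    sshCommand = "ssh -i ~/.ssh/github2_key"
```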
76385174
76392131
I have a tricky TypeScript question. Let say I have this Icon component with the prop size. Size can be "2", "4", "6". I map these values to predefined tailwind classes. So I type it like type SizeValues = '2' | '4' | '6'; function Icon({size = '4'}: {size: SizeValues}) { const sizeMap = { '2': 'w-2 h-2', '4': 'w-4 h-4', '6': 'w-6 h-6', }; return <span className={sizeMap[size]}>My icon goes here</span> } <Icon size="sm" /> Everything is fine. But what if I wanna have different sizes depending on what screen size I have. So I wanna try to have like tailwinds nice syntax. So I rewrite my Icon component to following: type SizeValues = ??? function Icon({size = '4'}: {size: SizeValues}) { const sizeMap = { '2': 'w-2 h-2', '4': 'w-4 h-4', '6': 'w-6 h-6', 'md:2': 'md:w-2 md:h-2', 'md:4': 'md:w-4 md:h-4', 'md:6': 'md:w-6 md:h-6', 'lg:2': 'lg:w-2 lg:h-2', 'lg:4': 'lg:w-4 lg:h-4', 'lg:6': 'lg:w-6 lg:h-6', }; return <span className={size.split(' ').map(s => sizeMap[s]).join(' ').trim()}>My icon goes here</span> } <Icon size="2 md:4 lg:6" /> That works fine, but how do I type it? I read TypeScript will support regex in the future. That will make it easier, but is it possible to type this now? This is not a real component so please don't give me awesome suggestions how I can improve it. I just wanna know how I can type my size prop so it works the way I've coded it.
How can I define a type in TypeScript that's a string that only should contain words from a predefined list
First, we need to extract sizeMap into the global scope, and const assert it to let the compiler know that this is immutable constant and restrict it from widening types: const sizeMap = { '2': 'w-2 h-2', '4': 'w-4 h-4', '6': 'w-6 h-6', 'md:2': 'md:w-2 md:h-2', 'md:4': 'md:w-4 md:h-4', 'md:6': 'md:w-6 md:h-6', 'lg:2': 'lg:w-2 lg:h-2', 'lg:4': 'lg:w-4 lg:h-4', 'lg:6': 'lg:w-6 lg:h-6', } as const; Next, we need to get a type for the keys of the sizeMap: type SizeMap = typeof sizeMap; type SizeMapKeys = keyof SizeMap; Implementation: We will create a type that accepts a string and returns it if it is valid; otherwise, return never. Pseudo-code: Let type accept T - string to validate, Original - original string, AlreadyUsed - union of already used keys. If T is an empty string return Original Else if T starts with keys of the size map (ClassName), excluding AlreadyUsed, followed by a space and the remaining string(Rest). Recursively call this type, passing Rest as a string to validate Original, and the AlreadyUsed with ClassName added to it. Else if T is the key of the size map excluding AlreadyUsed return Original else return never Realization: type _SizeValue< T extends string, Original extends string = T, AlreadyUsed extends string = never > = T extends "" ? Original : T extends `${infer ClassName extends Exclude< SizeMapKeys, AlreadyUsed >} ${infer Rest extends string}` ? _SizeValue<Rest, Original, AlreadyUsed | ClassName> : T extends Exclude<SizeMapKeys, AlreadyUsed> ? Original : never; We have to add a generic parameter to Item that will represent the size. function Icon<T extends string | undefined>({ size, }: { size: _SizeValue<T>; }) { return null; } Since, size is optional in the component, we will add a wrapper around the SizeValue which will turn string | undefined to string and pass it to _SizeValue, additionally we will add a default value for size: type SizeValue<T extends string | undefined> = _SizeValue<NonNullable<T>>; function Icon<T extends string | undefined>({ size = "2", }: { size?: SizeValue<T> | "2"; }) { return null; } Usage: <Icon size="2" />; <Icon size="md:2" />; <Icon size="md:2 md:6" />; <Icon size="md:2 md:6 lg:6" />; // expected error <Icon size="md:2 md:6 lg:5" />; // no duplicates allowed <Icon size="2 2" />; playground
76394256
76394286
I've never actually ran into this problem before, at least not that I'm aware of... But I'm working on some SIMD vector optimizations in some of my code and I'm having some alignment issues. Here's some minimal code that I've been able to reproduce the problem with, on MSVC (Visual Studio 2022): #include <stdio.h> #include <stdint.h> #include <stdbool.h> #include <stdlib.h> #include <string.h> #include <xmmintrin.h> _declspec(align(16)) typedef union { struct { float x, y, z; }; #if 0 // This works: float v[4]; #else // This does not: __m128 v; #endif } vec; typedef struct { vec pos; vec vel; float radius; } particle; int main(int argc, char **argv) { particle *particles=malloc(sizeof(particle)*10); if(particles==NULL) return -1; // intentionally misalign the pointer ((uint8_t *)particles)+=3; printf("misalignment: %lld\n", (uintptr_t)particles%16); particles[0].pos=(vec){ 1.0f, 2.0f, 3.0f }; particles[0].vel=(vec){ 4.0f, 5.0f, 6.0f }; printf("pos: %f %f %f\nvel: %f %f %f\n", particles[0].pos.x, particles[0].pos.y, particles[0].pos.z, particles[0].vel.x, particles[0].vel.y, particles[0].vel.z); return 0; } I don't understand why a union with float x/y/z and float[4] works with misaligned memory addresses, but a union with the float x/y/z and an __m128 generates an access violation. I get that the __m128 type has some extra alignment specs on it, but the overall union size doesn't change and it's also 16 byte aligned anyway, so why does it matter? I do understand the importance of memory alignment, but the extra weird part is that I added in an aligned_malloc to my code that's allocating the offending misaligned memory (I use a slab/zone memory allocator in my code) and it still continued to crash out with an access violation, which further adds to my hair loss.
Why does __m128 cause alignment issues in a union with float x/y/z?
alignof(your_union) is 16 when it includes a __m128 member, so compilers will use movaps or movdqa because you've promised them that the data is aligned. Otherwise alignof(your_union) is only 4 (inherited from float, so they'll use movups or movdqu which has no alignment requirement. It's still alignment undefined behaviour, as gcc -fsanitize=undefined will tell you, since you're using an address that's not even aligned by 4. https://godbolt.org/z/6GxebxT7r shows MSVC is using movdqa stores for your code, like movdqa [rbx+19], xmm2 where RBX holds a malloc return value. This is guaranteed to fault because malloc return values are aligned by alignof(max_align_t), which is definitely an even number and usually 16 in x86-64. Often MSVC will only use unaligned movdqu / movups loads/stores even when you use _mm_store_ps. (But alignment-required intrinsics will let it fold the load into a memory source operand for non-AVX instructions like addps xmm0, [rcx]). But apparently MSVC treats aggregates differently from deref of a __m128*. So your type has alignof(T) == 16, and thus your code has alignment UB, so it can and does compile to asm that faults. BTW, I wouldn't recommend using this union; especially not for function args / return values since being part of an aggregate can make the calling conventions treat it less efficiently. (On MSVC you have to use vectorcall to get it passed in a register if it doesn't inline, but x86-64 System V passes vector args in vector regs normally, if they aren't part of a union.) Use __m128 vectors and write helper functions to get your data in/out as scalar. Ideally don't use 1 SIMD vector to hold 1 geometry vector, that's kind of an anti-pattern since it leads to a lot of shuffling. Better to have arrays of x, arrays of y, and arrays of z, so you can load 3 vectors of data and process 4 vectors in parallel with no shuffling. (Struct-of-Arrays rather than Array-of-Structs). See https://stackoverflow.com/tags/sse/info especially https://deplinenoise.wordpress.com/2015/03/06/slides-simd-at-insomniac-games-gdc-2015/ Or if you really want to do it this way, you could still improve this. Your struct particle is 36 bytes as you've defined it, with two wasted 32-bit float slots. It could have been 32 bytes: xyz, radius, xyz, zeroed padding, so you could have alignof(particle) == 16 without increasing the size to 48 bytes, to be able to load it efficiently (never spanning cache-line boundaries). The radius would get loaded as high garbage along _mm_load_ps(&particle->pos_x) which gets the x,y,z positions and whatever comes next. You might sometimes have to use an extra instruction to zero out the high element, but probably most of the time you could be shuffling in ways that don't care about it. Actually your struct particle is 48 bytes when you have a __m128 member, since it inherits the alignof(T) from its vec pos and vec vel members, and sizeof(T) has to be a multiple of alignof(T) (so arrays work).
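To make the suggested 32-byte layout concrete, here is a small sketch (MSVC-flavoured: _aligned_malloc comes from <malloc.h>; the field names are illustrative, not taken from your code):

```c
#include <malloc.h>      /* _aligned_malloc / _aligned_free (MSVC) */
#include <stddef.h>
#include <xmmintrin.h>

typedef struct {
    float px, py, pz, radius;  /* position xyz + radius fill one 16-byte lane       */
    float vx, vy, vz, pad;     /* velocity xyz + zeroed padding fill the other one  */
} particle;                    /* 32 bytes, so every array element stays 16-aligned */

static particle *alloc_particles(size_t n)
{
    /* aligned allocation instead of plain malloc; release with _aligned_free */
    return (particle *)_aligned_malloc(sizeof(particle) * n, 16);
}

static __m128 load_pos(const particle *p)
{
    /* movaps-able load: picks up px, py, pz, with radius riding along in lane 3 */
    return _mm_load_ps(&p->px);
}
```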
76390159
76392137
I am calling nsExec::ExecToStack to get the output of a powershell command. Based on the result, I will either do nothing, or install a windows feature. I seem to get the result I expect from the powershell command, but the if/then logic is not doing what I expect. Here's the code: DetailPrint "======================" DetailPrint "Checking MSMQ Services" DetailPrint "======================" nsExec::ExecToStack 'powershell.exe -command "(get-windowsfeature -name MSMQ-Services).InstallState"' Pop $0 Pop $InstallState DetailPrint "MSMQ is: $InstallState" ${If} $Installstate == "Installed" DetailPrint "MSMQ is Installed" ${Else} DetailPrint "========================" DetailPrint "Installing MSMQ Services" DetailPrint "========================" nsexec::exectolog 'powershell -command "install-windowsfeature -name msmq-services"' ${EndIf} I set ${DisableX64FSRedirection} in the same section, above this code, to ensure powershell is the 64-bit version and can run the command properly. Here is what I am seeing in the Log: ====================== Checking MSMQ Services ====================== MSMQ is: Installed ======================== Installing MSMQ Services ======================== Success Restart Needed Exit Code Feature Result ------- -------------- --------- -------------- True Yes NoChangeNeeded {} I have been staring at this code, tweaking, trying different logic... Here is what the powershell command returns: PS C:\Windows\system32> $tom=(get-windowsfeature -name MSMQ-Services).InstallState PS C:\Windows\system32> echo $tom Installed First thought was that there were leading or trailing spaces, so I did the comparison with "X$InstallStateX". I tried adding the /OEM flag to ExectoStack. I tried variations of if/then logic: ${If} $Installstate != "Installed" <install command> ${EndIf} Honestly, I'm not sure if this is a stupid logic flaw (on my part) or some funky output from exectostack that I'm not expecting.
Am I using NSIS nsexec::ExecToStack Output correctly?
It was just a CR/LF at the end of the PowerShell output. After changing my statement to: ${If} $InstallState == "Installed$\r$\n" it worked like it should. It's always something simple in the end.
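An alternative that keeps the compare string readable is to strip the trailing characters before comparing. This sketch assumes the PowerShell output always ends in exactly one CR/LF pair:

```nsis
nsExec::ExecToStack 'powershell.exe -command "(get-windowsfeature -name MSMQ-Services).InstallState"'
Pop $0
Pop $InstallState
StrCpy $InstallState $InstallState -2   ; drop the trailing $\r$\n
${If} $InstallState == "Installed"
    DetailPrint "MSMQ is Installed"
${EndIf}
```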
76389426
76392179
How to let know developers automatically that this "bits/shared_ptr.h" is internal to standard library (gcc and clang). #include <bits/shared_ptr.h> // some code using std::shared_ptr The best would be to also inform <memory> should be used instead. This <bits/shared_ptr.h> is just an example - I mean - how to warn about any implementation header being included. By "automatically" I mean - compiler warning or static analysis like clang-tidy. I have tried "-Wall -Wextra -pedantic" and all clang-tidy checks, without llvm* checks - these llvm* warns for almost every header and it is just for llvm developers, not for us, regular developers. Any advice? I prefer existing solution, I know I can write script for that. Ok, I found one check in clang-tidy that I can use. It is portability-restrict-system-includes Just need to specify in config that "bits" things are not allowed: -config="CheckOptions: {portability-restrict-system-includes.Includes: '*,-bits/*,bitset'}" See demo. But, well, it is not perfect solution - one would need to maintain list of "not allowed" headers.
Is there a way to warn C++ developers when they accidentally include internal implementation headers of std library?
It seems like include-what-you-use does what you want. It has a mapping of what names are supposed to come from what header, and it seems to know which headers are internal. For example, when including <bits/shared_ptr.h> https://godbolt.org/z/cvq7354K6: #include <bits/shared_ptr.h> std::shared_ptr<int> x; It says to remove <bits/shared_ptr.h> and add <memory>.
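If you want to try it, a minimal invocation might look like this (assuming include-what-you-use is installed; real projects usually go through the bundled iwyu_tool.py driver and a compile_commands.json):

```sh
# single translation unit, with whatever flags it normally compiles with
include-what-you-use -std=c++17 main.cpp

# whole project, assuming a compilation database in ./build
iwyu_tool.py -p build
```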
76392017
76393019
I have a PowerBI report with a line chart that shows average costs over a time period. The time period is based on a date that is set to use a date hierarchy for the year and the month. When I open the report the line chart does not display correctly. The lines are incorrectly flat. If I change the date from a date hierarchy to just the date and then back to the hierarchy, then it displays correctly. This also happened when I published the report to the PBI service. What can be done to negate the need to do this? I have tried resetting the field for the X-axis and restarting the application.
How can I fix incorrect flat lines on my PowerBI line chart with a date hierarchy?
Clicking on the double arrow icon to expand the next level by default resolved my issue.
76384859
76392236
I haven't kept up with changes to GCP's load-balancing, and they've introduced a new kind of global L7 load-balancer, and the ones which I am used to are now termed "classic". I am not able to find ways to create these new style of load-balancers using gcloud CLI. Is there a way to do this?
Create Global HTTPS load-balancer with gcloud
Just to mark the question answered, I will post @John_Hanley's reply here: set the flag like this: gcloud compute backend-services create --load-balancing-scheme=EXTERNAL_MANAGED
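For completeness, a working global external Application Load Balancer needs a few more objects than the backend service alone. A rough sketch with placeholder names (it assumes you already have a backend — instance group or NEG — and an SSL certificate; health checks and backends are attached separately):

```sh
gcloud compute backend-services create web-backend \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTP --port-name=http --global

gcloud compute url-maps create web-map \
    --default-service=web-backend

gcloud compute target-https-proxies create web-proxy \
    --url-map=web-map --ssl-certificates=web-cert

gcloud compute forwarding-rules create web-rule \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --target-https-proxy=web-proxy --ports=443 --global
```

The key difference from the classic load balancer is the --load-balancing-scheme=EXTERNAL_MANAGED value on both the backend service and the forwarding rule; the classic flavour uses EXTERNAL.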
76394247
76394291
I have a JSON file in the below format [ { "name": "John", "team": "NW", "available": true }, { "name": "Dani", "team": "NW", "available": true }, { "name": "Lyle", "team": "NW", "available": false }, { "name": "Dean", "team": "W", "available": true }, { "name": "Lyle", "team": "W", "available": true }, { "name": "David", "team": "W", "available": true }, { "name": "George", "team": "SW", "available": false }, { "name": "Luke", "team": "SW", "available": false } ] and the corresponding Java class like below: public class Agent { private String name; private String team; private Boolean available; ... ... } I need to pair members from the list with other people on the list. The rules are that the pair has to be with a member who is available and is from a different team. Need to list the pairs of the members along with any members which could not be paired (odd number of 'pairable' members/not enough members in other teams). I have written the code to read the JSON in a list of 'Agent' objects using Jackson and have tried the following approaches till now Iterating over the list and getting the first 'available' member and fetching another one from different team (used stream API/filtering to do this) Create a map of agents (key: team, value: agent list) and iterate the EntrySet to pair the agents with agents from another team. Naive approach of creating anew list and adding elements from the original list as and when they are mapped) However, I am getting a lot of agents unmapped. I wanted to write the logic in such a way that it matches/pairs maximum numbers of agents and print the remaining agents in the output alongwith the agents that were paired. Not looking for a full blown solution - any pointers will be greatly appreciated!
Create unique pair of elements in a list based on some attribute of the class/object
To confirm: when running, you first check whether the agent is available, then find a matching agent from a different team (from the map of agents); if a match is found, you add the pair of agents to the list of pairs and mark both agents in the EntrySet, otherwise you mark the agent as unmatched. Correct? If so, it sounds like you already have a good solution to the problem, and I would double-check your code to ensure you are following these steps properly. A limitation within the data (such as very unbalanced team sizes) could also cause you to get unbalanced results. If you wanted another approach, though, I would suggest using a 'maximal matching' algorithm, like the Hopcroft-Karp algorithm or Edmonds' blossom algorithm, to find the maximum matching. In your case, the agents would be represented as nodes in the bipartite graph, node set 1 representing available agents and node set 2 representing unavailable agents. Then generate the edges between the two sets, where each edge would represent a potential pairing. An overview of the algorithm can be found here: https://www.geeksforgeeks.org/hopcroft-karp-algorithm-for-maximum-matching-set-1-introduction/ All you would need to do is generate these nodes and edges, as many implementations of the algorithm can be found online :)
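If you do not want to pull in a full matching library, a greedy strategy — repeatedly pairing members from the two teams that currently have the most unpaired available members — also yields the maximum number of cross-team pairs for this kind of constraint. A sketch against the Agent class from the question (the getTeam()/getAvailable() getters are assumed to exist):

```java
import java.util.*;

public class AgentPairer {

    /** Returns the pairs; anything that cannot be paired is added to leftovers. */
    public static List<Agent[]> pair(List<Agent> agents, List<Agent> leftovers) {
        // Bucket the available agents by team; unavailable agents can never be paired.
        Map<String, Deque<Agent>> byTeam = new HashMap<>();
        for (Agent a : agents) {
            if (Boolean.TRUE.equals(a.getAvailable())) {
                byTeam.computeIfAbsent(a.getTeam(), k -> new ArrayDeque<>()).add(a);
            } else {
                leftovers.add(a);
            }
        }

        // Max-heap of team buckets, ordered by how many unpaired members remain.
        PriorityQueue<Deque<Agent>> heap =
                new PriorityQueue<>((x, y) -> Integer.compare(y.size(), x.size()));
        heap.addAll(byTeam.values());

        List<Agent[]> pairs = new ArrayList<>();
        while (heap.size() >= 2) {
            Deque<Agent> biggest = heap.poll();
            Deque<Agent> second = heap.poll();
            pairs.add(new Agent[] { biggest.poll(), second.poll() });
            if (!biggest.isEmpty()) heap.add(biggest); // re-insert so sizes re-order
            if (!second.isEmpty()) heap.add(second);
        }

        // Whatever is still queued belongs to a single team and cannot be paired.
        for (Deque<Agent> rest : heap) leftovers.addAll(rest);
        return pairs;
    }
}
```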
76390788
76393048
I have a tree structure with multiple different projects in Visual Studio Code (VS Code). Each project is in a separate folder within the tree structure. Is there a way to make one of these folders the current workspace folder ad hoc, so that I can easily run the configuration specific to that folder? Is there a way to designate a folder as the active workspace folder temporarily, so that I can execute the configurations specific to that folder without modifying the overall workspace settings? I've come across the concept of multi-root workspaces in VS Code, but I'm not sure if it allows me to achieve what I need.
How can I temporarily set a folder as the workspace in VS Code to run specific configurations?
Is there a way to make one of these folders the current workspace folder ad hoc, so that I can easily run the configuration specific to that folder? As far as I'm aware, the easiest you'll get is using File: Open Folder... (if you've never opened the folder / workspace before), or using File: Open Recent... These are command palette commands, and both have keybindings, which you can find in the command palette. They're also both available under the "File" menu item. I've come across the concept of multi-root workspaces in VS Code, but I'm not sure if it allows me to achieve what I need. Multi-root workspaces allow you to create a flat list of workspace roots that are all open at once. You can set configuration for all workspaces by putting it in the .code-workspace file, or per-workspace-root by putting it in their .vscode/settings.json files.
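For reference, a multi-root .code-workspace file is plain JSON (comments are allowed); a minimal sketch with two project folders and one shared setting — the folder names are placeholders:

```jsonc
{
  "folders": [
    { "path": "project-a" },
    { "path": "project-b" }
  ],
  "settings": {
    // applies to every root unless a folder's own .vscode/settings.json overrides it
    "editor.formatOnSave": true
  }
}
```

Launch configurations defined in each folder's .vscode/launch.json still show up in the Run and Debug view, labelled with the folder they belong to, so you can run a folder-specific configuration without changing the workspace.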
76391976
76393123
I have a dataset that looks like this. To give some context, there can multiple user groups (odd number of people in it). Each of the groups can contain multiple users in it. So, within each and every group, I needed to select pairs of users in such a fashion that, A person must not be repeated in any of the pairs, until the entire user list is exhausted. The partial solution below starts pairing users that do not belong to the same group as well. Not sure how to tackle this grouping constraint. group_id user_id 1 a1 1 b1 1 c1 1 d1 2 x1 import pandas as pd import numpy as np df = [[1, 'a1'], [1, 'b1'], [1, 'c1'], [1, 'd1'], [2, 'x1'], [2, 'y1'], [2, 'z1']] df = pd.DataFrame(df, columns=['group_id', 'user_id']) df.head() I have a partial solution after going through numerous questions and answers. This solution starts pairing users that do not belong to the same group as well. Which is not what I want: from itertools import combinations # Even Number of users Required users = df['user_id'].to_list() n = int(len(users) / 2) stages = [] for i in range(len(users) - 1): t = users[:1] + users[-i:] + users[1:-i] if i else users stages.append(list(zip(t[:n], reversed(t[n:])))) print(stages) Not sure how to store the pairs back into a pandas data frame. Expected Output (which was updated later on): For group 1 and group 2, note that there can n number of groups: group_id combinations 1 a1-d1 1 b1-c1 1 a1-c1 1 d1-b1 1 a1-b1 2 x1-x2 2 x2-x3 2 x1-x3 Error while running @mozway's code: This error happens for all inputs: AssertionError Traceback (most recent call last) C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in _finalize_columns_and_data(content, columns, dtype) 981 try: --> 982 columns = _validate_or_indexify_columns(contents, columns) 983 except AssertionError as err: C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in _validate_or_indexify_columns(content, columns) 1029 # caller's responsibility to check for this... 
-> 1030 raise AssertionError( 1031 f"{len(columns)} columns passed, passed data had " AssertionError: 1 columns passed, passed data had 6 columns The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_14172\369883545.py in <module> 24 return stages 25 ---> 26 out = (df.groupby(['group_id'], as_index=False)['user_id'].apply(combine).explode('user_id')) 27 print(out.head()) C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py in apply(self, func, *args, **kwargs) 1421 with option_context("mode.chained_assignment", None): 1422 try: -> 1423 result = self._python_apply_general(f, self._selected_obj) 1424 except TypeError: 1425 # gh-20949 C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py in _python_apply_general(self, f, data, not_indexed_same) 1467 not_indexed_same = mutated or self.mutated 1468 -> 1469 return self._wrap_applied_output( 1470 data, values, not_indexed_same=not_indexed_same 1471 ) C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py in _wrap_applied_output(self, data, values, not_indexed_same) 1025 return self.obj._constructor_sliced(values, index=key_index) 1026 else: -> 1027 result = self.obj._constructor(values, columns=[self._selection]) 1028 self._insert_inaxis_grouper_inplace(result) 1029 return result C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\frame.py in __init__(self, data, index, columns, dtype, copy) 719 # ndarray], Index, Series], Sequence[Any]]" 720 columns = ensure_index(columns) # type: ignore[arg-type] --> 721 arrays, columns, index = nested_data_to_arrays( 722 # error: Argument 3 to "nested_data_to_arrays" has incompatible 723 # type "Optional[Collection[Any]]"; expected "Optional[Index]" C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in nested_data_to_arrays(data, columns, index, dtype) 517 columns = ensure_index(data[0]._fields) 518 --> 519 arrays, columns = to_arrays(data, columns, dtype=dtype) 520 columns = ensure_index(columns) 521 C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in to_arrays(data, columns, dtype) 881 arr = _list_to_arrays(data) 882 --> 883 content, columns = _finalize_columns_and_data(arr, columns, dtype) 884 return content, columns 885 C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in _finalize_columns_and_data(content, columns, dtype) 983 except AssertionError as err: 984 # GH#26429 do not raise user-facing AssertionError --> 985 raise ValueError(err) from err 986 987 if len(contents) and contents[0].dtype == np.object_: ValueError: 1 columns passed, passed data had 6 columns```
Generate a set of non repeating pairs of users in multiple groups
updated answer Using your code here, but applying it per group: def combine(s): users = s.tolist() n = int(len(users) / 2) stages = [] for i in range(len(users) - 1): t = users[:1] + users[-i:] + users[1:-i] if i else users stages.extend([f'{a}-{b}' for a,b in zip(t[:n], reversed(t[n:]))]) return stages out = (df.groupby('group_id', as_index=False)['user_id'] .apply(combine).explode('user_id') ) Output: group_id user_id 0 1 a1-d1 0 1 b1-c1 0 1 a1-c1 0 1 d1-b1 0 1 a1-b1 0 1 c1-d1 1 2 x1-z1 1 2 x1-y1 original answer before question clarirication (incorrect) You can use: from itertools import combinations out = [c for k, g in df.groupby('group_id')['user_id'] for c in combinations(g, 2)] Output: [('a1', 'b1'), ('a1', 'c1'), ('a1', 'd1'), ('b1', 'c1'), ('b1', 'd1'), ('c1', 'd1'), ('x1', 'y1'), ('x1', 'z1'), ('y1', 'z1')]
76390308
76392296
The DSpace OAI-PMH repository exposes an endpoint /hierarchy for API v6, which provides the logical structure of how communities, sub-communities and collections are related. This is documented at https://wiki.lyrasis.org/pages/viewpage.action?pageId=104566810#RESTAPIv6(deprecated)-Hierarchy As v6 will be deprecated, is there a direct replacement for this endpoint in v7? There's no reference to it in the documentation https://wiki.lyrasis.org/display/DSDOC7x/REST+API nor can I see anything equivalent in the live demo: https://api7.dspace.org/server/#/server/api
Is there an equivalent of the /hierarchy API endpoint for DSpace 7.x?
There is no exact replacement in DSpace 7 REST API. But you can retrieve the same information as follows: Start from the "Top level Communities" search endpoint here: https://github.com/DSpace/RestContract/blob/main/communities.md#search-methods This will retrieve all the communities at the top of that hierarchy Then, for each of those top Communities, it's possible to retrieve their sub-communities and sub-collections via the "linked entities" Sub-Communities: https://github.com/DSpace/RestContract/blob/main/communities.md#subcommunities Sub-Collections: https://github.com/DSpace/RestContract/blob/main/communities.md#collections This is how the hierarchy is achieved in the DSpace 7 User Interface when you visit the "/community-list" page, e.g. https://demo7.dspace.org/community-list
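In practice the calls against a DSpace 7 backend look roughly like this (paths follow the linked REST contract; the UUID is a placeholder and the responses are paginated HAL+JSON):

```sh
# Top-level communities
curl "https://api7.dspace.org/server/api/core/communities/search/top"

# Sub-communities of a given community
curl "https://api7.dspace.org/server/api/core/communities/<community-uuid>/subcommunities"

# Collections of a given community
curl "https://api7.dspace.org/server/api/core/communities/<community-uuid>/collections"
```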
76392027
76393168
So, I might not have the right terminology to describe this problem, which made searching for it tricky, but I hope my example helps clear it up. Context: I'm using query data selectors in react-query to preprocess query results and attach some properties that I need globally in the application. I'm running into an issue with the produced types, which I managed to narrow down and reproduce outside of react-query itself. Here's the reproduction code (also on TS Playground) // Data structure coming from third-party code (I have no control over it) // Simplified for a more concise reproduction: type DataStructure = & Array<{ a: number; b: number }> & Array<{ b: number; c: number }> const structure: DataStructure = [ { a: 1, b: 2, c: 3, }, ] structure[0].a // ok structure[0].b // ok structure[0].c // ok for (const s of structure) { s.a // ok s.b // ok s.c // ok } structure.forEach(s => { s.a // ok s.b // ok s.c // Property 'c' does not exist on type '{ a: number; b: number; }'.(2339) }) // If we had control over DataStructure, we'd write it like this instead: // Array<{ a: number; b: number } & { b: number; c: number }> The DataStructure is an intersection type of two Arrays with partially overlapping item types. As demonstrated by the code and the comments, all 3 properties are available when the array items are accessed either by their index or inside a for loop, but inside a forEach loop (or any array method like some, every, etc,) only properties of the first array type (from the intersection) are available. Trying to access c inside the forEach loop, TypeScript complains: Property 'c' does not exist on type '{ a: number; b: number; }'.(2339) Now, if I had control over the data structure definition, I'd describe it like this: type DataStrcture = Array<{ a: number; b: number } & { b: number; c: number }> That would indeed solve the issue. But I don't have control over that part of the code. What I'm more interested in is understanding WHY TypeScript behaves the way it does here. That's what's baffling me the most, but if someone can offer a clean solution, too, that'd be extra amazing!
TypeScript array intersection type: property does not exist when accessed in Array.forEach, Array.some, etc, but accessible within for loop
This is considered a design limitation of TypeScript. Intersections of array types behave strangely and are not recommended. See microsoft/TypeScript#41874 for an authoritative answer. It says: Array intersection is weird since there are many invariants of arrays and many invariants of intersections that can't be simultaneously met. For example, if it's valid to call a.push(x) when a is A, then it should be valid to write ab.push(x) when ab is A & B, but that creates an unsound read on (A & B)[number]. In higher-order the current behavior is really the best we can do; in zero-order it's really preferable to just write Array<A & B>, Array<A> | Array<B>, or Array<A | B> depending on which you mean to happen. Some of the weirdness is due to the fact that intersections of functions and methods behave like overloads, and arrays are unsafely considered covariant in their element type (see Why are TypeScript arrays covariant? ), which means you suddenly have the situation with push() as described above. Other weirdness happens when you try to iterate through them, as you've shown, and as described in microsoft/TypeScript#39693. So the recommended approach is to avoid intersections of arrays, and instead use arrays of intersections if that's what you want. If you can write that out directly, you should. If you have a type with nested intersected arrays you can look at Why does the merge of 2 types with a shared property name not work when making a type with that property from the merged type? for a possible approach to writing a utility type to deal with those. As I mentioned in the comment, if you can't control the data type, you should show this to whoever does so they can fix it. Otherwise, you'll need to work around it by translating between the external type definition and your fixed version of it.
76390991
76393253
I have a list of items, each item has a date value: [ { "Date Merged": "6/1/2023 3:46:53 PM", "PR ID": "470" }, { "Date Merged": "5/30/2023 2:44:25 PM", "PR ID": "447" } ] I want to get only the PRs of the items with dates that happened in May. I think I can grab the 'PR ID' values with: map(attribute='PR ID'), but I don't know how to filter within a certain date range.
From a list, how to get only the items with dates within a certain time period?
Given the data prs: - Date Merged: 6/1/2023 3:46:53 PM PR ID: '470' - Date Merged: 5/30/2023 2:44:25 PM PR ID: '447' Q: "Get the PRs with dates that happened in May." A: There are more options: Quick & dirty. The test match "succeeds if it finds the pattern at the beginning of the string" result: "{{ prs|selectattr('Date Merged', 'match', '5')| map(attribute='PR ID') }}" gives result: - '447' Use the filter to_datetime to get the date objects and create the list of the hashes format: '%m/%d/%Y %I:%M:%S %p' months: "{{ prs|map(attribute='Date Merged')| map('to_datetime', format)| map(attribute='month')| map('community.general.dict_kv', 'month') }}" gives months: - month: 6 - month: 5 Update the dictionaries prs_months: "{{ prs|zip(months)|map('combine') }}" gives prs_months: - Date Merged: 6/1/2023 3:46:53 PM PR ID: '470' month: 6 - Date Merged: 5/30/2023 2:44:25 PM PR ID: '447' month: 5 Select the items and get the ids result: "{{ prs_months|selectattr('month', 'eq', 5)| map(attribute='PR ID') }}" gives the same result result: - '447' Example of a complete playbook for testing - hosts: localhost vars: prs: - Date Merged: 6/1/2023 3:46:53 PM PR ID: '470' - Date Merged: 5/30/2023 2:44:25 PM PR ID: '447' result1: "{{ prs|selectattr('Date Merged', 'match', '5')| map(attribute='PR ID') }}" format: '%m/%d/%Y %I:%M:%S %p' months: "{{ prs|map(attribute='Date Merged')| map('to_datetime', format)| map(attribute='month')| map('community.general.dict_kv', 'month') }}" prs_months: "{{ prs|zip(months)|map('combine') }}" result2: "{{ prs_months|selectattr('month', 'eq', 5)| map(attribute='PR ID') }}" tasks: - debug: var: result1 - debug: var: months - debug: var: prs_months - debug: var: result2
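For what it's worth, the to_datetime filter is, as far as I know, a thin wrapper around Python's datetime.strptime, so the '%m/%d/%Y %I:%M:%S %p' format string can be sanity-checked outside Ansible. A minimal plain-Python sketch of the same "merged in May" selection, using the data above:

# Plain-Python equivalent of the Ansible filtering above; handy for verifying
# the '%m/%d/%Y %I:%M:%S %p' format string against the real data.
from datetime import datetime

prs = [
    {"Date Merged": "6/1/2023 3:46:53 PM", "PR ID": "470"},
    {"Date Merged": "5/30/2023 2:44:25 PM", "PR ID": "447"},
]
fmt = "%m/%d/%Y %I:%M:%S %p"

may_prs = [pr["PR ID"] for pr in prs
           if datetime.strptime(pr["Date Merged"], fmt).month == 5]
print(may_prs)  # ['447']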
76389963
76392419
I created a memory dump of an application with procdump -ma abc.exe. The application accesses various files. I run !handle 0 f FILE and get over 100 file handles. When I have a specific handle address, I run the following command !handle 000000000000161c f which results in: 0:000> !handle 000000000000161c f Handle 000000000000161c Type File Attributes 0 GrantedAccess 0x12019f: ReadControl,Synch Read/List,Write/Add,Append/SubDir/CreatePipe,ReadEA,WriteEA,ReadAttr,WriteAttr HandleCount 2 PointerCount 65479 No object specific information available Is there a way to retrieve the actual file path? Something like How get file path by handle in windbg? seems to work only in kernel-mode debugging.
WinDbg | Application memory full dump - Show file path of file handle
For post-mortem debugging (crash dump analysis), there's no way, except if you have a kernel dump (I can't tell you how to do that then). Windows will close handles of a process that terminated. !handle combined with !handleex There is handleex on Github. Combine !handle and !handleex to get nice information. .foreach /pS 1 /ps 1 (file {!handle 0 4 FILE}) { .echo Handle file; !handleex file; .echo } Example output 0:007> .foreach /pS 1 /ps 1 (file {!handle 0 4 FILE}) { .echo Handle file; !handleex file; .echo } Handle 4c Object Type: File Handle Name: \Device\HarddiskVolume3\Users\T Handle 8c Object Type: File Handle Name: \Device\HarddiskVolume3\Windows\WinSxS\amd64_microsoft.windows.common-controls_6595b64144ccf1df_6.0.19041.1110_none_60b5254171f9507e Handle b4 Object Type: File Handle Name: \Device\HarddiskVolume3\Windows\System32\en-US\notepad.exe.mui ... !handle combined with SysInternals Handle You can combine the !handle WinDbg command and the SysInternals Handle tool. Example notepad debugging session: :018> !handle 0 1 FILE Handle 4c Type File Handle 8c Type File Handle b8 Type File Handle 134 Type File ... Combined with the information from the console: C:\...>handle -p notepad.exe | findstr 8C: 8C: File C:\Windows\WinSxS\amd64_microsoft.windows.common-controls_6595b64144ccf1df_6.0.19041.1110_none_60b5254171f9507e C:\...>handle -p notepad.exe | findstr B8: B8: File C:\Windows\System32\en-US\notepad.exe.mui Automating it doesn't really work well: 0:007> .printf "%d\n",$tpid 2168 0:007> .foreach /pS 1 /ps 1 (file {!handle 0 4 FILE}) { .shell C:\handle.exe -p 2168 | findstr file: } The process seems to be too slow and require too much user input.
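If you need to resolve many handles at once, another option along the same lines is to capture the handle.exe output a single time and build a lookup table in a small script, instead of a .shell round-trip per handle. Like the manual combination above, this only works while the process is still running. A rough Python sketch, assuming output lines shaped like the "8C: File C:\..." lines shown above (other handle.exe versions may print extra columns such as access flags, so the pattern may need adjusting):

# Sketch: run SysInternals handle.exe once against a live process and build a
# dict mapping handle values to file paths, so handles listed by !handle can
# be looked up without shelling out per handle.
import re
import subprocess

def file_handles(pid: int) -> dict:
    out = subprocess.run(["handle.exe", "-p", str(pid)],
                         capture_output=True, text=True, check=True).stdout
    table = {}
    for line in out.splitlines():
        # e.g. "   8C: File          C:\Windows\System32\en-US\notepad.exe.mui"
        m = re.match(r"\s*([0-9A-Fa-f]+):\s+File\s+(?:\(\S+\)\s+)?(.+)$", line)
        if m:
            table[m.group(1).upper()] = m.group(2).strip()
    return table

handles = file_handles(2168)            # PID from .printf "%d\n",$tpid
print(handles.get("B8", "not found"))   # handle value from !handle 0 1 FILE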
76389602
76392485
I'm using a library that uses SKTypeface.FromFamilyName internally to render text on the screen. However, as I found out, if the text to display is Japanese, Korean or Chinese, it just prints squares. I tried to add a custom font to my project but I was not able to make SKTypeface.FromFamilyName return anything but NULL with custom fonts. As I have no way to change SKTypeface.FromFamilyName to something else (at least as far as I know, because it's in a private method of a static class - https://github.com/Mapsui/Mapsui/blob/5008d3ab8b0453c27cb487fe6ad3fac87435abbe/Mapsui.Rendering.Skia/LabelRenderer.cs#L277), is there any way I can make it return a font for each language (or one per language) that works with these?
How to get SKTypeface.FromFamilyName to return a font for Japanese, Korean, Chinese (Android + iOS, Xamarin.Forms)
Alright, I found a solution for this. This seems to work for me: string fontFamily; switch (Thread.CurrentThread.CurrentUICulture.TwoLetterISOLanguageName.ToLower()) { case "ja": fontFamily = SKFontManager.Default.MatchCharacter('あ').FamilyName; break; case "ko": fontFamily = SKFontManager.Default.MatchCharacter('매').FamilyName; break; case "zh": fontFamily = Thread.CurrentThread.CurrentUICulture.IetfLanguageTag.ToLower() switch { "zh-cn" => SKFontManager.Default.MatchCharacter('实').FamilyName, "zh-tw" => SKFontManager.Default.MatchCharacter('實').FamilyName, _ => null }; break; default: fontFamily = null; break; }
76391957
76393265
I have the following code where the useSearchResults function (it's a React hook, but it doesn't matter) initializes the state based on the argument. config.state can be anything defined in SearchResultsState or a function that returns anything defined in SearchResultsState. The state returned by useSearchResults is defined as Partial<SearchResultsState> because, initially could be anything in that state type. I need state to be a partial SearchResultsState but also whatever type config.state has (an object or the object returned by a function or undefined). I've trying and trying around this but couldn't find a solution. type State = Record<string, any>; interface IWidget { State: State; } interface SearchResultsState extends State { a: number; b: string; } interface SearchResultsWidget extends IWidget { State: SearchResultsState; } export type StateInitializerFunction<W extends IWidget> = () => Partial<W['State']>; export type StateInitializer<W extends IWidget> = StateInitializerFunction<W> | Partial<W['State']>; export type WidgetInitializer<W extends IWidget> = { state?: StateInitializer<W>; }; export type WidgetWrapperResult< W extends IWidget, > = { state: Partial<W['State']>; }; const useSearchResults = ( config: WidgetInitializer<SearchResultsWidget> = {}, ): WidgetWrapperResult<SearchResultsWidget> => { const state = typeof config.state === 'function' ? config.state() : config.state || {}; return { state }; }; const { state: { a, b } } = useSearchResults({ state: { a: 1 } }); // or const { state: { a, b } } = useSearchResults({ state: () => ({ a: 1 }) }); const test = (a: number) => console.log(a); // error test(a); // a is number | undefined and I'm looking to be just number because a is number in state You can play with it here
Function return type depending on argument type
If you want a function's return type to depend on its argument type then you either need to overload it with multiple call signatures, or generic in some number of type parameters. Overloads only work if you have a relatively small number of ways you want to call the function, while generics are more appropriate to represent an arbitrary relationship between input and output. Overloads might work for this use case, if you only want to support the four possible cases for a and b being present or possibly absent, but it's tedious: function useSearchResults( config: { state?: { a: number, b: string } | (() => { a: number, b: string }) } ): { state: { a: number, b: string } }; function useSearchResults( config: { state?: { a: number, b?: string } | (() => { a: number, b?: string }) } ): { state: { a: number, b?: string } }; function useSearchResults( config: { state?: { a?: number, b: string } | (() => { a?: number, b: string }) } ): { state: { a?: number, b: string } }; function useSearchResults( config: { state?: { a?: number, b?: string } | (() => { a?: number, b?: string }) } ): { state: { a?: number, b?: string } }; function useSearchResults( config: WidgetInitializer<SearchResultsWidget> = {}, ): WidgetWrapperResult<SearchResultsWidget> { const state = typeof config.state === 'function' ? config.state() : config.state || {}; return { state }; }; const { state: { a, b } } = useSearchResults({ state: { a: 1 } }); /* (property) state: { a: number; b?: string | undefined; } */ useSearchResults({ state: () => ({ a: 1 }) }).state.a That can quickly get out of hand, though, so you might need to use generics instead. Once you use generics you need to make the generic type arguments inferrable from the inputs, but you can't do that with indexed access types like W["state"] (there was a pull request at ms/TS#20126 which would have made this possible, but it's not part of the language now. A new PR at ms/TS#53017 might be merged eventually, but for now it's not.) So you'll need to refactor significantly to use that type directly. Possibly as shown here: type StateInitializerFunction<S extends State> = () => S; type StateInitializer<S extends State> = StateInitializerFunction<S> | S; type WidgetInitializer<S extends State> = { state?: StateInitializer<S>; }; type WidgetWrapperResult<S extends State> = { state: S; }; const useSearchResults = <S extends Partial<SearchResultsState>>( config: WidgetInitializer<S> = {}, ): WidgetWrapperResult<S & Partial<SearchResultsState>> => { const state = typeof config.state === 'function' ? config.state() : config.state || {}; return { state } as any; }; Here what I've done is replace mentions of W with just its state property type S. I've also replaced the Partial stuff in the definitions and moved it to the function. Now the function useSearchResults accepts a WidgetInitializer<S> for some S that extends Partial<SearchResultsState>, and it returns a WidgetWrapperResults<S & Partial<SearchResultsState>>. That intersection just helps make sure the compiler is still aware of b's existence when you don't pass it. Let's test it: const { state: { a, b } } = useSearchResults({ state: { a: 1 } }); /* (property) state: { a: number; } & Partial<SearchResultsState> */ const test = (a: number) => console.log(a); useSearchResults({ state: () => ({ a: 1 }) }).state.a test(a); Looks good! Playground link to code
76389562
76392514
I have two columns where I want to highlight differences. I want to see if the letter in Column A exists in Column B - it should be case-sensitive, because there are C and c in column A. If there is a match, I want the row to preferably turn green - otherwise an OK / NOT OK in column C. Thank you in advance.
Excel 2016 - compare two columns for match
Copy this Sub to the Worksheet code pane. To do this, open the Developer tab and click Visual Basic. In the left pane, select the worksheet where your data is, and double-click it. Insert the code. Private Sub Worksheet_Change(ByVal Target As Excel.Range) If Target.Column = 1 And InStr(1, Target.Offset(0, 1).Value, Target.Value) > 0 Then Target.EntireRow.Interior.Color = vbGreen Else Target.EntireRow.Interior.ColorIndex = xlColorIndexNone End If If Target.Column = 2 Then If InStr(1, Target.Value, Target.Offset(0, -1)) > 0 Then Target.EntireRow.Interior.Color = vbGreen Else Target.EntireRow.Interior.ColorIndex = xlColorIndexNone End If End If End Sub If the content changes in column A or B, the respective row's color will change.
76382415
76393313
Covariance not estimated in SciPy's Curvefit Here's my dataset: frequency (Hz) brightness (ergs/s/cm^2/sr/Hz) brightness (J/s/m^2/sr/Hz) float64 float64 float64 34473577711.372055 7.029471536390586e-16 7.029471536390586e-19 42896956937.69582 1.0253178228238486e-15 1.0253178228238486e-18 51322332225.44733 1.3544045476166584e-15 1.3544045476166584e-18 60344529880.18272 1.6902073280174815e-15 1.6902073280174815e-18 68767909106.5062 2.0125779972022745e-15 2.0125779972022743e-18 77780126454.10146 2.3148004995630144e-15 2.3148004995630145e-18 ... ... ... 489996752265.52826 3.201319839821188e-16 3.201319839821188e-19 506039097962.6759 2.5968748350997043e-16 2.596874835099704e-19 523273092332.3638 2.0595903864583913e-16 2.0595903864583912e-19 539918248580.7806 1.7237876060575648e-16 1.7237876060575649e-19 557158231134.7507 1.3879848256567381e-16 1.3879848256567383e-19 573803387383.1646 1.0521820452559118e-16 1.0521820452559118e-19 591049358121.42 9.178609330955852e-17 9.178609330955852e-20 I tried to use CurveFit to fit this to Planck's Radiation Law: import numpy as np from scipy.optimize import curve_fit h=6.626*10e-34 c=3*10e8 k=1.38*10e-23 const1=2*h/(c**2) const2=h/k def planck(x,v): return const1*(v**3)*(1/((np.exp(const2*v/x))-1)) popt,pcov= curve_fit(planck, cmb['frequency (Hz)'],cmb['brightness (J/s/m^2/sr/Hz)']) print(popt, pcov) Warning: /tmp/ipykernel_2500/4072287013.py:11: RuntimeWarning: divide by zero encountered in divide return const1*(v**3)*(1/((np.exp((const2)*v/x))-1)) I get popt=1 and pcov=nan. Now the exponential term in the function differs by several orders of magnitude. And some of the values don't permit to approximate the law mathematically. I tried using the logarithmic form of the law but that doesn't work either. How can I overcome this problem?
Why does my Python code using scipy.curve_fit() for Planck's Radiation Law produce 'popt=1' and 'pcov=inf' errors?
A lot of problems here, including that your variables were swapped, you're needlessly redefining physical constants, and your expression was highly numerically unstable. You need to use np.expm1 instead: import matplotlib.pyplot as plt import numpy as np from scipy.constants import h, c, k from scipy.optimize import curve_fit freq, brightness_erg, brightness_j = np.array(( (34473577711.372055, 7.0294715363905860e-16, 7.0294715363905860e-19), (42896956937.695820, 1.0253178228238486e-15, 1.0253178228238486e-18), (51322332225.447330, 1.3544045476166584e-15, 1.3544045476166584e-18), (60344529880.182720, 1.6902073280174815e-15, 1.6902073280174815e-18), (68767909106.506200, 2.0125779972022745e-15, 2.0125779972022743e-18), (77780126454.101460, 2.3148004995630144e-15, 2.3148004995630145e-18), (489996752265.52826, 3.2013198398211880e-16, 3.2013198398211880e-19), (506039097962.67590, 2.5968748350997043e-16, 2.5968748350997040e-19), (523273092332.36380, 2.0595903864583913e-16, 2.0595903864583912e-19), (539918248580.78060, 1.7237876060575648e-16, 1.7237876060575649e-19), (557158231134.75070, 1.3879848256567381e-16, 1.3879848256567383e-19), (573803387383.16460, 1.0521820452559118e-16, 1.0521820452559118e-19), (591049358121.42000, 9.1786093309558520e-17, 9.1786093309558520e-20), )).T def planck(v: np.ndarray, T: float) -> np.ndarray: return 2*h/c/c * v**3 / np.expm1(h*v/k/T) guess = 2.5, (T,), _ = curve_fit( f=planck, xdata=freq, ydata=brightness_j, p0=guess, method='lm', # bounds=(0.1, np.inf), ) print('T =', T) fig, ax = plt.subplots() v = np.linspace(freq.min(), freq.max(), 500) ax.scatter(freq, brightness_j, label='data') ax.plot(v, planck(v, *guess), label='guess') ax.plot(v, planck(v, T), label='fit') ax.legend() plt.show()
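To make the numerical point concrete: because the data and the fit parameter were swapped, the exponent const2*v/x in the original code is of order 1e-21; exp() of such a tiny number rounds to exactly 1.0 in double precision, and subtracting 1 gives the zero behind the "divide by zero" warning. np.expm1 computes exp(x) - 1 without that cancellation. (Note also that 6.626*10e-34 is 6.626e-33, ten times Planck's constant, which is one more reason to take h, c, k from scipy.constants.) A quick check:

# Why expm1 matters: for tiny x, exp(x) rounds to exactly 1.0 in double
# precision, so exp(x) - 1 is 0.0 and 1/(exp(x) - 1) blows up.
import numpy as np

x = 1e-20
print(np.exp(x) - 1.0)  # 0.0   -> the source of the RuntimeWarning
print(np.expm1(x))      # 1e-20 -> keeps the small value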
76387822
76394313
I have a snowflake table that has VARCHAR column containing input from api. I have 300+ columns to be flattened and one of them has input like below for a particular row. I need to parse the values from the below (please refer output) and store it as a single row for a particular input row. The number of elements inside the input list might be 1 or more than 1 but the order of the keys present remain same. Input: [ {"active":false,"urls":{"6x6":"https://url.url.com/secure/?size=xsmall&ownerId=B12345&id=123","4x4":"https://url.url.com/secure/?size=small&ownerId=B12345&id=123","22x22":"https://url.url.com/secure/?size=medium&ownerId=B12345&id=123","44x44":"https://url.url.com/secure/?ownerId=B12345&id=123"},"displayName":"Name1,Lname1","emailAddress":"","key":"B12345","name":"B12345","self":"https://url.url.com/rest/api/2/user?username=B12345","timeZone":"India/Mumbai"}, {"active":true,"urls":{"6x6":"https://url.url.com/secure/?size=xsmall&ownerId=A12345&id=456","4x4":"https://url.url.com/secure/?size=small&ownerId=A12345&id=456","22x22":"https://url.url.com/secure/?size=medium&ownerId=A12345&id=456","44x44":"https://url.url.com/secure/?ownerId=A12345&id=456"},"displayName":"Name1,Lname2.","emailAddress":"[email protected]","key":"A12345","name":"A12345","self":"https://url.url.com/rest/api/2/user?username=A12345","timeZone":"India/Mumbai"} {"active":true,"urls":{"6x6":"https://url.url.com/secure/?size=xsmall&ownerId=C12345&id=456","4x4":"https://url.url.com/secure/?size=small&ownerId=C12345&id=456","22x22":"https://url.url.com/secure/?size=medium&ownerId=C12345&id=456","44x44":"https://url.url.com/secure/?ownerId=C12345&id=456"},"displayName":"Name1,Lname3.","emailAddress":"[email protected]","key":"C12345","name":"C12345","self":"https://url.url.com/rest/api/2/user?username=C12345","timeZone":"India/Mumbai"} ] Output I am looking for is: list of values of the keys (nested) [ [false, [https://url.url.com/secure/?size=xsmall&ownerId=B12345&id=123, https://url.url.com/secure/?size=small&ownerId=B12345&id=123, https://url.url.com/secure/?size=medium&ownerId=B12345&id=123, https://url.url.com/secure/?ownerId=B12345&id=123], Name1, Lname1, , B12345, B12345, https://url.url.com/rest/api/2/user?username=B12345, India/Mumbai], [true, [https://url.url.com/secure/?size=xsmall&ownerId=A12345&id=456, https://url.url.com/secure/?size=small&ownerId=A12345&id=456, https://url.url.com/secure/?size=medium&ownerId=A12345&id=456, https://url.url.com/secure/?ownerId=A12345&id=456], Name1, Lname2., [email protected], A12345, A12345,https://url.url.com/rest/api/2/user?username=B12345, India/Mumbai] , [true, [https://url.url.com/secure/?size=xsmall&ownerId=C12345&id=456, https://url.url.com/secure/?size=small&ownerId=C12345&id=456, https://url.url.com/secure/?size=medium&ownerId=C12345&id=456, https://url.url.com/secure/?ownerId=C12345&id=456], Name1, Lname3., [email protected], C12345, C12345,https://url.url.com/rest/api/2/user?username=B12345, India/Mumbai] ] Query that I tried: SELECT CONCAT('[[', REPLACE(GET(flattened.value, 'active'), '"', '') , ', ', '[', listagg(CASE WHEN flattened_nested.value LIKE '%https%' THEN 'https:' || REPLACE(SPLIT_PART(flattened_nested.value, ':', 2), '"', '') ELSE NULL END,',')within group (order by null) ,']', ', ', --<== this causes issue while trying to parse urls values for each elements and to store them. 
REPLACE(GET(flattened.value, 'displayName'), '"', '') , ', ', REPLACE(GET(flattened.value, 'emailAddress'), '"', '') , ', ', REPLACE(GET(flattened.value, 'key'), '"', '') , ', ', REPLACE(GET(flattened.value, 'name'), '"', '') , ', ', REPLACE(GET(flattened.value, 'self'), '"', '') , ', ', REPLACE(GET(flattened.value, 'timeZone'), '"', ''), ']]') as output_column FROM snowflake_table SRC ,LATERAL FLATTEN(input=>SRC.json_values:"fields":"field_12345") AS flattened --<== the field that contains the above input. ,LATERAL FLATTEN(input=>flattened.value:urls) AS flattened_nested group by flattened.value ; The below output that I get is aggregating all the 3 urls value for a particular input and stores it as comma separated. but, it doesn't take the other values like displayname, key,name, self etc for all 3 elements. It gives me only the first occurrence. The other issue with this approach is if I need to include other 300+ columns, I have place everything in group by clause. [ [false, [https://url.url.com/secure/?size=xsmall&ownerId=B12345&id=123, https://url.url.com/secure/?size=small&ownerId=B12345&id=123, https://url.url.com/secure/?size=medium&ownerId=B12345&id=123, https://url.url.com/secure/?ownerId=B12345&id=123,https://url.url.com/secure/?size=xsmall&ownerId=A12345&id=456, https://url.url.com/secure/?size=small&ownerId=A12345&id=456, https://url.url.com/secure/?size=medium&ownerId=A12345&id=456, https://url.url.com/secure/?ownerId=A12345&id=456,https://url.url.com/secure/?size=xsmall&ownerId=C12345&id=456, https://url.url.com/secure/?size=small&ownerId=C12345&id=456, https://url.url.com/secure/?size=medium&ownerId=C12345&id=456, https://url.url.com/secure/?ownerId=C12345&id=456], Name1, Lname1, , B12345, B12345, https://url.url.com/rest/api/2/user?username=B12345, India/Mumbai] ] Can anyone please let me know how to get the desired output with any different approach irrespective of the number of input elements inside the list?
Snowflake Flatten and parsing values
This won't produce the nested JSON, but it should, I believe, pick out the values you want, and from there you should be able to form the JSON from it: SELECT t.value:active::BOOLEAN AS active , ARRAY_AGG(t.value:urls) WITHIN GROUP (ORDER BY seq) AS urls , SPLIT_PART(t.value:displayName::STRING, ',', 1) AS firstName , SPLIT_PART(t.value:displayName::STRING, ',', 2) AS lastName , t.value:emailAddress::STRING AS emailAddress , t.value:key::STRING AS key , t.value:name::STRING AS name , t.value:self::STRING AS self , t.value:timeZone::STRING AS timeZone FROM snowflake_table , LATERAL FLATTEN(input => json_values) t GROUP BY active, firstName, lastName, emailAddress, key, name, self, timeZone; An alternative syntax via JSON_PARSE is also possible, I feel: SELECT json_values:active::BOOLEAN AS active , ARRAY_AGG(json_values:urls) AS urls , json_values:displayName::STRING AS displayName , json_values:emailAddress::STRING AS emailAddress , json_values:key::STRING AS key , json_values:name::STRING AS name , json_values:self::STRING AS self , json_values:timeZone::STRING AS timeZone FROM snowflake_table , LATERAL JSON_PARSE(json_values) AS json_values GROUP BY active, displayName, emailAddress, key, name, self, timeZone; You may want to consider both for ease of application of the final step and relative performance. It does feel like using JSON_VALUES is easier to understand, but the difference is very marginal.
76394361
76394377
For instance, I'm writing a Mongoose utility method, and I want to return a value after the async methods resolve. This is a highly simplified example. const testConnect = () => { let msg; mongoose.connect(mongoServer.getUri()) .then(() => { msg ="Connection success!"; }) .catch((err) => { msg = "Connection failure! Error: " + err; }); return Promise.resolve(msg); }; My test setup utilizes this method in a manner like this. beforeAll(() => { iMDT.testConnect() .then((result) => { console.log(result); }) .catch((err) => { console.log(err); }); }); Is there a simpler, more organic way to return that value as a resolved promise - without using the prototype Promise? It feels kludgy to me.
A more organic way to resolve and return a resolved promise from a JavaScript/Node function than using Promise.resolve?
What you're attempting to do won't work because your asynchronous operation is non-blocking, so you will execute return Promise.resolve(msg); before there's even a value in msg. It's almost always a warning sign when you're assigning higher-scoped variables inside a .then() or .catch() handler and then trying to use those variables at the top scope. That won't work because of timing issues (attempting to use the variable before its value is set). Instead, you can just return the promise directly. The general advice is to use the promises you already have and not create new ones unnecessarily. Return the one you already have like this: const testConnect = () => { return mongoose.connect(mongoServer.getUri()).then(() => { // set the final resolved value return "Connection success!"; }).catch((err) => { // set the final rejected reason object throw new Error("Connection failure! Error:" + err.message, { cause: err }); }); }; Note how the failure code path returns a rejected promise that contains a human-readable message and also contains the original error object as the cause. This is the usual way to use promises, not to hide the rejection by returning a different resolved value. This allows the caller to more easily know if the operation succeeded or failed without having to compare to specific resolved values. And this is also what your beforeAll() code block is expecting. It is expecting a rejected promise if the operation fails, which is not what your first code block was doing.
76384965
76392645
What's the behavior of git push --force when no upstream branch exists? Will I get something like fatal: The current branch branch_name has no upstream branch, as would happen with a normal push, or would the upstream branch be "forcefully" created?
Behavior of `git push --force` when no upstream branch exists
Empirically (with git 2.40.0), --force does not change the behaviour of git push when no upstream is set and no push.default or push.autoSetupRemote config is set: the push still fails, and no upstream branch is "forcefully" created. $ git checkout -b dev/test Switched to a new branch 'dev/test' $ git push fatal: The current branch dev/test has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream origin dev/test To have this happen automatically for branches without a tracking upstream, see 'push.autoSetupRemote' in 'git help config'. $ git push --force fatal: The current branch dev/test has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream origin dev/test To have this happen automatically for branches without a tracking upstream, see 'push.autoSetupRemote' in 'git help config'.